id (string, 3–9 chars) | source (string, 1 class) | version (string, 1 class) | text (string, 1.54k–298k chars) | added (date, 1993-11-25 05:05:38 to 2024-09-20 15:30:25) | created (date, 0001-01-01 00:00:00 to 2024-07-31 00:00:00) | metadata (dict)
|---|---|---|---|---|---|---|
49562538
|
pes2o/s2orc
|
v3-fos-license
|
Pediatric brain arteriovenous malformation unfavorable hemorrhage risk: extrapolation to a morphologic model
Background Children with brain arteriovenous malformations (bAVM) are at risk of life-threatening hemorrhage contributing to unfavorable neurological deficits early in their lives. Our aim was to propose a classification system predicting unfavorable hemorrhage in children with bAVM. Methods We identified all consecutive children admitted to our institution for bAVMs between July 2009 and August 2015. A hemorrhage event was defined as unfavorable when it was life-threatening (requiring emergent invasive intervention) or when the post-hemorrhage mRS was > 3. The effects of demographic characteristics and bAVM morphology on unfavorable hemorrhage risk were studied using univariate and multivariable regression analyses, followed by discrimination analysis using the area under the receiver operating characteristic curve (AUROC) and 5-fold cross validation. Results A total of 162 pediatric bAVM cases were identified; unfavorable hemorrhage occurred in 49 (30.2%). Periventricular nidus location (HR, 4.46; 95% CI, 1.93-10.31; P < 0.001), non-temporal lobe location (HR, 2.72; 95% CI, 1.20-6.15; P = 0.02) and a long pial draining vein (HR, 3.26; 95% CI, 1.53-6.97; P = 0.002) were independent predictors of earlier unfavorable hemorrhage in pediatric bAVMs. We further classified the bAVMs into three types: Type I, periventricular and non-temporal location (Ia, deep location; Ib, superficial location); Type II, long pial draining vein with non-periventricular or temporal location; Type III, non-periventricular or temporal location without a long draining vein. The predictive accuracy of this classification for unfavorable hemorrhage reached an AUROC of 0.77 (95% CI 0.69-0.85) and remained stable after cross validation. Conclusion A morphologic model based on nidus location and venous drainage might predict unfavorable hemorrhage in children with bAVM.
Background
Children with brain arteriovenous malformations (bAVM) more commonly present with hemorrhage [1][2][3]. bAVM rupture is associated with a 5-10% chance of death and a 30-50% chance of permanent or disabling neurological deficit [4]. Therefore, what children's families and health care professionals are most concerned about is the risk of life-threatening and disabling hemorrhage. A recent study of pediatric bAVM patients indicated that long-term poor outcome after treatment was significantly correlated with the modified Rankin Scale (mRS) score after hemorrhage, rather than with hemorrhagic presentation itself [5]. Previous reports revealed that 20-26% of children with bAVM experienced disabling neurological deficits after hemorrhage [5][6][7][8]. Given that prognosis correlates with post-hemorrhage neurological function, and given the catastrophic impact of disabling neurological deficit in children, it is important to identify risk factors that can be used to predict unfavorable hemorrhage.
This study aimed to: (1) assess the association of morphologic features with the risk of unfavorable hemorrhage in children with bAVMs; and (2) establish risk stratification models predictive of unfavorable hemorrhage in children with bAVM.
Methods
The study protocol was approved by the Institutional Review Board of our institution. Written informed consent was obtained from all participants and their guardians at admission.
Patients and study design
The AVM database at our institution has been described previously; it is a prospectively maintained database collecting demographic, clinical, and neuroradiological data for all patients with a confirmed angiographic or histological diagnosis of intracranial AVM treated at our institute [7,9]. This database was reviewed to identify all consecutive children with bAVMs (≤18 years old at the first angiographic diagnosis of bAVM) admitted between July 2009 and August 2015.
Neurological function was assessed using the mRS. Post-hemorrhage outcome was defined as unfavorable if: (1) the mRS score was > 3, or (2) the patient underwent emergent invasive intervention. The immediate post-hemorrhage mRS was recorded within 24 h after hemorrhage. A clinician who was not directly involved in the care of these patients performed all scale assessments. Emergency intervention for ICH evacuation or drainage was considered in patients with GCS ≤ 8, or with ICH causing midline shift or hydrocephalus.
Neuroradiological review
MR, CT and angiographic images available for each patient were evaluated by consensus between two experienced neuroradiologists (J.M. and C.W.) who were blinded to the clinical information. A structured list of angiographic and MR features (location, size, venous drainage and arterial supply) was retrospectively scored using a protocol that generally conformed to the consensus reporting terminology recommended by a Joint Writing Group for bAVM research [10]. bAVM location was further dichotomized into deep (basal ganglia, thalamus, cerebellum, and corpus callosum) and superficial (all other locations). A posterior fossa location was defined as brainstem, cerebellum, or both. Eloquent brain was defined as previously reported in the Spetzler-Martin grading system. bAVMs were also classified as having a periventricular location if the nidus (with contrast enhancement or flow void) contacted the ependymal lining of the ventricle on contrast-enhanced T1- and T2-weighted images, as we previously described [7, 11-13]. Venous drainage was dichotomized into exclusively deep venous drainage and non-exclusively deep venous drainage (superficial-only, or superficial and deep). A long pial draining vein was defined as a superficial draining vein longer than 3 cm [14]. Associated aneurysms included only aneurysms related to shunt flow; for statistical analysis, this variable was dichotomized as absent or present.
Statistical analysis
Data were analyzed using IBM SPSS Statistics Version 22.0 (IBM, Armonk, New York) and MedCalc Version 18.5 (MedCalc Software, Ostend). Statistical significance was set at P < .05. For the neuroradiological and clinical data, patients with and without unfavorable hemorrhage were compared using descriptive statistics, including t-tests for continuous variables and χ² tests for categorical variables.
We examined the association of potential predictors with the time from birth to the unfavorable hemorrhagic event, censoring patients at the time of bAVM treatment or last follow-up. Kaplan-Meier survival curves and log-rank tests were used. A multivariable Cox regression model was established including the potential predictors that have been reported to be predictive of hemorrhage in previous studies, as listed in Table 1. All variables were entered, and adjusted relative risks were calculated for each of them by a stepwise procedure (adjusted model with main effects).
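For concreteness, below is a minimal sketch of such a censored Cox fit, using the Python lifelines package on synthetic data; the column names and values are illustrative assumptions, not the authors' dataset.

```python
# A minimal sketch of the censored time-to-event analysis described above,
# using lifelines on synthetic data; columns are illustrative placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 162  # cohort size from the study
df = pd.DataFrame({
    "periventricular": rng.integers(0, 2, n),
    "non_temporal": rng.integers(0, 2, n),
    "long_pial_vein": rng.integers(0, 2, n),
    "duration": rng.exponential(15.0, n),  # years from birth to event or censoring
    "event": rng.integers(0, 2, n),        # 1 = unfavorable hemorrhage, 0 = censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()  # hazard ratios with 95% CIs, analogous to Table 1
```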
Based on the multivariable analysis, we proposed three stratification models to predict unfavorable hemorrhage: (1) the full regression variable model; (2) a concise model including the significant risk factors from the multivariable analysis; and (3) an AVM classification based on location and venous drainage type: Type I, periventricular and non-temporal location (Ia, deep location; Ib, superficial location); Type II, long pial draining vein with non-periventricular or temporal location; Type III, non-periventricular or temporal location without a long draining vein (Fig. 1). Receiver operating characteristic (ROC) analyses were performed, and the area under the ROC curve (AUROC) was compared using pairwise comparison of ROC curves, to assess the discrimination of the different models in predicting post-hemorrhage outcome. Five-fold cross validation was further used to evaluate whether the predictive accuracy of the models was overly optimistic: the data were divided into 5 equal subgroups; the model was then fitted 5 times to different combinations of 4 of the 5 subgroups and used to predict post-hemorrhage outcome in the remaining one-fifth of the data.
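The three-type rule defined above can be restated compactly; the following sketch (Python, with hypothetical boolean features) mirrors that definition.

```python
# A sketch of the proposed morphologic classification, restating the rule
# defined above; the feature names are hypothetical, not the authors' code.
def classify_bavm(periventricular: bool, temporal: bool,
                  deep: bool, long_pial_draining_vein: bool) -> str:
    """Return the morphologic type of a pediatric bAVM: 'Ia', 'Ib', 'II' or 'III'."""
    if periventricular and not temporal:
        return "Ia" if deep else "Ib"   # Type I: Ia deep, Ib superficial
    if long_pial_draining_vein:
        return "II"                     # non-periventricular or temporal, long vein
    return "III"                        # non-periventricular or temporal, no long vein

# Example: a thalamic nidus touching the ventricle -> Type Ia
print(classify_bavm(periventricular=True, temporal=False,
                    deep=True, long_pial_draining_vein=False))
```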
Results
A total of 162 pediatric patients with bAVM were identified among 997 patients with a confirmed angiographic or histological diagnosis of intracranial AVM. None of the patients had familial bAVM or hereditary hemorrhagic telangiectasia. Baseline characteristics are shown in Table 2. Unfavorable hemorrhage occurred in 49 of 162 patients (30.2%). The majority of the patients experienced …
Fig. 1 (caption): Subtype Ia: corpus callosum, basal ganglia or thalamus adjacent to the ventricle; Subtype Ib: frontal, parietal or occipital lobe or cerebellum adjacent to the ventricle; Type II (c): long pial draining vein with a non-periventricular (frontal, parietal or occipital lobe, or deep) or temporal lobe location; Type III (d): non-periventricular or temporal lobe location without a long draining vein.
Association of morphologic features with unfavorable hemorrhage-free survival
Assuming that the bAVM had been present since birth, there were 2025 patient-years of follow-up for this population of 162 patients (mean ± SD, 12.5 ± 4.6 years). A total of 49 unfavorable hemorrhages occurred in children with bAVM, yielding an overall annual rate of unfavorable hemorrhage of 2.4% for pediatric patients.
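The quoted rate follows directly from events per patient-year:

```latex
162 \times 12.5 \approx 2025\ \text{patient-years}, \qquad
\frac{49\ \text{events}}{2025\ \text{patient-years}} \approx 0.024 = 2.4\%\ \text{per year}.
```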
There was a significant difference in the time to unfavorable hemorrhage between patients with periventricular and non-periventricular bAVMs (log-rank, P = 0.002). Children with periventricular bAVMs presented unfavorable hemorrhage earlier, with a median unfavorable hemorrhage-free survival of 17.49 years (95% CI, 14.45-20.53) for periventricular bAVMs (Fig. 2). There was also a significant difference in the time to poor outcome before treatment between patients with temporal and non-temporal bAVMs (log-rank, P = 0.034).
Children with non-temporal bAVMs presented poor outcomes earlier, with a median poor outcome-free survival of 18.28 years (95% CI, 15.86-20.70) for non-temporal lobe bAVMs. The association with unfavorable hemorrhage-free survival was not significant for the presence of a long pial draining vein (log-rank, P = 0.079) or for exclusively deep venous drainage (log-rank, P = 0.214).
Morphologic model predicting unfavorable hemorrhage in pediatric bAVM
In the 49 pediatric patients with unfavorable hemorrhage, more than 85% had two or more risk factors: periventricular location, non-temporal location, and long pial draining vein. Unfavorable hemorrhage-free survival was similar in children with no or one risk factor. In contrast, children harboring two or more predictors were at higher risk for earlier unfavorable hemorrhage (log-rank, P < .001).
The estimated annual rate of unfavorable hemorrhage ranged from 0.7-1.0% per year for those with no or one risk factor to 5.7% per year in cases harboring all three predictors (Fig. 2). Notably, the majority of the patients with unfavorable outcome after hemorrhage (32/49; 65.3%) harbored bAVMs with a periventricular and non-temporal location, with or without a long pial draining vein (Type I). Further analysis of this subgroup revealed that 18 of 32 bAVMs (56.3%) were in a supratentorial location involving the corpus callosum, basal ganglia or thalamus adjacent to the ventricle (Type Ia), and 14 of 32 bAVMs (43.8%) were located in the frontal, parietal or occipital lobe or the cerebellum adjacent to the ventricle (Type Ib). Moreover, bAVMs in a non-periventricular location (frontal, parietal or occipital lobe, or deep location) or the temporal lobe with a long pial draining vein (Type II) also resulted in unfavorable hemorrhage (14/49; 28.6%) (Table 3). Type Ia bAVMs accounted for 18.5% of the cohort (30/162); 60% of them (18/30) had unfavorable hemorrhage during 367 patient-years of follow-up, yielding an annual unfavorable hemorrhage rate of 4.9%. Type Ib bAVMs accounted for 17.9% (29/162); during 280 patient-years of follow-up, unfavorable outcome occurred after hemorrhage in 48.3% of them (14/29), yielding an annual rate of 5.0%. Type II bAVMs accounted for 32.1% (52/162), with an unfavorable hemorrhage rate of 26.9% (14/52) during 667 patient-years of follow-up, yielding an annual rate of 2.1%. Overall, 93.9% of all unfavorable hemorrhages (46/49) occurred in bAVMs of these three types, and children with Type I lesions had unfavorable hemorrhage earlier than those with Type II lesions (log-rank, P < .001) (Fig. 2). The odds ratios for unfavorable hemorrhage of Type Ia and Type Ib bAVMs, compared with Type II bAVMs, were 2.37 (95% CI, 1.10-5.14) and 2.45 (95% CI, 1.08-5.53), respectively.
Fig. 2 (caption): Kaplan-Meier curves demonstrating differences in unfavorable hemorrhage-free survival between bAVMs with and without certain morphologic features. a There was a significant difference in the time to unfavorable hemorrhage between patients with periventricular and non-periventricular bAVMs (log-rank, P = .002). b There was also a significant difference in the time to unfavorable hemorrhage between patients with temporal and non-temporal bAVMs (log-rank, P = .03). c Children with bAVMs with a long pial draining vein tended to present unfavorable hemorrhage earlier than those without (log-rank, P = .08). d Unfavorable hemorrhage-free survival was similar in children with no or one risk factor; in contrast, children harboring two or more risk factors were at higher risk for early unfavorable hemorrhage (log-rank, P < .001). bAVMs were further classified into types with these features. e Children with Type Ia and Ib lesions presented with unfavorable hemorrhage earlier than those with Type II lesions (log-rank, P < .001).
Proposed grading scales predicting post-hemorrhage outcome were evaluated using: (1) the full variable regression model (10 variables); (2) the concise regression model including significant risk factors from the multivariable analysis (3 variables); and (3) the bAVM classification (combining the Type I subgroups). The AUROC, indicating the predictive accuracy of each model, was 0.76 (95% CI 0.67-0.84) for the full variable regression model, 0.72 (95% CI 0.63-0.81) for the concise regression model, and 0.77 (95% CI 0.69-0.85) for the bAVM type model (Fig. 3).
There was no significant difference in predictive accuracy between the bAVM classification model and the full variable regression model or the concise regression model (P = 0.87 and 0.08, respectively). The 5-fold cross validation showed similar estimates, with an AUROC of 0.70 (95% CI 0.54-0.85) for the full logistic regression model, 0.74 (95% CI 0.68-0.80) for the concise logistic regression model, and 0.77 (95% CI 0.71-0.82) for the bAVM classification model.
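A minimal sketch of such a 5-fold cross-validated AUROC estimate, using scikit-learn on placeholder data (not the study data), is shown below.

```python
# 5-fold cross-validated AUROC, as described above; X and y are random
# placeholders standing in for the morphologic predictors and outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(162, 3)).astype(float)  # e.g., 3 morphologic predictors
y = rng.integers(0, 2, size=162)                     # unfavorable hemorrhage (0/1)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(LogisticRegression(), X, y, cv=skf, scoring="roc_auc")
print(f"AUROC per fold: {np.round(aucs, 2)}, mean = {aucs.mean():.2f}")
```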
Discussion
Unfavorable hemorrhage in pediatric patients with bAVM
bAVMs are described as the underlying cause of the majority of childhood intracranial hemorrhages, and spontaneous ICH is generally considered the most devastating complication of bAVM, carrying a high burden of morbidity and mortality [1,15,16]. Although children with bAVMs are more likely to present with hemorrhage, a long-term study combining two bAVM cohorts found an annual subsequent hemorrhage rate of 2% in children, similar to the risk in adults [1]. Previous hospital- or population-based cohort studies revealed an overall case-fatality of 11-29% and a severe disability rate of 14-38% after bAVM hemorrhage in adults [16][17][18]. Recent studies of children with untreated bAVM revealed that severe neurological deficit occurred in 22% of all patients and in 30-42% of those with hemorrhage [5][6][7]. Therefore, bAVM rupture is not always unfavorable or life-threatening. Furthermore, the presence of hemorrhage did not worsen long-term neurological outcomes, whereas the children's neurological function after hemorrhage could influence the long-term outcomes after treatment [5]. Although many studies have aimed to identify morphologic features associated with the risk of subsequent bAVM hemorrhage, only a few have specifically assessed the risk of unfavorable hemorrhage in pediatric patients [5,[19][20][21].
Fig. 3 (caption): Receiver operating characteristic (ROC) analyses for unfavorable hemorrhage predictive models using all variables in the regression model (blue curve), the bAVM classification system (green curve), and the concise regression system including significant risk factors from the full model (red curve) (reference line shown in teal). The predictive accuracy of the bAVM classification system (ROC area 0.77) was not less than that of the full regression and concise regression models (ROC areas 0.76 and 0.72, respectively; pairwise comparison of ROC curves, P = 0.87 for the bAVM classification system versus the full regression model and P = 0.08 versus the concise regression model).
In our cohort, we defined a hemorrhage as unfavorable when it was life-threatening (requiring emergent invasive intervention) or caused a disabling neurological deficit with mRS > 3 within 24 h after hemorrhage. We observed unfavorable hemorrhage in 30% of all children and 50% of those with ruptured bAVM, figures relatively congruent with previous reports [5,6]. The annual rate of unfavorable hemorrhage in this pediatric cohort (2.5%) is higher than that quoted in another study of patients of all ages (1.4%) [22].
Predictive model for unfavorable hemorrhage risk in pediatric bAVM
Although previous studies identified a rupture history as the most significant risk factor for future hemorrhage [23], we hoped to find predictive models based on features independent of rupture history, which could be used to predict unfavorable outcome before AVM rupture. In this follow-up study, we established a morphologic model predictive of unfavorable hemorrhage risk in children with bAVM. bAVMs were categorized into three main types: Type I, with a periventricular and non-temporal location, with an unfavorable hemorrhage rate of 50-60% and an annual rate of unfavorable hemorrhage of about 5.0%; Type II, with a long pial draining vein and a non-periventricular (frontal, parietal or occipital lobe, or deep) or temporal lobe location, with an unfavorable hemorrhage rate of 26.9% and an annual rate of 2.1%; and Type III, with a non-periventricular or temporal lobe location without a long pial draining vein, with a very low rate of unfavorable hemorrhage.
The anatomic characteristics of the brain might explain this morphologic model. Unfavorable neurological deficit usually involves impairment of fiber tracts and deep nuclei. The non-temporal periventricular region is a deep-seated area containing vital nuclei, including the thalamus, basal ganglia and hypothalamus, and important fiber tracts such as the corticospinal tract and arcuate fasciculus. Hemorrhage in a non-temporal periventricular location could cause major disruption of these nuclei and/or fiber tracts. A recent study revealed that plasticity of the motor cortex on BOLD fMRI does not prevent post-operative motor deficits, while plasticity of motor fibers on preoperative DTI correlated with neurological outcomes [24]. These findings suggest that fiber tracts might be more important than eloquent cortex in predicting neurological function, considering their distinct plasticity pattern and recovery capability after injury. Therefore, a non-temporal periventricular location might be a better way to define the "eloquence" of the brain than "deep location" or "eloquent region", which do not include important supratentorial fiber tracts.
Although an unfavorable neurological deficit might also occur with a ruptured bAVM in the temporal lobe, it is usually associated with involvement of language cortex or a large hematoma causing temporal lobe herniation. However, the present study excluded cases without a radiologic or histologic diagnosis of bAVM; we therefore might not have included some ruptured bAVMs with catastrophic temporal herniation in which there was no opportunity to confirm the diagnosis radiologically or from a surgical specimen. For temporal bAVMs involving language cortex, previous studies suggested that language function reorganization occurs more commonly in the Wernicke region [25], and because we employed the mRS score to evaluate neurological deficit in the present study, language dysfunction might not have been measured in detail. Therefore, the present model focuses on predicting bAVM hemorrhage with disabling neurological deficit of the extremities and life-threatening conditions, and the results should be interpreted with caution for bAVMs located in the anterior-medial temporal lobe adjacent to the cistern and the Wernicke region.
In the present study, we found that a long draining vein was also a predictor of unfavorable hemorrhage. Our previous data suggested that restricted venous outflow might be associated with a higher hemorrhage risk of bAVM [26]. Recent quantitative studies of bAVM hemodynamics also found that ruptured bAVMs have significantly prolonged drainage times compared with unruptured bAVMs, indicating restricted venous drainage and increased intra-nidal pressure in ruptured bAVMs [27]. In fact, a long draining vein suggests both a deeper bAVM location (distal to the superficial dural sinus) and a greater likelihood of venous restriction along the long draining course. Therefore, a long draining vein might represent the combination of a deep location and high venous outflow resistance, and be associated with a higher risk of bAVM hemorrhage and injury from a deep hematoma.
We suggested a subgrouping within Type I bAVMs, Types Ia and Ib, considering the different Spetzler-Martin grades and distinct treatment recommendations for these two subtypes. 70.8% of Type Ib lesions were classified as Spetzler-Martin Grade I-II (low grade), for which microsurgical resection could be an effective treatment [28]. In contrast, only 24% of Type Ia lesions were Grade I-II (Table 3). With a higher risk of unfavorable hemorrhage and more low-grade lesions, children with Type Ib bAVMs might benefit from treatment, while cautious follow-up might be recommended for children with untreated Type Ia lesions. Our data suggest that this bAVM classification system is a good predictor of post-hemorrhage outcome in children with bAVM. However, the validity of this stratification system should be further evaluated in future studies.
This study was limited by its sample size and its single-institution population. The annual unfavorable hemorrhage rate was based on an assumption and should be interpreted with caution [29]. Neurological deficit from unfavorable hemorrhage was measured mainly with the mRS scale; further studies should analyze additional conditions (for example, aphasia and cognitive impairment) affecting children's quality of life.
Conclusions
Careful evaluation of nidus location and venous drainage is recommended to predict life-threatening and disabling hemorrhage in children with bAVM.
|
2018-07-04T02:51:48.727Z
|
2018-07-02T00:00:00.000
|
{
"year": 2018,
"sha1": "8ffe691b4e9ff3889fd8f09719a401205361103f",
"oa_license": "CCBY",
"oa_url": "https://cnjournal.biomedcentral.com/track/pdf/10.1186/s41016-018-0123-x",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8ffe691b4e9ff3889fd8f09719a401205361103f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
247351760
|
pes2o/s2orc
|
v3-fos-license
|
The Relevancy of Teaching Method in Teaching Reading Comprehension with Syllabus and Lesson Plan: A Case Study in Junior High Schools in Perbaungan
This study aims to describe the relevance of the teaching methods used in teaching Reading Comprehension to the Syllabus and Lesson Plan in Junior High Schools in Perbaungan. The kind of relevance used is the relevance of an assumption to a context. The subjects of this study are five English teachers from four Junior High Schools in Perbaungan. The instruments for collecting data were observations and treatment, and the method of this study is the qualitative research method. The data were collected after giving the treatment. After conducting the research, it was found that two teachers used the KWL Method in teaching Recount text and one teacher used the PORPE Method in teaching Narrative text; they applied their teaching methods in the teaching-learning process of Reading Comprehension in a way that was relevant to the Lesson Plan and Syllabus, and as a result the students' achievement in reading comprehension was high and above the KKM (Kriteria Ketuntasan Minimal / Minimal Completeness Criteria). One teacher applied his teaching method in a way that was not relevant to the Lesson Plan and Syllabus, and one teacher did not apply a teaching method at all in teaching Reading Comprehension.
I. Introduction
Language is a system of communication based upon words and the combination of words into sentences. It is a code used to express and communicate all the purposes needed by language users. In addition, Sapir (1921) says that "language is a purely human and non-instinctive method of communicating ideas, emotions and desires by means of a system of voluntarily produced symbols; commonly symbols are represented by words" (p. 7). Based on this definition, there is an emphasis on the components of language such as vocabulary, reading, listening and other aspects of language. Therefore, language is most useful when it is used to communicate with others to convey meaning.
To understand a language, one must be able to comprehend text. Language users have to discover the meaning of as many words as possible, at least within a sentence. By knowing the meaning of the words, one is able to get the idea of a reading text. Jack C. Richards and Theodore S. Rodgers (1986) suggest that an English teacher, as the classroom teacher and program coordinator, can choose methods and materials according to the needs of learners, the preferences of teachers, and the constraints of the school or educational setting, in order to find more efficient and more effective ways of teaching languages. Davis (1997: 2) suggests that the design and selection of teaching methods must take into account not only the nature of the subject matter but also how students learn. This means it is very important for teachers to apply planned methods of teaching to maintain effective teaching.
In the English curriculum, the government has regulated how to treat English as a foreign language, which must be mastered well by all students. Moreover, Gurning (2008) states in his journal that the development of curriculum should take into account the kind of literacy. Therefore, in the teaching and learning process, teachers are expected to be able to teach English well, and students are expected to learn and practice English fluently and become good at listening, speaking, reading and writing. An English teacher should therefore perform well in teaching English so that the students understand what he or she is teaching. The performance of a teacher plays an important role in the teaching and learning process because a teacher has many functions in education: he or she acts as an educator, a manager of the class, a guide or facilitator, a motivator and stimulator, and also a researcher or informant.
A teacher can be called a producer because he or she produces a lesson plan arranged based on the syllabus set by the government, and he or she must be able to manage the teaching-learning process, facilitate the students in learning, and guide them to use many supporting learning circumstances. He or she motivates the students and observes their willingness to learn. It can be said that the teacher should serve the students as sincerely as possible. This is connected with the teacher's motivation and the goals they want to achieve. Gardner presents a schematic representation of motivation in his socio-educational model, which has four sections: external influences, individual differences, language acquisition contexts, and outcomes. In this model, motivation to learn the second language includes three elements: first, the motivated individual expends effort to learn the language; second, the motivated individual wants to achieve a goal; third, the motivated individual enjoys the task of learning the language (2001). As a language teacher, he or she must make decisions all the time. Some decisions are relatively minor, such as what should be assigned on a particular day; other decisions have more profound implications: What should be the goal of language instruction? Which language teaching method will be most effective in reaching it? However, many teachers do not care about the relevance of the teaching method to the syllabus and lesson plan while they are teaching in the classroom. They think their duty is just to finish the contents of the curriculum set by the government; consequently, some students cannot grasp the aim of the teaching process. In fact, a good teacher should prepare the lesson plan well in order to support his or her performance in the teaching-learning process, especially when teaching reading. He or she should arrange the lesson plan well, because in reading the students need to master vocabulary and grasp the main idea of the text so that they get as much information as possible through reading.
A lesson plan is a teacher's guide for carrying out the learning process in class, in the laboratory, and wherever teaching and learning take place. It addresses a basic competence; because of that, all the points written down in the lesson plan directly cover the learning activity in order to achieve mastery of the basic competence. Every teacher should make a plan and arrange the teaching and learning before he or she teaches the students. Planning is anticipatory decision making: it involves deciding what to do, how to do it, and when to do it. By making the lesson plan, the teacher gains several advantages in the teaching-learning process, such as: 1. The aim of the lesson is identified and maintained. 2. The lesson material is selected and arranged in a logical manner.
3. An instructional method and technique are selected appropriate to the learning task and the needs of the students. 4. The students are involved extensively in the teaching-learning process. 5. The time of the teaching-learning process can be managed properly.
Kurikulum Tingkat Satuan Pendidikan (KTSP), or the School-Based Curriculum, gives autonomy to schools to develop and manage their syllabuses themselves, based on the standard suggested in the curriculum and under the supervision of the local government. Before the teacher arranges the lesson plan, he or she should arrange the school syllabus based on the Competence Standard (Standar Kompetensi), the Basic Competence (Kompetensi Dasar), the material, the procedure of the teaching-learning process, indicators, assessment, time allocation and the source.
The syllabus is elaborated by the teacher himself or herself, or by the teachers of the same subject in that school, or by teachers from different schools who share the same vision and subject matter. The teacher is the key agent in turning the syllabus into a lesson plan as an authentic program. A good teacher should arrange and prepare the lesson plan well, so that the process of teaching and learning is relevant to the teacher's performance. Relevance serves as an explanation of communication: Roberts (1991: 455) defines three kinds of relevance: 1. the relevance of an assumption to a context; 2. the relevance of an assumption to an individual; and 3. the relevance of a phenomenon to an individual. However, many teachers teach without preparing the tools of the teaching-learning process, and this also occurs in junior high schools in Perbaungan. The researcher, as a supervisor of junior high school English teachers in Serdang Bedagai, has the responsibility to analyze the problems of English teachers, especially in Perbaungan, and then to try to overcome them.
The results of this research are expected to be significant both theoretically and practically for the field of education. Theoretically, the results can make a new contribution to teachers' teaching methods and to the development of conceptual knowledge about the significance of the lesson plan in the process of teaching reading comprehension. Practically, the researcher hopes that the research can serve English teachers as a pattern for developing their lesson plans and as self-reflection on the importance of the syllabus and lesson plan in teaching reading comprehension, so that teaching is relevant to the performance of English teachers. Moreover, school principals can more easily develop their teachers' knowledge of arranging the syllabus and lesson plan set by the government, and education supervisors can more easily observe teachers' activities, help teachers reconstruct their performance, and improve the quality of lesson plans, leading to gradual improvement in the quality of the teaching-learning process.
General Concept of Reading
There are various definitions of reading; educators, psychologists, linguists and sociologists have filled volumes with their definitions. The definition of reading used here is adapted from Anthony, Pearson, and Raphael (1993), as cited by Farrell (2009: 20), in which reading is the process of constructing meaning through the dynamic interaction among the reader's existing knowledge, the information suggested by the written language, and the context of the reading situation.
The Nature of Reading Comprehension
Anderson (2003: 67) divides the process of reading into three models: bottom-up, top-down, and interactive models, the focus of this study. First, bottom-up is a process where readers begin with smaller elements and build up comprehension of what is being read: a reading activity that recognizes and analyzes words and then builds up grammatical structures, sentences and longer texts. Second, top-down is a process of reading where the reader begins with the larger elements and works down towards smaller elements to build comprehension of what is being read; the reader begins with background knowledge in understanding a text, makes predictions, and searches the text to confirm or reject the predictions already made. Third, the interactive model of reading is the most comprehensive description of the reading process: a combination of top-down and bottom-up processing.
The Process of Reading Comprehension
Making sense of a text does not occur at a single point; the process takes place before, during and after reading. The pre-reading process covers aspects such as previewing, predicting, setting the purpose and choosing an appropriate teaching method. Previewing a text can be done by looking at the title, pictures, diagrams or graphics in order to activate memories, experiences or schemata. The reader then predicts the content based on the preview in order to help understanding of the text. After predicting, the reader establishes the purpose of reading and, finally, the appropriate teaching method is chosen in relation to the purpose of the text. As a result, reading comprehension is a complex process, and it has been conceived of in four conceptions: 1) transmission, 2) translation, 3) interaction, and 4) transaction (McNeil, 1992).
The Teaching Method
A teaching method comprises the principles and procedures used for instruction, implemented by teachers to achieve the desired learning in students. In the teaching-learning process, teaching methods are needed to create a conducive atmosphere, which makes students feel comfortable in learning. A method of teaching is a system for teaching a language that is based on either a particular theory of language or a particular theory of learning, or usually on both (Thornbury, 2005: 131). In other words, a teaching method is a way of teaching based on a theory of language and a theory of learning; what is believed about language learning is, linguistically speaking, called the approach of the method. It is called a method if it has a proper procedure to apply; the procedures are sometimes called techniques, or, in other words, the actualization of theory into practice.
The Method in Teaching Reading Comprehension
A reading method is a set of teaching and learning materials and/or activities often given a label, such as the phonic method, literature-based method, or language experience method (International Reading Association, 1999). The same statement says that there is no single method, or single combination of methods, that can successfully teach all students to read. Therefore a teacher or lecturer must have a strong knowledge of multiple methods for teaching reading and a strong knowledge of the students in their care, so they can create the appropriate balance of methods needed for the students they teach (International Reading Association, 1999).
Although there is no single method or combination of methods that can successfully teach all students to read, it is still essential for reading-instruction experts to conduct studies and invent better methods to revise the old ones. As a result, several methods have been developed and implemented in reading classrooms.
1) Choral Reading (CR) Method
Choral Reading means reading out loud with your child, the same text at the same time (Wood, 2006: 216). You read together in unison, and your child gets to hear your voice, guiding and supporting.
To prepare, each student reads the whole selection that is going to be read chorally; one can read it aloud with the others following along silently. Then the readers read the piece together. A few repetitions are necessary to fix the piece in the students' minds.
2) Paired Reading (PR) Method
Paired reading is a technique that parents can use to help their own child with reading practice (Topping, 1995). The method involves the parent, who is a skilled reader, and the child, who is learning, reading a book together. According to Koskinen and Blum (1986), paired reading is repeated reading: students work with a partner to read short self-selected passages of text with the goal of improving fluent reading. Teachers give students opportunities to watch others demonstrating the role of the reader, then the role of the listener, and finally to practice the procedure while the teacher watches. After students have had the opportunity to watch and practice these activities, they are usually successful on their own.
3) PORPE Method
The PORPE method was suggested by Simpson and Stahl (1989). PORPE is a method for studying textbook materials in which the students create and answer essay questions. There are five steps in this learning strategy. 1) Predict: After reading the chapter, students predict possible essay questions from the information contained in the text. In formulating these questions, the students should avoid questions that begin with "what, who, or when" and do not involve analysis. 2) Organize: After predicting and deriving the questions from the information in the text, the teacher asks students to organize or summarize it in their own words.
3) Rehearse: The teacher asks students to recite the information aloud after they have summarized it; the students have to share the information they got from the text. 4) Practice: The teacher asks students to answer the questions they predicted earlier. 5) Evaluate: Teachers evaluate the students' work by asking the following questions: Do I have enough clear examples? Is my answer complete, truthful, and suitable?
4) KWL Method
An instructional method known as K-W-L, developed by Ogle (2006), has been implemented in classrooms. Students' prior knowledge is activated by asking them what they already know; then students set goals focusing on what they want to learn; and after reading, students discuss what they have learned. KWL is intended as an exercise for a study group or class that can direct the students in reading and understanding a text; students can also adapt it to working alone. The method consists of three stages that correspond to a worksheet of three columns with the three letters. 1) K stands for Know: this first stage is occasionally surprising: students first think about what they already understand about the topic and make a list of it. This activity constructs knowledge of the new material and builds a scaffold to support it. 2) W stands for Will or Want: the second stage is to list a series of questions about what the students want to know more of the subject, based upon what they listed in K. 3) L stands for Learned: the final stage, either while reading or after finishing, is to answer the students' questions and to list what new information they have learned. A sample worksheet is sketched below.
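For illustration only (a hypothetical chart, not taken from the study data), a filled-in KWL worksheet for a recount text about a school holiday might look like this:
K (Know): Recount texts retell past events; they use the past tense and time connectives.
W (Want to know): Where did the writer go? What happened first, next, and last?
L (Learned): The writer visited Lake Toba with her family; the events are ordered with "first", "then", and "finally".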
5) SQ4R Method
SQ4R is a systematic method of reminding students how to learn from text with maximum effectiveness (Thomas & Robinson, 1972). The SQ4R Method is very practical for helping students keep their studying organized and efficient. According to Thomas & Robinson (1972: 10), the steps of SQ4R are Survey, Question, Read, Recite, Record, and Review. 1) Survey: Students should skim and scan the chapter. The purpose of surveying the chapter is to get the general idea of its content, structure, organization, and plan. Surveying the chapter gives the "big picture", a framework of the main ideas, which will help to hold the details together later (Richardson & Morgan, 1997: 124). 2) Question: Having students develop questions gives them a purpose for reading. Reading for specific purposes positively influences comprehension, and setting a purpose also aids the student in recalling information. Developing questions prior to reading results in spontaneous attempts to answer the questions based on information already known, increased concentration and attention while reading to find an answer, and increased comprehension as the mind attempts to find an answer to the question.
3) Read: Reading promotes an active search for answers to the specific questions that students have developed. It forces the student to concentrate for better comprehension and aids in lengthening attention span (Richardson & Morgan, 1997). Students should read each section of the text to answer the questions developed in the step above. If a word's meaning is not clear from its use in the selection, reread; if it is still unclear, underline the word or jot it down and look it up after finishing reading. 4) Recite: After finishing a page, students recite the main ideas aloud, in their own words, to themselves or to others. Reciting checks comprehension and makes sure the students have the correct information. By reciting what they have read, students can see how much information they absorbed, which areas they did not understand and need to review, and the answers to the questions they generated. If the students cannot answer the questions, they go back to the material and reread. 5) Record: Marking the textbook increases understanding of the material for the present and for future reference. The process of selecting and marking requires the students to find the main ideas. The students might use the following: a double underline for main ideas and a single underline for supporting points; a bracket to enclose several consecutive important lines, rather than underlining all of them; or a box or circle around key terms. 6) Review: Teachers should include regular review periods as an effective strategy for retaining information. Students should study their outlines and notes and be able to retell what was read in their own words. Students should try to see relationships within the content; if they cannot, the teacher may need to model how to look for relationships. Students should check their memory by trying to recall main points and sub-points (Richardson & Morgan, 1997).
The Principles in Teaching Reading Comprehension
There are some principles to be observed in the teaching of reading comprehension: 1) Reading is not a passive skill. 2) Students need to be engaged in reading comprehension. 3) Students should be encouraged to respond to the content of the reading text.
Syllabus
A syllabus is a document describing what must be learnt. This definition implies that it contains the materials prepared for the learner in an organized form. Dubin and Olshtain (2002: 3) describe a syllabus as a circumscribed document, usually one which has been prepared for a particular group of learners. Based on these definitions, we can see that a curriculum concerns a general rationale for formulating policy decisions, combining educational and cultural goals with language goals, whereas a syllabus is a document that contains the specific elements intended to actualize the concept of the curriculum. The specific elements refer to all the things that are to be taught in the course.
A syllabus can be designed by a teacher if he or she knows the learners' characteristics and the school's condition and has understood the steps of syllabus development. If the teacher is not able to do that, the school can form a group of teachers to develop the syllabus to be used in their school; and if a school cannot design a syllabus by itself, several schools can collaborate to develop one. The government has suggested the design of the syllabus, as Government Regulation No. 19/2005 states: the Competence Standard is the qualification of the students' ability in mastering the knowledge, attitudes and skills of the lesson, while the lesson material is the material that must be achieved by the students to master the basic competence. Indicators form the pattern of assessment, arranged based on the characteristics of the school, its potential and the students; in other words, indicators are the teacher's tool for checking whether the students are able to master the materials being taught, and the assessment should be arranged based on the indicators.
Lesson Plan
A lesson plan is a teacher's detailed description of the course of instruction for an individual lesson. A daily lesson plan is developed by a teacher to guide class instruction. In Depdiknas (2006), a lesson plan is defined as a teacher's guide for carrying out the teaching-learning process in the classroom, laboratory and/or hall for every basic competence. To be a good teacher, he or she should construct the lesson plan through the following steps, described as follows:
Write the Identity
It includes many points such as : 1) School 2) Subject 3) Class / Semester.
4) The Standard of Competence
There are four competences based on the Kurikulum Tingkat Satuan Pendidikan: (1) Listening: the students should comprehend the meaning of simple interpersonal and transactional oral discourse, formal and informal, in recount, narrative, procedure, description and report form in daily life contexts.
(2) Speaking: state the meaning of simple interpersonal and transactional discourse orally, formally and informally, in recount, narrative, procedure, description and report form in daily life contexts. (3) Reading: comprehend the meaning of simple interpersonal and transactional discourse / passages in recount, narrative, procedure, description and report form in daily life contexts. (4) Writing: state the meaning of simple interpersonal and transactional discourse in written form, formally and informally, in recount, narrative, procedure, description and report form in daily life contexts. 5) Basic competence: Basic competence is the essential knowledge and skills acquired by students in relation to the learning targets and objectives set out in the curriculum for each key stage.
6) Standard Competence
Standard competence is known as the general objective, and the specific objective refers to the basic competence, that is, the minimal competence learners must possess in order to achieve the standard of competence provided. Standard of Competence: to communicate orally or in written form by using language models fluently and accurately in interpersonal and/or monolog texts in the form of narrative, descriptive, report, procedure and recount.
List of Learning Outcomes
Learning outcomes contain the mastery of operational competence to be achieved in the lesson plan. Learning outcomes are formulated as operational / behavioral statements derived from the basic competence. If the basic competence is already operational, its formulation becomes the formulation of the basic learning outcomes.
List of the Material
Learning material is the material used to attain the learning outcomes. It can be developed from the main material prepared in the syllabus. It is essential that teachers completely master what they are going to teach. It is equally important that they be up to date and acquainted with current techniques and practices.
List of the Method of teaching
The method can be formulated as an actual method, but it can also be formulated as a model or instructional approach, depending on the characteristics of the chosen method and approach.
List of the Instruction Steps
To achieve a basic competence, a teacher should write the activity steps for each meeting. Basically, this procedure can be divided into three steps: 1) Opening activity 2) Main activity 3) Closing activity
Learning Source
In the past few years there has been a growing tendency among educationalists to shift from traditional teacher-centered teaching to student-centered learning. It is believed that efficient instruction is more allied to the active participation of the learners than to their passive participation. This shift in teaching-learning strategy calls for the use of a variety of learning resources appropriate to the new strategy. Resource materials collected by the teacher should be matched systematically to the lesson plan. The resources that assist the teacher can be classified into planning documents and classroom materials.
Assessment
Assessment includes the technique, the instrument form, and the use of the instrument to collect data. Wragg (2001: 7-56) classifies assessment into two types: formal assessment and informal assessment. 1) Formal assessment is a more structured examination that involves a standardized test, an examination paper, or an assessment schedule; types of formal assessment are essay tests and multiple choice. 2) Informal assessment can take place in a variety of settings and with very different purposes.
Research Methodology
This section presents the method and the procedure used to answer the problem of this research. The researcher used qualitative descriptive analysis combined with an experimental treatment. Methodology is a specific set of procedures: it explains the design, the population, and the techniques for collecting and analyzing the data.
Data Analysis
The data were collected after giving the treatment. After the data were collected, they were analyzed by identifying the real conditions in the process of teaching Reading Comprehension. The researcher used qualitative analysis to analyze the data. The subjects were five English teachers from four junior high schools in Perbaungan. After analyzing the data, the writer found that two teachers used the KWL Method, one teacher used the PORPE Method, one teacher used the Choral Reading Method, and one teacher did not use any teaching method. The teachers applied the teaching methods in their teaching-learning process by following the steps of the learning activities of each method.
From the data analysis, the writer also found that three teachers applied their teaching methods in the reading instruction process in a way that was relevant to the lesson plan and syllabus; one teacher applied a teaching method in teaching Reading Comprehension in a way that was not relevant to the lesson plan but was relevant to the syllabus; and one teacher applied a teaching method in a way that was relevant to neither the lesson plan nor the syllabus.
The writer also found that the teachers used the KWL Method in teaching Recount text and the PORPE Method in teaching Narrative text. Two teachers applied the KWL Method in teaching Recount text, and their use of the method in the reading instruction process was relevant to the lesson plan and syllabus; as a result, the mean of the students' achievement in reading comprehension was high and above the KKM (Kriteria Ketuntasan Minimal / Minimal Completeness Criteria). One teacher applied the PORPE Method in teaching Narrative text, and his use of the method was relevant to the lesson plan and syllabus; as a result, the students' achievement in reading comprehension was high and above the KKM.
Conclusion
Two teachers used the KWL Method, one teacher used the PORPE Method, one teacher used the Choral Reading Method, and one teacher did not use a teaching method in teaching Reading Comprehension. The teachers applied the teaching methods in their teaching-learning process by following the steps of the learning activities of each method.
Two teachers used the KWL Method in teaching Recount text and one teacher used the PORPE Method in teaching Narrative text; they applied their teaching methods in the teaching-learning process of Reading Comprehension in a way that was relevant to the Lesson Plan and Syllabus, and as a result the students' achievement in reading comprehension was high and above the KKM (Kriteria Ketuntasan Minimal / Minimal Completeness Criteria). One teacher applied his teaching method in a way that was not relevant to the Lesson Plan and Syllabus, and one teacher did not apply a teaching method at all in teaching Reading Comprehension.
|
2022-03-09T19:04:01.206Z
|
2021-07-24T00:00:00.000
|
{
"year": 2021,
"sha1": "d178781456235c8f05f2022ce488481d97403016",
"oa_license": "CCBYSA",
"oa_url": "https://journal.eltaorganization.org/index.php/joal/article/download/34/55",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "1bab14d0b22940eaa91f40419f06d730d4dc1dcb",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
}
|
16792145
|
pes2o/s2orc
|
v3-fos-license
|
NLO Calculations for Charm Production in DIS
I present a short overview of the NLO QCD calculations available for deep-inelastic production of heavy quarks.
Introduction
Charm quarks produced in deep-inelastic scattering (DIS) have been identified in sizable numbers now by the H1 [1] and ZEUS [2] collaborations at HERA, and considerably more charm (and bottom) data are anticipated. At the theoretical level the reaction has already been studied extensively. In the framework where the heavy quark is not treated as a parton, leading order (LO) [3,4] and next-to-leading order (NLO) [5,6] calculations of the inclusive structure functions exist. Moreover, LO (AROMA, RAPGAP) [7,8] and NLO (HVQDIS) [9,10,11] Monte-Carlo programs, allowing a much larger class of observables to be compared with data, have been constructed in recent years. Overall, the NLO QCD description agrees quite well with the HERA data. Here I shall give a very brief overview of these NLO calculations.
Charm quarks are produced in DIS via the reaction

e(l) + P(p) → e(l′) + Q(p₁) + X,   (1)

where P(p) is a proton with momentum p, Q is a heavy quark with momentum p₁ (p₁² = m²) and X is any allowed hadronic state containing the heavy quark-antiquark pair. Its differential cross section may be expressed in general as

d^(2+n)σ / (dx dQ² dV₁ ⋯ dVₙ) = (2πα² / x Q⁴) [ (1 + (1 − y)²) dⁿF₂(x, Q², m²)/(dV₁ ⋯ dVₙ) − y² dⁿF_L(x, Q², m²)/(dV₁ ⋯ dVₙ) ],   (2)

where x = Q²/(2p·q) and y = p·q/p·l are the usual DIS variables for the virtual photon momentum q = l − l′. The V_i stand for kinematic variables related to the heavy quarks. Examples are the transverse momentum of the heavy quark, the rapidity difference between the heavy quarks, etc.
Inclusive and single-charm inclusive production
The least difficult cross section to measure is the heavy quark inclusive cross section, expressed via (2) in the inclusive structure functions F₂ and F_L. These were calculated to NLO in [5]. The results are parametrized as (schematically)

F_k(x, Q², m²) = (Q² α_s(µ) / 4π² m²) Σ_{j=g,q,q̄} ∫_x^{z_max} (dz/z) f_j(x/z, µ) [ e_H² c_{k,j}(η, ξ) + e_j² d_{k,j}(η, ξ) ],   (4)

where k = 2, L and the upper boundary on the integration is given by z_max = Q²/(Q² + 4m²). The scale µ is the mass factorization and renormalization scale. The MS-bar coefficient functions c^(l)_{k,j}(η, ξ) (j = g, q, q̄; l = 0, 1) and d^(l)_{k,i}(η, ξ) (i = q, q̄; l = 0, 1) depend on the scaling variables η = (s − 4m²)/4m² and ξ = Q²/m², with s the square of the c.m. energy of the virtual photon-parton subprocess. This implies that in (4) z = Q²/(Q² + s).
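The integration boundary quoted in (4) follows directly from these definitions, as the short derivation below makes explicit.

```latex
% z_max from the definitions in the text:
% \eta = (s - 4m^2)/(4m^2) \ge 0 \implies s \ge 4m^2,
% and z = Q^2/(Q^2 + s) decreases as s grows, hence
z \;\le\; \frac{Q^2}{Q^2 + 4m^2} \;\equiv\; z_{\max}.
```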
To obtain numerical results for the inclusive cross section, it is better to use, instead of the original rather long expressions [5], the much faster parametrized form [6]. The lowest-order term contains only the gluon density. Light-quark densities enter only at the next order, contributing only about 5%. This is the reason $F_2(x, Q^2, m^2)$ is used in global analyses to constrain the gluon density. Besides the gluon density, the main source of theoretical uncertainty in $F_2(x, Q^2, m^2)$ is the value of the charm-quark (pole) mass, rather than the scale $\mu$.
The calculation that led to (4) also yielded the single heavy-quark differential cross section, with $V_1 = p_T^Q$ and $V_2 = y^Q$ [12]. These distributions are best generated using the HVQDIS program, described in the next section.
NNLO estimates based on soft-gluon resummation are given in Ref. [13] for the inclusive and single heavy-quark inclusive cross sections.
Fully differential charm production
An NLO calculation also exists for the fully differential cross section in (2) [14]. Maintaining full differentiality required a complete recalculation of the matrix elements, carefully eliminating intermediate divergences via the so-called subtraction method. The results are encoded in the program HVQDIS [10]. The program can compute, to NLO, experimentally visible cross sections, which are in principle better for comparison with theory than fully inclusive ones. It returns parton kinematic configurations and their corresponding weights, accurate to $O(\alpha\,\alpha_s^2)$. The user is free to histogram any set of infrared-safe observables and apply cuts, all in a single histogramming subroutine.
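As an illustration of this event-weight interface, here is a schematic Python analogue of such a histogramming routine (the actual HVQDIS user routine is written in Fortran; the event-record fields used here are hypothetical):

```python
import numpy as np

def user_histogram(events, pt_cut=2.0, bins=np.linspace(0.0, 20.0, 41)):
    """Accumulate a weighted pT histogram from parton-level configurations.

    `events` yields dicts carrying the charm-quark momentum components and
    an O(alpha * alpha_s^2) weight, mimicking what an NLO subtraction-method
    program hands to the user's histogramming subroutine.
    """
    hist = np.zeros(len(bins) - 1)
    for ev in events:
        pt = np.hypot(ev["px"], ev["py"])
        if pt < pt_cut:                # user-defined cut on an infrared-safe observable
            continue
        idx = np.searchsorted(bins, pt) - 1
        if 0 <= idx < len(hist):
            hist[idx] += ev["weight"]  # NLO weights may be negative event by event
    return bins, hist
```

Only the binned sums are physical: individual weighted configurations from a subtraction-method calculation are not positive-definite, so cuts must act on infrared-safe quantities.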
Additionally, one may study heavy hadrons using the Peterson et al. fragmentation model. Detailed physics results from this program are given in [11].
HVQDIS has been used extensively in experimental analyses. As is shown elsewhere [16] in these proceedings, it reproduces the data very well indeed, except for $D^*$'s at low $p_T$ and large pseudorapidity, where there are more events than HVQDIS would predict. This is possibly due to remnant beam-drag effects distorting the pseudorapidity spectrum to larger values. For the case of charm photoproduction this was investigated in [17].
A more extensive overview of the NLO calculations and the phenomenology of DIS charm production can be found in [18].
Goods and Services and Equivalent Economic Benefits of Sand Dunes of India
Sand dunes are simply an extension of the beach, acting as a reservoir of sand during storms, when waves erode the dune and carry the sand into the sea. They provide habitat for shellfish, birds, rodents, and ungulates. They have been used for coastal defense, water catchment, agriculture, mining, and housing. Many of the goods and services produced by sand dunes are not easily quantified and accounted for, since they are not traded in the formal market. Hence, many of these environmental benefits have often been neglected or even ignored by the economy, industry, and coastal communities. In this study, the benefit of sand dunes has been estimated at Rs. 8,220,002/ha/yr = US$ 176,103.66 (average). Sand dunes are distributed in 1,231 patches with a total area of 32,445 ha. Using the average value of sand dune benefits, the equivalent economic benefit from the sand dunes of India is Rs. 26,670 crore/yr = US$ 5.71 billion. Among the coastal States and UTs, Andhra Pradesh has a large area (11,594 ha) of sand dunes, accounting for Rs. 9,530 crore/yr = US$ 2.04 billion and occupying 36% of the total sand dune area of India.
INTRODUCTION
A coastal sand dune is a transitional zone between sea and land that forms a unique ecosystem [1]. They are part of the sand-sharing system composed of a highly mobile beach and a more stable dune [2]. They are aeolian (deposited) landforms established by the supply of loose sediment transported by the ambient winds. The coastal sand dunes have been developed in places where there is an adequate supply of sand in the intertidal zone and where prevailing winds are strong enough for sand movement [3].
Sand dunes are simply an extension of the beach, acting as a reservoir of sand during storms, when waves erode the dune and carry the sand into the sea. Without dunes, our beautiful sandy beaches would erode away, and sand would continue to blow inland, drifting over whatever lies in its path. Dune vegetation is extremely efficient at capturing and holding sand and preventing it from being lost from the beach [4]. Sand dunes are usually classified as incipient dunes, fore dunes, and hind dunes. Incipient dunes are located seaward, are immature, and are covered with grasses. On an accreting coastline, an incipient dune develops into a fore dune. A fore dune lies between the incipient dune and the hind dune and is covered with grasses and shrubs; fore dunes supply sand to meet erosion demand in storm conditions. Hind dunes are located toward inland developed areas and are covered with vegetation such as trees and shrubs [5]. During storm conditions, incipient and fore dunes may be severely eroded by waves; during the intervals between storms, dunes are rebuilt by wave and wind action. Dune vegetation is essential to prevent sand drift and associated problems [5].
Sand dune vegetation includes many specialized fauna and flora species adapted to live in such harsh conditions in salty, marshy, and swampy areas. Dunes provide habitat for fish, shellfish, birds, rodents, and ungulates [6,7].
Around 20% of the world's coastal landscapes are covered by sand dunes [8]. In Europe, the protection and restoration of dune wildlife and habitat have become a priority [9]. In many regions of the world, dunes have been used for agricultural purposes [7].
The coastal sand dunes are not as productive exporters of nutrients as many other coastal ecosystems. Nevertheless, they serve as sediment reserves, stabilize coastlines, provide areas for recreation, and provide breeding and feeding sites for seabirds, turtles, and other coastal species. They have been used for coastal defense, water catchment, agriculture, mining, and housing [10]. They store rich, diversified genetic resources along with high ecological values [11]. Many sand dune organisms are globally or provincially rare, and many are classified as species at risk [4,2]. Sand dunes have played a vital role in the economic and social life of coastal people, supporting unique values such as medicine, food, fodder, and income. Although sand dunes provide many ecosystem services for human well-being, estimates of their value are scarce [12,13].
STUDY AREA AND METHODOLOGY
Coastal sand dunes are distributed along the coasts of all coastal states of India. Along the mainland coast, sand dunes occur in 1,231 patches with a total area of 32,445 ha; these patches, averaging 26 ha each, constitute the study area. Details of sand dune distribution in the various coastal States, the districts in which they are located, the number of patches, and the area (ha) are given in Table 1, and the patches along the mainland coast of India are shown in Fig. 1. The Millennium Ecosystem Assessment [14] and The Economics of Ecosystems and Biodiversity [15] have developed a framework to estimate environmental goods and services. The framework includes (i) direct use value, (ii) indirect use value, (iii) option value, and (iv) non-use value; the first three are generally referred to together as 'use value'. Direct use values refer to ecosystem goods and services that are used directly by human beings. They include the value of consumptive uses, such as harvesting of food products, timber for fuel or construction, and medicinal products, and hunting of animals for consumption, as well as the value of non-consumptive uses, such as the enjoyment of recreational and cultural activities that do not require harvesting of products. Direct use values are most often enjoyed by people visiting or residing in the ecosystem itself.
Indirect use values are derived from ecosystem services that provide benefits outside the ecosystem. Examples include natural water filtration which often benefits people far downstream, the storm protection function of mangrove forests which benefits coastal properties and infrastructure, and carbon sequestration which benefits the entire global community by abating climate change.
Option values are derived from preserving the option to use in the future ecosystem goods and services that may not be used at present, either by oneself (option value) or by others/heirs (bequest value). Provisioning, regulating, supporting and cultural services may all form part of the option value to the extent that they are not used now but may be used in the future. Non-use values refer to the enjoyment people may experience simply by knowing that a resource exists even if they never expect to use that resource directly themselves. In this study, the Benefit Transfer method has been applied to estimate the goods and services of sand dunes.
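The arithmetic behind the benefit transfer is simple: a per-hectare unit value taken from the valuation literature is multiplied by the dune area of interest. The sketch below reproduces the paper's headline numbers; the unit value and areas are from the text, and the rupee-to-dollar rate is the one implied by the paper's own conversions:

```python
# Benefit transfer: unit value (Rs/ha/yr) x area (ha).
UNIT_VALUE_RS_PER_HA_YR = 8_220_002   # average sand dune benefit from the text
RS_PER_CRORE = 10_000_000
RS_PER_USD = 46.7                     # implied by Rs. 8,220,002 = US$ 176,103.66

areas_ha = {
    "India (total)": 32_445,
    "Andhra Pradesh": 11_594,
}

for region, area in areas_ha.items():
    value_rs = UNIT_VALUE_RS_PER_HA_YR * area
    print(f"{region}: Rs {value_rs / RS_PER_CRORE:,.0f} crore/yr "
          f"= US$ {value_rs / RS_PER_USD / 1e9:.2f} billion/yr")

# India (total): Rs 26,670 crore/yr = US$ 5.71 billion/yr
# Andhra Pradesh: Rs 9,530 crore/yr = US$ 2.04 billion/yr
```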
META-ANALYSIS (ACCOUNTING) OF SAND DUNE ECOSYSTEM GOODS AND SERVICES
Sand dunes provide a wide range of provisioning, regulatory, cultural, and supporting services [13]. Many sand dune goods and services have not yet been economically estimated [16]. Important provisioning services from sand dunes are the supply of minerals and the supply of groundwater. Sand dunes provide regulatory services such as protection of infrastructure from natural hazards such as erosion, and flood control through regulation of sand supply to the system by dune stabilization. In addition, sand dunes act as a barrier between inland and sea and regulate water quality and pollution in the region. Sand dunes provide cultural services such as recreation space, aesthetics, psychological and therapeutic opportunities, and educational resources. Sand dunes provide supporting services by offering habitats for many fauna and flora, and nesting and roosting sites for many avifauna and turtles [17]. In this meta-analysis, the economic values of many ecosystem services of sand dunes are discussed, and the values are applied to India's coastal sand dune patches.
Provisioning Services of Sand Dunes
Provisioning services are ecosystem services that describe the material or energy outputs from ecosystems. They include food, water, and other resources [15]. Many minerals are extracted from sand dunes. Dune sand is washed and used in the construction industry [13]; however, sand mining from dunes has been restricted in many countries. Mining for minerals and heavy metals from the sand dunes of South Africa is extensive [18], but there is no economic estimate of this benefit. Sand dunes are also suitable sites for asparagus cultivation [19]. At the community level, marram grass has been used for mats, basket weaving, and thatching [20].
Sand dunes are an important source of coastal groundwater. The permeable sand dune system tends to support a freshwater lens that acts as a barrier against saltwater intrusion inland. The freshwater lens is recharged both by direct precipitation and by river discharges in the nearshore region [21]. In the Meijendel dunes of the Netherlands, dune aquifers have been used as a source of drinking water for centuries [22]. The aquifer supplies enough water for about 1.5 million people in the surrounding cities; hence, the Meijendel dune has been managed as a nature reserve to supply drinking water. The revenue from the reserve has been estimated at $99.2 million/year (1991 estimate), whereas the cost of managing the reserve was estimated at $3.8 million/year [16].
Protection service
Sand dunes naturally protect the coast from storm surges by virtue of their vegetated sandy structure and their height [23,4]. As resilient natural barriers to hazards such as wind and waves, sand dunes are the least expensive and most efficient natural structures for protecting coastal infrastructure against storm surge, flood, and erosion [24,25,26]. Stabilized sand dunes protect recreation and tourism beaches, oceanfront properties, near-shore developed lands, and wildlife habitats. The town of Misawa is a good example of coastal sand dunes mitigating tsunami impacts during the Great Eastern Japan earthquake: according to witnesses among the Misawa villagers, the tsunami could not reach the top of the dune, leaving the village behind the dune undamaged [27].
The disturbance regulation function of sand dunes in Mexico was estimated at $67,874/ha/yr [28]. In the sand dunes of the comarques of Catalonia, Spain, the disturbance regulation function was estimated at US$67,400/ha/yr [29]. In South Carolina, the coastal protection function of sand dunes was estimated using a willingness-to-pay (home price) method at $254.00 per 30 cm of dune; in the same state, a contingent valuation of an erosion control program put the sand dune value at $4.45/household [30]. In Tramore, Ireland, the protection function of sand dunes was estimated at US$90,000/ha over a 50-year time scale [31].
Carbon sequestration
The coastal sand dunes are not as productive exporters of nutrients as many other coastal ecosystems. However, in sand dune grasslands and dune wetlands, chronosequence approaches were used to estimate carbon sequestration, giving a carbon density of 212 tC/ha [32]. In another study, a carbon sequestration rate of 1.25-3.12 tonnes of carbon dioxide (tCO2)/ha/yr was estimated for sand dunes [33]. Accordingly, the CO2 sequestration function of sand dunes was valued at between £18.36 and £45.9/ha/yr [34] (an implied carbon price of about £14.7/tCO2).
Cultural Services of Sand Dunes
Coastal dunes also represent an important cultural value. In New Zealand, the earliest human settlements occurred on coastal dunes [35].
Many sand dune areas have archaeological evidence of Maori cultural heritage. Similarly, in Peru, early hydraulic civilizations migrated to coastal dune fields [36]. The scenic attraction of sand dunes has drawn many painters in the Netherlands, as is evident from the many Dutch dunes that have been portrayed in paintings and mentioned in patriotic Dutch folk songs. In addition, sand dunes are important places for education and knowledge development for the public, academics, and researchers. Since the 19th century, studies conducted in these environments have generated some of the first ecological theories that help to explain how sand dune ecosystems function [2].
Sand dunes provide tourism and recreational benefits by offering space for walking, beachcombing, and sunbathing, and through their scenic attraction [12]. The aesthetic and recreation value of the sand dunes and beaches of the comarques of Catalonia, Spain, was estimated at US$36,687/ha/yr [29]. The tourism and recreation function of the sand dunes of Mexico was estimated at $12,585/ha/yr [28]. The recreational benefit of Great Sand Dunes National Park and Preserve (GSD) in Colorado, USA, was estimated at $89/visitor/yr, or US$54 per visitor per recreational day (in 2002 US$), using the individual travel cost model [37]. Similarly, in Tramore, Ireland, the recreational benefits of sand dunes, including maintaining access to and use of the coastal waters, were estimated at US$290,000 [31].
Supporting Service of Sand Dunes
Coastal sand dunes serve as essential habitats for many plants, invertebrates, and vertebrates (NSW DLWC, 2001). In addition, sand dunes act as feeding and nesting sites for birds and sea turtles [38,39]. Many plants living in coastal sand dunes have long been used in the traditional health care sector. Some coastal sand dune legumes are edible, endowed with medicinal properties, and generate a variety of bioactive compounds of health and industrial importance. Mostly, they have been used to treat skin diseases, skin injuries, wounds, snake bites, and spider bites; they have also been used to treat muscle sprains and gynecological problems and to improve immunological response. It has been estimated that coastal sand dune legumes contribute a significant share, US$400-500 million, of India's part of the global herbal and traditional medicine market [39,13]. However, there is insufficient information about the economic share of sand dune plants in traditional medicine in India. To restore all of these services, a sand dune project was undertaken in Monterey, California, USA, to revegetate 17.8 ha of coastal dune at a cost of US$295,000; this represents US$18,800/ha and involved planting over 150,000 seedlings of 26 native dune plants [40]. The values compiled in this meta-analysis are summarized in Table 2.
BENEFIT TRANSFER AND NATIONAL GREEN ACCOUNT
Among the coastal States and UTs, Andhra Pradesh has the largest area (11,594 ha) of sand dunes, which contributes Rs. 95,306,236,262 (9,530 crore)/yr = US$ 2,041,745,837.88, or 36% of the total sand dune benefit in the National Green Account. The economic share of each State/UT's sand dunes in the National Green Account is given in Table 3.
These monetary values could be used in national, state, and regional policies to integrate the environment and economics. They would offer new opportunities for investment and employment and improve environmental quality and welfare for human living. The values can be used in cost-benefit analysis for decision making, and as indicative values for collecting compensation for violations by stakeholders in coastal areas.
ACKNOWLEDGEMENT
The authors wish to thank the officials working in various central and state government departments who shared primary and secondary information about the benefits and beneficiaries in the sand dune areas of the coast for this monetary assessment. The authors express their gratitude to the Ministry of Environment, Forests and Climate Change, New Delhi, and the World Bank, New Delhi, for their continuous support of capacity development activities for coastal management in India. In particular, the guidance and encouragement of Dr. Purvaja Ramachandran, Director (I/C), NCSCM; Dr. A. Senthil Vel, Advisor, MoEFCC; Shri. Tapas Paul and Shri. Ramakrishna, the World Bank, New Delhi; and Shri. Rajagopalan, IAS, facilitated this accounting exercise in the coastal ecosystems of India. The authors also thank colleagues in various departments of NCSCM for sharing data and providing inputs for the valuation exercise.
The opinions expressed in this publication are those of the authors concerned and do not necessarily represent the views of the organisations to which they are attached.
Defects in interferon pathways as potential biomarkers of sensitivity to oncolytic viruses
Summary Increased sensitivity of cancer cells to viruses is a prerequisite for the success of oncolytic virotherapy. One of the major causes of such a phenotype is the disruption of innate antiviral defenses associated with dysfunction of type 1 interferons (IFNs) that permits unlimited replication of viruses in cancer cells. Defects in IFN pathways help cancer progression by providing additional advantages to tumor cells. However, while these defects promote the survival and accelerated proliferation of malignant cells, they facilitate viral replication and thus enhance the efficiency of viral oncolysis. This review describes a broad spectrum of defects in genes that participate in IFN induction and IFN response pathways. Expression levels and/or functional activities of these genes are frequently low or absent in cancer cells, making them sensitive to virus infection. Therefore, certain specific defects in IFN signaling cascades might serve as potential biomarkers to help in identifying individual cancer patients who are likely to benefit from oncolytic virotherapy.
Normal cells have a variety of mechanisms that protect them from pathogens. Key players in cellular antiviral defenses are type 1 interferons (IFNs). In response to initial virus intrusion, IFNs signal surrounding cells and stimulate them to enter an antiviral state that includes restricted proliferation, downregulated metabolism, and other specific changes that restrict viral replication. 1,2 Interferons also play an important role in immune surveillance, which reduces the probability of malignant transformation. 3 A malignant cell with dysfunctional IFN induction or response has numerous survival and growth advantages. Thus, accumulation of genetic defects in IFN signaling cascades and cancerogenesis go hand in hand. 4 Half of the silencing epigenetic changes associated with immortalization of cells belong to genes involved in IFN pathways. 5 The deletion of such genes is often observed in gliomas, 6 leukemias, 7 and bladder cancer. 8 Low expression of IFN receptors is characteristic of hepatocellular, 9,10 pancreatic, 11 gastric, 12 colon rectal, 13 and many other cancers. 4 Moreover, immune cells of cancer patients often have impaired IFN signaling. 14 Thus, along with the importance of IFN pathways for cellular defense against viruses, defects in these pathways promote viral oncolysis. The purpose of this review is to describe the broad range of these defects along with their role in promotion of malignant cell sensitivity to oncolytic viruses.
DEFECTIVE INDUCTION OF TYPE 1 IFNs IN CANCER CELLS
In normal cells, virus infection triggers an antiviral mechanism which consists of 2 phases. First, as a result of recognition of viral components, expression and secretion of type 1 IFNs are initiated. Interferons then interact with specific receptors and stimulate a second phase, the IFN response. During the second phase, the secretion of IFNs is additionally stimulated by a positive regulatory feedback loop. The induction of IFN production is triggered by a set of molecular events that includes the interaction of viral components with specific cellular receptors and their activation. The activated receptors form a complex with adaptor proteins, and this complex promotes phosphorylation of transcription factors.
Finally, the phosphorylated transcription factors move to the nucleus and initiate the transcription of IFN mRNAs, leading to production and secretion of IFNs. In cancer cells, this chain of molecular events might be broken at different levels and by a variety of mechanisms.
The genes of the IFN induction pathway whose defects have been shown to be associated with sensitivity to viruses are described below and listed in Table 1.
Sensors of viral DNA
Among the PRRs that sense viral DNA, TLR2 is located in the surface membranes of cells, TLR9 is located in endosomal membranes, 34,35 and cGAS is a cytosolic PRR. 47 The type 1 IFN response is triggered by cGAS through activation of secondary messengers (Figure 3). 47 Epigenetic silencing of the cGAS gene was found in some melanoma cell lines; these cell lines were highly susceptible to herpes simplex virus 1 (HSV1) and vaccinia virus. 17
Toll-like receptors that recognize viral RNAs may be located in cell surface membranes (TLR3) or in endosomal membranes (TLR7/8). The sensitivity of primary prostatic adenocarcinoma cell lines to Sendai virus inversely correlates with expression levels of TLR3 and TLR7 mRNA: cell lines with lower expression of these TLRs were more sensitive to virus infection. 16 Retinoic acid-inducible gene-I-like receptors include the protein products of the RIG-I and MDA5 genes. These proteins are RNA helicases that are ubiquitously expressed in most tissues. [53][54][55] Retinoic acid-inducible gene-I is a cytosolic viral RNA-sensing protein that is functionally activated by interaction with ssRNAs containing 5′-triphosphate or with dsRNAs shorter than 1 kb 57,58 (Figure 4). Several studies suggest a contribution of RIG-I deficiency to virus-mediated oncolysis. In multiple myeloma, glioblastoma, and astrocytoma derived cells, the ability of Newcastle disease virus (NDV) to induce the expression of RIG-I is critical for resistance to viral infection. 59 Similar results were obtained in another study, in which normal cells were compared with sarcoma, breast adenocarcinoma, and macrophage-derived tumor cells, and a negative correlation was observed.

[Figure caption: RIG-I, MDA5, and PKR are functionally activated by viral RNAs. After activation, they trigger signaling cascades that result in increased production and secretion of type 1 IFNs. In turn, secreted IFN-beta interacts with intramembrane receptors of the cell in which it was produced or of another cell. Such interaction initiates autocrine or paracrine downstream signaling pathways, resulting in increased binding of IRF1 to promoters of RIG-I, MDA5, and PKR; such binding stimulates transcription of these genes and ultimately increases production of the relevant proteins.]

By these mechanisms (chiefly phosphorylation of eIF2α and the resulting shutdown of translation), activated PKR restrains virus replication.
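Several of the studies above report correlations between sensor expression and viral sensitivity across cell-line panels; the underlying computation is a simple rank correlation. A minimal sketch, using hypothetical values rather than data from the cited studies:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-cell-line values, for illustration only.
tlr3_mrna   = np.array([8.1, 6.4, 5.0, 3.2, 2.1, 1.0])        # relative expression
sensitivity = np.array([0.10, 0.20, 0.35, 0.50, 0.70, 0.90])  # fraction infected

rho, p_value = spearmanr(tlr3_mrna, sensitivity)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")  # strongly negative rho
```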
Similar to the RIG-I and MDA5 genes, transcription of PKR mRNA is activated by type 1 IFNs, forming a self-activating regulatory loop. 50,51 Malignant progression often leads to impaired PKR function. 61 Truncations of the PKR gene have been observed in many leukemia cell lines. 62 Moreover, in most leukocyte samples from patients with chronic B-cell leukemia, PKR enzymatic activity was undetectable despite the presence of a full-length PKR gene. 62 Decreased levels of PKR expression were found in samples obtained from patients with chronic lymphocytic leukemia 63 and non-small cell lung cancer. [64][65][66] Low levels of PKR expression were also associated with a higher incidence of disease relapse and reduced overall survival in patients with rectal cancers. 67 In breast cancer samples, a substantial decrease in PKR functional activity was also observed, despite an increase in its expression. 68 In response to viral infection, cancer cells with low or absent PKR activity could not induce eIF2α phosphorylation and IFN-beta transcription, resulting in increased NDV replication. 21 Ribonuclease L is another cytosolic sensor of viral RNAs. 56 After activation by interaction with viral dsRNA, it cleaves both viral and cellular RNAs. The RNA digestion products activate other viral RNA sensors, such as RIG-I, MDA5, and PKR (Figure 4), which in turn trigger IFN production 53 and/or promote apoptosis. 69 Variants that decrease RNase L activity have been described [70][71][72] and are also associated with a predisposition to prostate cancer. 73 A decrease in RNase L activity was also observed in lung cancer. 74 There is some evidence that impaired RNase L function likewise contributes to cancer cell sensitivity to oncolytic viruses.

The pairing adaptor for cGAS is the stimulator of IFN genes (STING) protein, also known as transmembrane protein TMEM173 or MITA (Figure 3). STING is located on the ER and/or the mitochondria-associated ER membranes. 79 In some tumors, the expression and/or activity of immune adaptor proteins is deregulated.
Thus, STING function is disabled in numerous colorectal adenocarcinomas. This loss of function is highly predictive of HSV1-mediated oncolytic activity. 24 Melanoma cells often lose STING signaling through the epigenetic silencing of either STING or cGAS genes.
Because of this loss, these cells are unable to produce type 1 IFNs in response to DNA viruses and are highly susceptible to HSV1 and vaccinia virus. 17
ANTIVIRAL IRFs
Antiviral IRFs belong to a family of transcription factors that control many cellular processes, including the induction of antiviral cytokines and type I IFNs. Viral nucleic acid sensors such as TLR3/7/8/9, 34 RLRs, 55 PKR, 46 and cGAS, 47 by interacting with adaptor proteins, initiate phosphorylation of IRF3 and/or IRF7 (Figures 3 and 4). 80 This phosphorylation promotes IRF3 and/or IRF7 translocation from cytoplasm to nucleus. In the nucleus, these phosphorylated IRFs trigger transcription of type 1 IFNs, which initiate autocrine and paracrine loops of signal amplification. Interferon-beta stimulates transcription of viral RNA sensors such as RIG-I, 48 MDA5, 49 and PKR 51 by increasing the binding of IRF1 to promoter regions of these genes. Thus, on the one hand, IRFs could be activated by virus-sensing proteins; on the other hand, IRFs promote the signal amplification process, leading to the activation of transcription from the genes encoding the virussensing proteins ( Figure 2). As documented below, aberrant expression of IRFs, which characterizes many cancers, 81 contributes to viral oncolysis.
Interferon regulatory factor 1 is a transcription factor that activates transcription of genes containing IFN-stimulated response elements, among them the viral nucleic acid-sensing genes RIG-I, MDA5, and PKR. Interferon regulatory factor 1 also participates in the upregulation of many genes that restrain cell proliferation and exert antitumor effects. 82 In cancer cells, the transcription-activation function of IRF1 is often disabled. Thus, in acute myeloid leukemias 83 and gastric cancers, 84 the IRF1 gene is frequently deleted or silent; in many breast 85,86 and invasive hepatocellular carcinomas, 87 its expression is low or absent. The phosphatase and tensin homolog (PTEN) acts as a tumor suppressor and participates in IRF3 import into the nucleus, which is required for transcriptional activation of IFN genes. 41 Therefore, it plays a critical role in antiviral innate immunity. 90 The PTEN tumor suppressor gene is commonly inactivated by mutations or deletions in human cancers, 91 resulting in VSV sensitivity. 32
DEFECTIVE IFN RESPONSE IN MALIGNANT CELLS
The type 1 IFN response pathway, schematically shown in Figure 5, is triggered by interaction of either IFN-alpha or IFN-beta with a receptor on the cell surface. The receptor consists of a complex of 2 transmembrane subunits, the products of the IFNAR1 and IFNAR2 genes. 92 After the interaction, the complex activates receptor-associated Janus kinase (JAK) 1 and tyrosine kinase 2. The kinases phosphorylate the STAT1 and STAT2 proteins, which form a dimer and interact with IFN-regulatory factor 9 (also known as p48) to form a trimolecular complex called IFN-stimulated gene factor 3. An impaired IFN response pathway is a common defect in cancer cells. The genes that belong to the pathway whose defects were shown to be associated with sensitivity to viral oncolysis are described below and listed in Table 2.
IFN RECEPTORS
Low expression levels of IFNAR1 and/or IFNAR2 protein and/or mRNA are characteristic of different malignancies, including melanomas, 104 mesotheliomas, 31 and carcinomas of hepatocellular, 9,10 pancreatic, 11 and gastric 12 origin. IFNAR1 expression was also missing in approximately 25% of 48 mesothelioma tumor biopsies. 31 In addition, low IFNAR protein levels are common in various bladder cancer cell lines and in clinical samples of bladder tumors. IFNAR expression level correlated with tumor grade: the content of IFNAR was relatively high in more differentiated tumors and relatively low in less differentiated tumors. 96 IFNAR expression levels also correlated with cell sensitivity to viral infection. It was shown that in primary macrophages, deletion of the IFNAR1 gene was associated with NDV susceptibility. 60 In mesothelioma cell lines that were sensitive to VSV, IFNAR1 and/or IFNAR2 expression was significantly downregulated or undetectable. 31 In bladder carcinoma cell lines, an inverse correlation was found between the inhibition of cellular proliferation induced by type 1 IFNs and sensitivity to viral infection. 101 The mechanism that underlies this sensitivity is not yet known, and it is not clear whether the mutation is associated with downregulation of genes involved in type 1 IFN induction or response pathways.

[Figure 5. IFN-alpha and IFN-beta response pathways. The type 1 IFN response is triggered by a cell surface receptor formed by a complex of 2 transmembrane subunits, IFNAR1 and IFNAR2. After interaction with IFN-alpha or IFN-beta, the complex activates receptor-associated Janus kinase 1 (JAK1) and tyrosine kinase 2 (TYK2). The kinases phosphorylate the signal transducer and activator of transcription 1 (STAT1) and STAT2 proteins. These phosphorylated proteins, in the form of a dimer, interact with IFN-regulatory factor 9 (IRF9, also known as p48) and form a trimolecular complex called IFN-stimulated gene factor 3, which relocates from the cytoplasm to the nucleus, where it activates transcription of interferon-stimulated genes. Interferon-stimulated genes encode a family of proteins that inhibit multiple stages of viral infection, including virus entry, translation, replication, assembly, and spread. Malfunction of IFNAR1, IFNAR2, JAK1, STAT1, STAT2, and/or IRF9 disrupts transcription of IFN-stimulated genes, making cancer cells vulnerable to oncolytic virus infection.]
IFN STIMULATED GENES
The products of the MX genes are proteins with a protective function against both RNA and DNA viruses. Most mammals have two MX genes, MX1 and MX2, whose protein products are called MxA and MxB. 114 Mx proteins bind GTP and act as GTPases. 115 In the pleural mesothelioma cell lines that were most sensitive to VSV, neither basal nor IFN-beta-stimulated expression of MxA, PKR, or OAS was detected. However, in cell lines that were less sensitive to VSV, IFN-beta treatment stimulated transcription from these genes to various degrees. 31 Thus, the IFN-beta treatment response, in the form of induction of MxA, PKR, and OAS transcription, correlates inversely with VSV sensitivity.
A relationship was also found between the expression of GBP1, XAF1, and/or EPSTI1 genes and viral oncolysis in pancreatic ductal adenocarcinoma cells. These genes were constitutively expressed in VSV resistant cells and were not expressed at all in VSV-sensitive cells. 102
CONFLICT OF INTEREST
The authors declare that there is no conflict of interest regarding the publication of this article.
Assessment of Malt Barley Genotypes for Grain Yield and Malting Quality Traits in the Central Highlands of Ethiopia
A multi-location variety trial was conducted with the objective of identifying malt barley varieties that satisfy the quality requirements of the malting and brewing industry and reduce the cost of importing malt barley. The trial evaluated the performance of 28 promising malt barley genotypes in seven environments (site-season combinations) during the 2017 and 2018 cropping seasons. The phenological and agronomic data collected were subjected to analysis of variance (ANOVA), using a mixed-effects model, in the 'R' software. All trait analyses showed significant genotype, environment, and genotype-by-environment interaction effects. Genotype HB 52 X Bahati (G-5) exhibited a high mean grain yield (5128 kg ha-1), significantly different from that of one of the improved checks (Traveller). Another promising genotype, Bekoji-1 X Grace (G-8), showed acceptable malt quality results for extract (81.8%), protein (10.0%), and friability (85%), a low beta-glucan value (369 mg/l), and grain yield comparable to the improved checks (Traveller, HB1963). Similarly, genotype Sabini X Beka (G-7) showed acceptable malt quality results with a low beta-glucan level (287 mg/l). The "which-won-where" and "mean vs. stability" views of the GGE biplot showed that G-5 exhibited high mean grain yield and moderate grain yield stability and was the winning genotype in all test environments. Overall, G-8 is identified as a potential malt barley candidate variety to be tested for additional industrial malt quality traits prior to the variety verification trial, and G-5 can be considered a potential parent in the malt barley crossing program for its good yield potential.
INTRODUCTION
Barley is one of the most important cereal crops widely grown in the highlands of Ethiopia, with an annual production of about 2.0 million tons cultivated on about 1 million hectares of land and an average national yield of 2.16 tons/ha (CSA, 2018). In the barley-based farming systems of the central highlands, smallholder farmers have very few alternative crops; one source of income could be growing malting barley, which has dependable local buyers in the country (Bayeh and Berhane, 2011). Food and malt barley are grown side by side in similar agro-ecologies. However, malt barley, the major input for beer production, accounts for roughly 15-20% of total barley production (Berhane et al., 2016). Traditionally, both six-row and two-row barley types are cultivated in the country, but the best malt quality for beer is produced from two-row varieties.
Malt barley is a high-opportunity cash crop with great room for profitable expansion, particularly when connected to growing breweries. However, there is a shortage of quality malt barley varieties to meet the demand of the local breweries, which has forced the malt factories to import large quantities of malt barley from abroad. The gap between domestic supply and demand indicates an opportunity to enhance local production and substitute imports through the huge untapped malt barley potential in the country. Malt imports have grown tremendously, reaching over 75 thousand tons in 2017, covering about 70% of total annual demand and costing the country about 41.5 million USD (ERCA, 2017).
Malt barley production has not expanded as expected, despite the country's potential to grow malting barley of both adequate quality and quantity. There is a relatively large domestic market for malt of reasonable quality, and a large number of farmers in the highlands of Ethiopia could commit part of their barley area to malt barley production. Even though barley grows in many highland regions of the country, the adoption of malt barley varieties is limited to the Arsi highlands and, to a lesser extent, Bale, where farmers can sell their produce to the Asela malt factory and to emerging brewery companies such as Heineken, Diageo, and Dashen. The lack of adoption of malt barley in other highland barley-producing areas is due to limited extension activities by the Ministry of Agriculture and relevant organizations. Currently, there is an attempt to promote malt barley production in the central highlands of the Oromia and Amhara regions to provide malt to breweries through contractual production. Therefore, improving the knowledge and skill of farmers by demonstrating new malt barley varieties would be vital to increase production and productivity and fill the existing supply gap in the country. In addition, the quality demanded by the breweries and the malt factory is a benchmark for the malt barley breeding program. However, most of the nationally released malt barley varieties do not satisfy all the requirements, which shows the importance of breeding activities aimed at releasing malt barley varieties that satisfy the quality parameters demanded by the breweries and the malt factories.
The National Agricultural Research System (NARS) has developed more than 15 malt barley varieties over the last three decades in collaboration with international organizations. This paper presents the results of a malt barley multi-location variety trial conducted in seven environments. The aim was to select high-yielding, agronomically superior varieties with good malting quality traits and disease and pest resistance, and to promote the best-performing varieties for release or incorporate them into the breeding pipeline as recipients/donors of useful genes.
MATERIALS AND METHODS
Twenty-eight malt barley genotypes were evaluated in two sets of experiments in the 2017 and 2018 cropping seasons; twelve of these genotypes were evaluated repeatedly in both years, whereas the remaining sixteen appeared only once in either year (Table 1). Therefore, twenty materials were tested each year using a randomized complete block design (RCBD) with three replications. The experiments were carried out at the Holetta (9°00'N, 38°38'E, elevation 2400 m), Bekoji (7°15'N, 39°15'E, elevation 2830 m), Kofele (7°00'N, 38°45'E, elevation 2700 m) and Debre Berhan (9°41'N, 39°32'E, elevation 2800 m) experimental sites, in a non-orthogonal set of seven environments (site-season combinations). Twenty-two of the experimental materials were selected from the 2016 and 2017 malt barley preliminary variety trials, and the other six genotypes were included as checks (Table 1).
Data were recorded on the following phenological and agronomic traits: days to 50% heading, days to 50% maturity, plant height (cm), hectoliter weight (kg hl-1), thousand kernel weight (g), and grain yield (kg ha-1) from the four central rows. Plot yields were adjusted to 12.5% moisture content and converted to kilograms per hectare. Disease data were recorded for scald and net blotch on a 0-9 scale and converted to percentage data, where 0 = 0%, 1 = 3%, 2 = 12%, 3 = 25%, 4 = 42%, 5 = 58%, 6 = 75%, 7 = 88%, 8 = 97%, 9 = 100%, before being subjected to the angular (arcsine) transformation for statistical analysis, as sketched below. These traits were subjected to analysis of variance using R software (R Core Team, 2017). In the analysis, the environments were considered random and the genotypes fixed effects, and a mixed-effects model ANOVA was used.
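A minimal sketch of that conversion and transformation (the scale-to-percentage map is from the text; the arcsine square-root transform is the standard angular transformation):

```python
import numpy as np

# 0-9 severity scale -> percentage, as given in the text.
SCALE_TO_PCT = {0: 0, 1: 3, 2: 12, 3: 25, 4: 42, 5: 58, 6: 75, 7: 88, 8: 97, 9: 100}

def angular(scores):
    """Arcsine square-root (angular) transform of 0-9 scores, in degrees."""
    pct = np.array([SCALE_TO_PCT[s] for s in scores], dtype=float)
    return np.degrees(np.arcsin(np.sqrt(pct / 100.0)))

print(angular([0, 3, 5, 9]))  # e.g. 25% -> 30 degrees, 100% -> 90 degrees
```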
The individual-environment and combined analyses of variance were conducted following the model suggested by Singh and Ceccarelli (1996):
$$Y_{ij} = \mu + G_i + B_j + e_{ij}$$
for a single environment, and
$$Y_{ijk} = \mu + G_i + E_j + B_{k(j)} + GE_{ij} + e_{ijk}$$
for the combined analysis, where $Y_{ij}$ is the observed value of genotype $i$ in block $j$, $Y_{ijk}$ is the observed value of genotype $i$ in block $k$ of environment $j$, $\mu$ is the grand mean of the experiment, $G_i$ is the effect of genotype $i$, $B_j$ is the effect of block $j$, $B_{k(j)}$ is the effect of block $k$ in environment $j$, $e_{ij}$ is the error effect of genotype $i$ in block $j$, $E_j$ is the environment effect, $GE_{ij}$ is the interaction effect of genotype $i$ with environment $j$, and $e_{ijk}$ is the error (residual) effect of genotype $i$ in block $k$ of environment $j$. GGE biplots were computed on grain yield to determine the stability of the genotypes, using the GGEBiplotGUI package of R software (R Core Team, 2017).
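As an illustration of this genotype-by-environment analysis, here is a minimal fixed-effects sketch in Python on hypothetical data (the paper itself used a mixed-effects ANOVA in R, with environments treated as random):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)

# Hypothetical yields (kg/ha) for 4 genotypes x 3 environments x 3 blocks.
rows = [
    {"genotype": g, "env": e, "block": b,
     "yield_kg_ha": 4000 + 300 * g - 200 * e + 150 * g * e + rng.normal(0, 250)}
    for g in range(4) for e in range(3) for b in range(3)
]
df = pd.DataFrame(rows)

# Two-way ANOVA with blocks nested within environments; all effects fixed here.
model = ols("yield_kg_ha ~ C(genotype) * C(env) + C(env):C(block)", data=df).fit()
print(sm.stats.anova_lm(model, typ=1))  # G, E, G x E, and nested-block mean squares
```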
The malt quality traits, namely extract content [% DM], protein content [% DM], friability [%], and ß-glucan content [mg/L], of selected genotypes were analyzed using the wet chemistry method at the German malt quality laboratory "Versuchs- und Lehranstalt für Brauerei in Berlin" on malted grain, following the appropriate procedures. Malt extract content was determined according to a small-scale version of the European Brewery Convention (EBC) Methods Manual, Section 4.9.1 (European Brewery Convention, 1998). Additionally, grain samples of all genotypes included in this study were analyzed at the Holetta quality laboratory by near-infrared spectroscopy (NIRS) using a Bruker Tango instrument.
RESULTS AND DISCUSSION
The combined analysis of variance across seven environments showed highly significant variation among the tested malt barley genotypes for all quantitative traits considered. The differences among the environments were statistically significant for all traits. Similarly, the genotype-by-environment interaction was significant for all traits (Table 2; trait abbreviations: DHE = days to heading, DMA = days to maturity, PLH = plant height (cm), SC = scald (%), NB = net blotch (%), TKW = thousand kernel weight (g), HLW = hectoliter weight (kg hl-1), GYLD = grain yield (kg ha-1); *, ** significant at the 5% and 1% probability levels, ns = non-significant; § traits not recorded at DB18, whose mean squares are angular-transformed values). This showed that the genotypes were not affected equally by the environments. This G x E interaction effect complicates selection and requires stability analysis to identify genotypes that are relatively stable across environments while providing high yield. Among the tested genotypes, G5 (HB 52 x Bahati) showed the highest mean grain yield, although it was not significantly different from some of the newly tested genotypes and check varieties (IBON 174/03, HB 1963, HB 1964); its mean grain yield was, however, significantly higher than that of the registered European varieties (Explorer and Traveller). Most of the genotypes (G21-G26) substituted during the 2018 cropping season had mean grain yields comparable to G5, but it should be noted that these genotypes were evaluated at only three environments (BK18, KF18 and DB18). Similarly, among the test genotypes evaluated in all environments, G6 (IBON 174/03 x Traveller) scored a better mean grain yield, whereas the introduced malt barley materials (G13-G19) scored the lowest mean grain yield values. The maximum hectoliter weight (HLW) was recorded for the check variety G11 (HB1963), genotype G8 (Bekoji-1 x Grace) and G26 (MBHIBYT-23). Significantly higher mean TKW values were recorded for HB1964 (56.6 g) and KWS-Eileen (52.6 g). On the other hand, most introduced malt barley genotypes (viz. G13, G14, G15, G16, G17 and G19) had high mean scald values of 67, 65, 69, 63, 67 and 72%, respectively; this may be because these materials were developed under environmental conditions different from those of Ethiopia (Table 3). Materials derived from the crossing program had relatively better tolerance to scald: G1, G3, G4, G5, G6, G7 and G8 scored mean scald values of 53, 46, 45, 38, 51, 48 and 54%, respectively.
Regarding net blotch, most of the tested materials showed moderate resistance. Interestingly, the genotypes with higher scald values showed lower net blotch scores; for example, G15, G22 and G27 scored 69, 80 and 74% for scald but only 9, 23 and 10% for net blotch, respectively. This may be due to the confounding effect of scald on net blotch. Plant height showed consistently large variation among the malt barley genotypes. Most foreign materials had short plant height; in contrast, G7 showed a high mean plant height of 116 cm, followed by G8 (115 cm). G6 (IBON 174/03 x Traveller) and G9 (IBON 174/03) were relatively early, whereas G-17 (KWS-Sassy), G14 (KWS-Eileen), G-18 (KWS-Canton) and the other European introductions were late in days to maturity. Generally, among the malt barley genotypes tested in all environments, HB 52 x Bahati and IBON 174/03 x Traveller showed a grain yield advantage over the recently released check varieties (HB 1963 and HB 1964) and better disease resistance (Table 3). Similarly, Bekoji-1 x Grace had a mean grain yield comparable to the standard checks (HB 1963, Traveller) and high mean values for the grain physical quality parameters (TKW and HLW). In addition, the newly inserted genotypes (G21-G26) showed grain yields similar to the recent check varieties; to confirm their performance over years, these genotypes will be evaluated again in the 2019/20 cropping season.

Individual-environment mean grain yield and malt quality parameter values of the tested malt barley genotypes are presented in Table 4. HB 52 x Bahati (G5) showed high mean grain yield values across most environments, ranging from 3614 to 7026 kg ha-1. Besides grain yield, the genotype had fairly good malt quality, recording 80.7% for extract, 8.3% for protein, 72% for friability and 699 mg/l for beta-glucan. Similarly, the check variety HB1963 exhibited the next highest mean grain yield at most test environments and also showed good values for the malt quality traits, confirming that this variety is an alternative malting barley genotype for the malting industry. IBON 174/03 x Traveller also had high mean grain yield values across the test environments, but exhibited relatively poor malting quality. On the other hand, Bekoji-1 x Grace and Sabini x Beka showed premium malt quality: they scored 81.8 and 81.5% for extract, 10.0 and 9.8% for protein, 85 and 78% for friability, and 369 and 287 mg/l for beta-glucan, respectively. These traits are the most important and relevant for the malt factories and breweries, so the breeders are working on improving them. G8 and G7 scored mean grain yields ranging from 2915-5090 kg ha-1 and 2804-6248 kg ha-1 across the test environments, together with the high malt quality standard (Table 4). Moreover, G3 showed good values for all malt quality parameters; however, unlike for the other genotypes, these values were recorded using the NIRS technique, so they should be further confirmed by the wet chemistry method before drawing solid conclusions.
Consequently, even though HB 52 x Bahati and IBON 174/03 x Traveller were higher in grain yield performance, we can recommend genotype Bekoji-1 x Grace instead, for its premium malt quality and acceptable grain yield potential.

[Table 4. Individual-location average grain yield and malt quality trait performance of the 12 malt barley genotypes.]

[Figure 1. The which-won-where view of the GGE biplot of grain yield of the malt barley genotypes based on the G × E data.]

The GGE biplot explained 72% of the grain yield variation due to GGE (Figure 2). The "mean vs. stability" view of the GGE biplot is an efficient tool to compare genotypes based on mean performance and stability across environments within a mega-environment (Yan et al., 2007); this view is presented in Figure 2. G-5 showed a higher mean grain yield than the other test genotypes and had moderate stability. The check varieties G-9 (IBON 174/03) and G-11 (HB 1963) and the test genotype G-6 (IBON 174/03 x Traveller) had the next highest mean grain yields, and these genotypes showed good stability. On the other hand, among the genotypes with high malt quality, G8 (Bekoji-1 x Grace) showed relatively better stability (Figure 2).
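The GGE biplot underlying these views is, computationally, a singular value decomposition of the environment-centered genotype × environment table. A minimal sketch with a hypothetical yield matrix:

```python
import numpy as np

# Hypothetical genotype x environment mean-yield matrix (rows: genotypes).
Y = np.array([[5100, 4300, 6900],
              [4700, 4100, 6200],
              [4200, 3900, 5500],
              [3900, 3600, 5200]], dtype=float)

G = Y - Y.mean(axis=0)             # remove environment main effects -> G + GE
U, s, Vt = np.linalg.svd(G, full_matrices=False)
geno_scores = U[:, :2] * s[:2]     # genotype coordinates on PC1/PC2
env_loadings = Vt[:2].T            # environment coordinates
explained = (s[:2] ** 2) / (s ** 2).sum()
print(geno_scores)
print(f"PC1+PC2 explain {explained.sum():.0%} of G+GE")  # basis of both biplot views
```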
CONCLUSIONS
The major finding of this study is that G-8 (Bekoji-1 x Grace) showed premium malt and physical grain quality, meeting the standards of the malt industry, which in most cases are missing in our elite varieties. The other genotype, G-5 (HB 52 x Bahati), showed a significantly higher mean grain yield and good malt quality traits, except for friability. In the GGE biplot, G-5 was the winning genotype in all test environments; moreover, the "mean vs. stability" view indicated that G-5 recorded the highest mean grain yield. In terms of stability, the high-yielding genotype (G-5) and the high-quality genotype (G-8) showed moderate stability across test environments. Overall, genotype G-8 is identified as a potential malt barley candidate variety for further malt quality testing prior to the variety verification trial. Genotype G-5 is included in the crossing block as a potential donor parent for its high yield performance across the test environments. In addition, G-8 (Bekoji-1 x Grace) and G-7 (Sabini x Beka) are recommended as potential parents for the malt barley crossing program for their good malt quality traits (high malt extract and low beta-glucan).
Drawing Planar Graphs with Reduced Height
A polyline (resp., straight-line) drawing Γ of a planar graph G on a set $L_k$ of k parallel lines is a planar drawing that maps each vertex of G to a distinct point on $L_k$ and each edge of G to a polygonal chain (resp., straight line segment) between its corresponding endpoints, where the bends lie on $L_k$. The height of Γ is k, i.e., the number of lines used in the drawing. In this paper we establish new upper bounds on the height of polyline drawings of planar graphs using planar separators. Specifically, we show that every n-vertex planar graph with maximum degree Δ, having an edge separator of size λ, admits a polyline drawing with height 4n/9 + O(λ), where the previously best known bound was 2n/3. Since λ ∈ O(√(nΔ)), this implies the existence of a drawing of height at most 4n/9 + o(n) for any planar triangulation with Δ ∈ o(n). For n-vertex planar 3-trees, we compute straight-line drawings with height 4n/9 + O(1), which improves the previously best known upper bound of n/2. All these results can be viewed as an initial step towards compact drawings of planar triangulations via choosing a suitable embedding of the graph.
Introduction
A polyline drawing of a planar graph G is a planar drawing of G such that each vertex of G is mapped to a distinct point in the Euclidean plane, and each edge is mapped to a polygonal chain between its endpoints. Let $L_k = \{l_1, l_2, \ldots, l_k\}$ be a set of k horizontal lines such that for each $i \le k$, line $l_i$ passes through the point (0, i). A polyline drawing of G is called a polyline drawing on $L_k$ if the vertices and bends of the drawing lie on the lines of $L_k$. The height of such a drawing is k, i.e., the number of parallel horizontal lines used by the drawing. Such a drawing is also referred to as a k-layer drawing in the literature [21,25].
Let Γ be a polyline drawing of G. We call Γ a t-bend polyline drawing if each of its edges has at most t bends. Thus a 0-bend polyline drawing is also known as a straight-line drawing. G is called a planar triangulation if every face of G is bounded by a cycle of three vertices. Figure 1(a) shows an example of a planar graph G.

Drawing planar graphs on a small integer grid is an active research area in graph drawing [4,9,17,24,15], motivated by the need for compact layout of VLSI circuits and visualization of software architecture. In visualization applications, the constraint on area is imposed naturally by the size of the display screen. For VLSI circuit layout, compact drawings reduce the microchip area. Minimizing area often requires the edges to have bends. Since simultaneously optimizing the width and height of the drawing is very challenging, researchers have also focused their attention on optimizing one dimension of the drawing [7,18,21,25], while the other dimension is unbounded.
In this paper we develop new techniques that can produce drawings with small height. We distinguish between the terms 'plane' and 'planar'. A plane graph is a planar graph with a fixed combinatorial embedding and a specified outer face. While drawing a planar graph, we allow the output to represent any planar embedding of the graph. On the other hand, while drawing a plane graph, the output is further constrained to respect the input embedding.
Related Work: State-of-the-art algorithms that compute straight-line drawings of n-vertex plane graphs on a grid of size $O(n) \times 2n/3$ imply an upper bound of 2n/3 on the height of straight-line drawings [6,7]. This bound is tight for plane graphs, i.e., there exist n-vertex plane graphs, such as plane nested triangles graphs and some plane 3-trees, that require a height of 2n/3 in any of their straight-line drawings [12,22]. Recall that an n-vertex nested triangles graph is a plane graph formed by a sequence of n/3 vertex-disjoint cycles $C_1, \ldots, C_{n/3}$, where for each $i \in \{2, \ldots, n/3\}$, cycle $C_i$ contains the cycles $C_1, \ldots, C_{i-1}$ in its interior, together with a set of edges that connect each vertex of $C_i$ to a distinct vertex in $C_{i-1}$. Besides, a plane 3-tree is a triangulated plane graph that can be constructed by starting with a triangle, and then repeatedly adding a vertex to some inner face of the current graph and triangulating that face.
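To make these two graph classes concrete, here is a small sketch constructing both with the networkx library (the helpers are hypothetical and build only the abstract graphs; the plane embedding is left implicit):

```python
import networkx as nx
import random

def nested_triangles_graph(levels):
    """Nested triangles graph: `levels` vertex-disjoint triangles C_1..C_levels,
    with each vertex of C_i connected to a distinct vertex of C_(i-1)."""
    g = nx.Graph()
    for i in range(levels):
        a, b, c = 3 * i, 3 * i + 1, 3 * i + 2
        g.add_edges_from([(a, b), (b, c), (c, a)])      # cycle C_(i+1)
        if i > 0:
            for j in range(3):                          # matching to C_i
                g.add_edge(3 * i + j, 3 * (i - 1) + j)
    return g

def plane_3_tree(n):
    """Plane 3-tree: start from a triangle, then repeatedly insert a vertex
    into a (randomly chosen) inner face and triangulate that face."""
    g = nx.Graph([(0, 1), (1, 2), (2, 0)])
    faces = [(0, 1, 2)]                                 # inner faces as vertex triples
    for v in range(3, n):
        a, b, c = faces.pop(random.randrange(len(faces)))
        g.add_edges_from([(v, a), (v, b), (v, c)])
        faces += [(a, b, v), (b, c, v), (a, c, v)]      # three new inner faces
    return g

g = nested_triangles_graph(4)        # n = 12 vertices
h = plane_3_tree(10)
print(g.number_of_nodes(), h.number_of_edges())         # 12, and 3n - 6 = 24
```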
The 2n/3 upper bound on the height is also the currently best known bound for polyline drawings, even for planar graphs, i.e., when we are allowed to choose a suitable embedding for the output drawing. In the variable embedding setting, Frati and Patrignani [17] showed that every n-vertex nested triangles graph can be drawn with height at most n/3 + O(1), which is significantly smaller than the lower bound of 2n/3 in the fixed embedding setting. Zhou et al. [28] showed that series-parallel graphs can be drawn with 0.3941n² area, and hence with height 0.628n < 2n/3. Similarly, Hossain et al. [18] showed that a universal set of n/2 horizontal lines can support all n-vertex planar 3-trees, i.e., every planar 3-tree admits a drawing with height at most n/2. They also showed that 4n/9 lines suffice for some subclasses of planar 3-trees, and asked whether 4n/9 is indeed an upper bound for planar 3-trees.
In the context of optimization, Dujmović et al. [13] gave fixed-parameter tractable (FPT) algorithms, parameterized by pathwidth, to decide whether a planar graph admits a straight-line drawing on k horizontal lines. Drawings with minimum number of parallel lines have been achieved for trees [21]. Recently, Biedl [3] gave an algorithm to approximate the height of straight-line drawings of 2-connected outerplanar graphs within a factor of 4. Several researchers have attempted to characterize planar graphs that can be drawn on few parallel lines [8,16,26].
Contributions:
In this paper we show that every n-vertex planar graph with maximum degree ∆, having an edge separator of size λ, admits a drawing with height 4n/9 + O(λ), which is better than the previously best known bound of 2n/3 for any λ ∈ o(n). This result is an outcome of a new application of the planar separator theorem [10]. The resulting drawing is not a grid drawing, i.e., the vertices and bends are not restricted to lie on integer grid points, and it is not obvious whether our technique can be immediately adapted to improve the current best 8n²/9-area upper bound [6] on the grid drawings of planar graphs.
However, the techniques developed in this paper have the potential to provide powerful tools for computing compact drawings for planar triangulations in the variable embedding setting.
If the input graphs are restricted to planar 3-trees, then we can improve the upper bound to 4n/9 + O(1), which settles the question of Hossain et al. [18] affirmatively. Furthermore, the drawing we construct in this case is a straight-line drawing.
Preliminary Definitions and Results
Let G be an n-vertex plane graph. G is called connected if there exists a path between every pair of vertices in G. We call G a k-connected graph if the removal of fewer than k vertices does not disconnect the graph. A plane graph delimits the plane into topologically connected regions called faces. The bounded regions are called the inner faces and the unbounded region is called the outer face of G. The vertices on the boundary of the outer face are called the outer vertices, and the remaining vertices are called the inner vertices of G. If every face of G (including the outer face) is a cycle of length three, then we call G a triangulation, or a maximal planar graph. G is called an internally triangulated graph if every face except the outer face is a cycle of length three. Let G = (V, E) be an n-vertex triangulated plane graph. A simple cycle C in G is called a cycle separator if the interior and the exterior of C each contains at most 2n/3 vertices. An edge separator of G is a subset of edges M of G such that the graph G′ = (V, E \ M) consists of two induced subgraphs, each containing at most 2n/3 vertices. Every planar graph with maximum degree ∆ admits an edge separator of size 2√(2∆n), where the corresponding edges in the dual graph form a simple cycle [10].

Let v_1, v_n and v_2 be the outer vertices of G in clockwise order on the outer face. Let σ = (v_1, v_2, ..., v_n) be an ordering of all vertices of G. By G_k, 2 ≤ k ≤ n, we denote the subgraph of G induced by v_1, v_2, ..., v_k. For each G_k, the notation P_k denotes the path (while walking clockwise) on the outer face of G_k that starts at v_1 and ends at v_2. We call σ a canonical ordering of G with respect to the outer edge (v_1, v_2) if for each k, 3 ≤ k ≤ n, the following conditions are satisfied [9]: (a) G_k is 2-connected and internally triangulated. (b) If k ≤ n, then v_k is an outer vertex of G_k and the neighbors of v_k in G_{k−1} are consecutive on P_{k−1}.

Let P_k, for some k ∈ {3, 4, ..., n}, be the path w_1(= v_1), ..., w_l, v_k(= w_{l+1}), w_r, ..., w_t(= v_2). The edges (w_l, v_k) and (v_k, w_r) are the l-edge and r-edge of v_k, respectively. The other edges incident to v_k in G_k are called the m-edges.
For example, in Figure 2(c), the edges (v_6, v_1), (v_6, v_4), and (v_5, v_6) are the l-, r- and m-edges of v_6, respectively. Let E_m be the set of all m-edges in G. Then the graph T_{v_n} induced by the edges in E_m is a tree with root v_n. Similarly, the graph T_{v_1} induced by all l-edges except (v_1, v_n) is a tree rooted at v_1, and the graph T_{v_2} induced by all r-edges except (v_2, v_n) is a tree rooted at v_2.
These three trees form the Schnyder realizer [24] of G, e.g., see Figure 2(a).

Lemma 1 (Bonichon et al. [5]) The total number of leaves in all the trees in any Schnyder realizer of an n-vertex triangulation is at most 2n − 5.
Let G be a planar graph and let Γ be a straight-line drawing on k parallel lines. By l(v), where v is a vertex of G, we denote the horizontal line in Γ that passes through v. We now have the following lemma that bounds the height of a straight-line drawing in terms of the number of leaves in a Schnyder tree.
Although the lemma can be derived from known straight-line [6] and polyline drawing algorithms [4], we include a proof for completeness.
Lemma 2 Let G be an n-vertex plane triangulation and let v_1, v_n, v_2 be the outer vertices of G in clockwise order on the outer face. Assume that T_{v_n} has at most p leaves. Then for any placement of v_n on line l_1 or l_{p+2}, there exists a straight-line drawing Γ of G on L_{p+2} such that v_2 and v_1 lie on lines l_{p+2} and l_1, respectively. Symmetrically, there exists a straight-line drawing Γ′ of G on L_{p+2} such that v_1 and v_2 lie on lines l_{p+2} and l_1, respectively.
Proof: We construct Γ by a variant of the shift algorithm [9]. The case when G has n = 3 vertices is straightforward, and hence we assume that n > 3. The construction of Γ is incremental. We start with the drawing of G_3 and then add the other vertices in the canonical order corresponding to T_{v_n}. Let Γ_3 be the drawing of G_3 on L_3, where v_1 and v_2 are placed on l_1 and l_3, respectively, along a vertical line, and v_3 is placed on l_2 to the left of edge (v_1, v_2), e.g., see Figure 3(b). We now add the vertices v_i, where 3 < i < n, maintaining the following invariants: (a) P_i is drawn as a strictly y-monotone polygonal chain.
Observe that Γ_3 maintains all the above invariants. We now assume that i > 3 and for all j < i, Γ_j maintains the above invariants, and consider the insertion of v_i. Let w_p, ..., w_q be the neighbors of v_i on P_{i−1}. If q − p ≥ 2, then v_i is a non-leaf vertex in T_{v_n}. In this case we place v_i on l(w_{q−1}) and add the edges (v_i, w), where w ∈ {w_p, ..., w_q}. Since P_{i−1} is strictly y-monotone, we can place v_i sufficiently far from w_{q−1} to the left such that the edges (v_i, w) do not create any edge crossing, and P_i is strictly y-monotone in Γ_i. Figures 3(d)-(e) illustrate such a scenario. Since the number of leaves in v_3, ..., v_i is the same as the number of leaves in v_3, ..., v_{i−1}, Invariants (a)-(c) hold in Γ_i.
In the remaining case, q − p = 1, i.e., v_i is a leaf in T_{v_n}. Here we shift the vertices w_q, ..., w_t(= v_2) and their descendants in T_{v_n} up by one unit from their current positions. Such a shift does not create edge crossings [9]. Figures 3(b)-(c),(f) illustrate such a scenario. We then place v_i on l(w_q) − 1 sufficiently far to the left such that the edges (v_i, w_p) and (v_i, w_q) do not create any edge crossing, and P_i is strictly y-monotone in Γ_i. Since the number of leaves in v_3, ..., v_i is one more than the number of leaves in v_3, ..., v_{i−1}, Invariants (a)-(c) hold in Γ_i.
Since P_{n−1} is strictly y-monotone in Γ_{n−1}, there exists a point c on l_1 (similarly, on l_{p+2}) which is visible to all the vertices on P_{n−1}. We place v_n at c, and draw the edges incident to it, which completes the drawing of G.
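Shift-style constructions of this kind are also available in standard libraries. As a rough illustration only (networkx is assumed; its planar_layout function computes a crossing-free straight-line drawing via a shift-algorithm variant, not the leaf-sensitive construction of Lemma 2), the height of a computed drawing can be read off as the number of distinct y-coordinates:

    import networkx as nx

    G = nx.octahedral_graph()  # a small planar triangulation
    pos = nx.planar_layout(G)  # node -> (x, y), crossing-free straight lines

    # The height is the number of distinct horizontal lines carrying a vertex.
    height = len({round(float(y), 9) for (_, y) in pos.values()})
    print("height:", height)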
Chrobak and Nakano [7] showed that every planar graph admits a straight-line drawing with height 2n/3. We now observe some properties of Chrobak and Nakano's algorithm [7]. Let G be a plane triangulation with n vertices and let x, y be two prescribed outer vertices of G in clockwise order on the outer face of G. Let Γ be the drawing of G produced by the algorithm of Chrobak and Nakano [7]. Then Γ has the following properties:

(CN1) Γ is a drawing on L_q, where q ≤ 2n/3.

(CN2) For the vertices x and y, we have l(x) = l_1 and l(y) = l_q in Γ. The remaining outer vertex z lies on either l_1 or l_q.

Note that the placement of z cannot be prescribed to the algorithm, i.e., the algorithm may produce a drawing where l(x) = l_1, l(y) = l_q and l(z) = l_1; however, this does not imply that there exists another drawing where l(x) = l_1, l(y) = l_q and l(z) = l_q. We end this section with the following lemma.
Lemma 3 Let G be a plane graph and let Γ be a straight-line drawing of G on a set L_k of k horizontal lines, where the lines are not necessarily equally spaced. Then there exists a straight-line drawing Γ′ of G on a set of k horizontal lines that are equally spaced. Furthermore, for every i ∈ {1, 2, ..., k}, the left to right order of the vertices on the ith line in Γ′ coincides with that of Γ.

Proof: A flat visibility drawing of G on L_k maps each vertex of G to a distinct horizontal interval on some horizontal line of L_k, and each edge of G to a horizontal or vertical line segment between the corresponding intervals. Given a straight-line drawing Γ of G on L_k, it is straightforward to transform Γ into a flat visibility drawing D on L_k such that for every i ∈ {1, 2, ..., k}, the left to right order of the vertices on the ith line in Γ coincides with that of D, and for every vertex v in D, the clockwise ordering of the edges around v coincides with the ordering in Γ. One way to construct such a drawing D is to direct the edges of Γ from bottom to top, and then draw the directed paths in a depth-first search order from left to right. Figures 2(d)-(g) illustrate such a construction. In fact, this construction is inspired by the technique for computing visibility representations of planar graphs, as described in [27,1].

We now adjust the length of the vertical edges so that the layers in D become equally spaced. Biedl [2] showed that such a drawing D can be transformed to the required straight-line drawing Γ′, where for every i ∈ {1, 2, ..., k}, the left to right order of the vertices on the ith line in D coincides with that of Γ′.
In the following sections we describe our drawing algorithms. For simplicity we often omit the floor and ceiling functions while defining different parameters of the algorithms. One can describe a more careful computation using proper floor and ceiling functions, but that does not affect the asymptotic results discussed in this paper.
Drawing Triangulations with Small Height
Every planar triangulation has a simple cycle separator of size O(√n) [11]. In the preliminary version of this paper [14], we used this result to prove that every n-vertex planar graph with maximum degree ∆ ∈ o(√n) admits a 4-bend polyline drawing with height at most 4n/9 + o(n). In this section we use edge separators, and prove that every planar graph with ∆ ∈ o(n) can be drawn with 3 bends per edge and height at most 4n/9 + o(n).
We first present an overview of our algorithm, and then describe the algorithmic details.
Algorithm Overview
Let G = (V, E) be an n-vertex planar graph, where n ≥ 9, and let Γ be a planar drawing of G on the Euclidean plane. Without loss of generality assume that G is a planar triangulation. Let M ⊆ E be an edge separator of G such that the corresponding edges in the dual graph G* form a simple cycle C*, and let V_o ⊆ V (respectively, V_i ⊆ V) be the vertices that lie outside (respectively, inside) of C*. Diks et al. [10] proved that there always exists such an edge separator. Let G_i and G_o be the subgraphs of G induced by the vertices of V_i and V_o, respectively. Since n ≥ 9, each of G_i and G_o contains at least 3 vertices. Figures 4(a)-(b) illustrate a planar triangulation G and an edge separator of G. Since G is a planar triangulation, there must be an outer vertex q on G_i or G_o such that q is incident to two or more edges of M. Without loss of generality assume that q lies on G_i, e.g., see vertex v_5 in Figure 4(c). Let a, b, c be three consecutive neighbors of q in G in counterclockwise order such that a ∈ V_i and {b, c} ⊆ V_o. We take an embedding G′ of G with q, b, c as the outer face, as shown in Figure 4(d) with q = v_5, a = v_3, b = v_2, and c = v_{11}. Consequently, G_o and G_i lie on the outer face of each other, as illustrated in Figures 4(d)-(e). We first draw G_o and G_i separately with small height, and then merge these drawings to compute the final output. The drawings of G_o and G_i are placed side by side. Consequently, the height of the final output can be expressed in terms of the maximum height of the drawings of G_o and G_i, and hence the area of the final drawing becomes small.
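The edge separator of Diks et al. is more delicate than we can reproduce here; to convey the flavor of cutting a graph into two balanced pieces, the following heuristic sketch (our own simplification, not the construction the algorithm relies on) cuts along a BFS level and returns the crossing edges. It gives no guarantee on the O(√(∆n)) size bound or on the dual-cycle property.

    import networkx as nx

    def bfs_level_cut(G, source):
        # Group the vertices of a connected graph by BFS distance from
        # source, cut at the first level where at least half the vertices
        # have been absorbed, and return the edges crossing the cut.
        dist = nx.single_source_shortest_path_length(G, source)
        levels = {}
        for v, d in dist.items():
            levels.setdefault(d, set()).add(v)
        near, n = set(), G.number_of_nodes()
        for d in sorted(levels):
            near |= levels[d]
            if len(near) >= n / 2:
                break
        return {(u, v) for u, v in G.edges() if (u in near) != (v in near)}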
Algorithm Details
Let G′ be the embedding obtained from G by choosing q, b, c as the outer face.
We first construct a graph G_o′ from G_o by adding a vertex w_o on the outer face of G_o, and making w_o adjacent to all the outer vertices of G_o such that the edge (b, c) remains as an outer edge. We remove any resulting multi-edges by subdividing each corresponding inner edge with a dummy vertex, and then by triangulating the resulting graph. Note that we do not need to add dummy vertices on the outer edges. Figure 5(a) illustrates an example of G_o′, where the dummy vertex d removes the multi-edges between v_7 and w_o. Since there are O(√(∆n)) edges in M, the number of vertices in G_o′ is at most 2n/3 + O(√(∆n)). We now use the algorithm of Chrobak and Nakano [7] to compute a straight-line drawing Γ_o of G_o′.

We now construct a graph G_i′ from G_i, as follows. Observe that the vertex a is an outer vertex of G_i, which appears immediately after q while walking on the outer face of G_i. We add a vertex w_d on the outer face of G_i, and make it adjacent to q and a. We now add another vertex w_i on the outer face, and make it adjacent to w_d and q such that the cycle w_i, q, w_d becomes the boundary of the outer face, e.g., see Figure 5(b).
If w_o lies on l_x in Γ_o, then we make w_i adjacent to all the outer vertices of G_i. Otherwise, we make w_d adjacent to all the outer vertices of G_i. We remove any resulting multi-edges by subdividing each corresponding inner edge with a dummy vertex, and then by triangulating the resulting graph. To construct a drawing of G′, we merge the drawings of G_o′ and G_i′.
Γ_o and Γ_i contain the outer vertices of G_o′ and G_i′, respectively. Let ℓ_i be a vertical line to the right of segment w_d w_i in Γ_i such that all the other vertices of Γ_i are in the right half-plane of ℓ_i. Furthermore, ℓ_i must be close enough such that all the intersection points with the edges incident to w_i lie in between the horizontal line l(w_i) and the horizontal line immediately below l(w_i). For each intersection point, we insert a division vertex at that point and create a horizontal line through that vertex. We then delete vertex w_i from Γ_i, but not the division vertices. Let the resulting drawing be D. Since every planar triangulation with maximum degree ∆ has an edge separator of size O(√(∆n)) [10], we obtain the following corollary.
Corollary 1 Every n-vertex planar triangulation with maximum degree o(n) admits a polyline drawing with height at most 4n/9 + o(n).
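The constant 4/9 in the corollary can be traced with a short calculation (our reading of the construction, not a verbatim step of the paper): each of G_o′ and G_i′ has at most 2n/3 + O(√(∆n)) vertices, and Chrobak and Nakano's algorithm draws a graph on N vertices with height at most 2N/3, so each side is drawn with height at most

    (2/3) · (2n/3 + O(√(∆n))) = 4n/9 + O(√(∆n)).

Since the two drawings are placed side by side, the merged drawing inherits the maximum of the two heights, which is 4n/9 + o(n) whenever ∆ ∈ o(n).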
Pach and Tóth [23] showed that polyline drawings can be transformed into straight-line drawings while preserving the height if the polyline drawing is monotone, i.e., if every edge in the polyline drawing is drawn as a y-monotone curve. Unfortunately, our algorithm does not necessarily produce monotone drawings.
Drawing Planar 3-Trees with Small Height
In this section we examine straight-line drawings of planar 3-trees. We first introduce a few more definitions and recall some known results. Afterwards, we describe the algorithm details.
Technical Background
Let G be an n-vertex planar 3-tree and let Γ be a straight-line drawing of G. Then Γ can be constructed by starting with a triangle, which corresponds to the outer face of Γ, and then iteratively inserting the other vertices into the inner faces and triangulating the resulting graph. Let a, b, c be the outer vertices of Γ in clockwise order. If n > 3, then Γ has a unique vertex p that is incident to all the outer vertices. This vertex p is called the representative vertex of G.
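Continuing the networkx sketch from the introduction, the representative vertex can be located directly from its definition; the helper below (our naming, illustrative only) returns the unique inner vertex adjacent to all three outer vertices.

    def representative_vertex(G, outer):
        # outer = (a, b, c): the outer vertices of a plane 3-tree, n > 3.
        candidates = [v for v in G.nodes()
                      if v not in outer
                      and all(G.has_edge(v, u) for u in outer)]
        assert len(candidates) == 1, "expects a plane 3-tree with n > 3"
        return candidates[0]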
For any cycle i, j, k in G, let G_ijk be the subgraph induced by the vertices i, j, k and the vertices lying inside the cycle. Let G*_ijk be the number of vertices in G_ijk. The following two lemmas describe some known results.
Lemma 4 (Mondal et al. [22]) Let G be a plane 3-tree and let i, j, k be a cycle of three vertices in G. Then G_ijk is a plane 3-tree.

Let G be a plane 3-tree and let a, b, c be the outer vertices of G. Assume that G has a drawing Γ on L_k, where a, b lie on lines l_1, l_k, respectively, and c lies on line l_i, where 1 ≤ i ≤ k. Then the following properties hold for Γ [18].
Reshape. Let p, q and r be three distinct non-collinear points on lines l_1, l_k and l_i, respectively. Then G has a drawing Γ′ on L_k such that the outer face of Γ′ coincides with triangle pqr (e.g., Figures 7(a)-(b)).
Stretch. For any integer t ≥ k, G admits a drawing Γ′ on L_t such that a, b, c lie on l_1, l_t, l_i, respectively (e.g., Figure 7(c)).

Lemma 6 Let G be an n-vertex plane 3-tree with outer vertices a, b, c in clockwise order. Let w_1, ..., w_k(= p), w_{k+1}(= q), ..., w_t(= c) be the maximal path P such that each vertex on P is adjacent to both a and b (e.g., see Figure 8). Assume that n′ = n + 3 and x = 4n′/9, and that max_{i>k+1} {G*_{a w_i w_{i−1}}, G*_{b w_i w_{i−1}}} ≤ 4n′/9. Then G admits a drawing with height at most 4n′/9 + 4.
Proof: To construct the required drawing of G, we distinguish two cases depending on whether leaf(T_{p,G_abp}) ≤ x or not. Let H be the subgraph of G induced by the vertices {a, b} ∪ {w_k, ..., w_t}. In each case, we first construct a drawing of H on L_{x+4}, and then extend it to compute the required drawing using Lemmas 2-5.
Case 1 (leaf(T_{p,G_abp}) ≤ x). Since G*_bqp ≤ n′/2, by Lemma 1, one of the trees in the Schnyder realizer of G_bqp has at most n′/3 ≤ x leaves. We now draw G_abq considering the following scenarios.

Case 1A (leaf(T_{p,G_bqp}) ≤ x). We refer the reader to Figures 9(a)-(b). By Lemma 2 and the Stretch condition, G_abp admits a drawing Γ_abp on L_{x+2} such that the vertices a, b, p lie on l_1, l_{x+2}, l_{x+2}, respectively. Similarly, since leaf(T_{p,G_bqp}) ≤ x, by Lemma 2 G_bqp admits a drawing Γ_bpq on L_{x+2} such that the vertices q, b, p lie on l_1, l_{x+2}, l_{x+2}, respectively, as shown in Figure 9(a). By the Stretch property, Γ_abp can be extended to a drawing Γ′_abp on L_{x+3}, where a, b, p lie on l_1, l_{x+3}, l_{x+2}, respectively. Similarly, Γ_bqp can be extended to a drawing Γ′_bqp on L_{x+3}, where q, b, p lie on l_1, l_{x+3}, l_{x+2}, respectively. Since G*_apq ≤ (n′ + 2)/3, by Lemma 5 and the Stretch condition, G_apq admits a drawing Γ_apq on L_{(n′+2)/3}. Finally, by the Stretch property Γ_apq can be extended to a drawing Γ′_apq on L_{x+2} such that a, p, q lie on l_1, l_{x+2}, l_1, respectively, and by the Reshape property we can merge these drawings to obtain a drawing of G_abq on L_{x+3}.

Case 1B (leaf(T_{q,G_bqp}) ≤ x). We refer the reader to Figures 9(a)-(b).
By Lemma 2 and the Stretch condition, G_abp admits a drawing Γ_abp on L_{x+2} such that the vertices a, b, p lie on l_1, l_{x+2}, l_1, respectively. Similarly, G_bqp admits a drawing Γ_bpq on L_{x+2} such that the vertices p, b, q lie on l_1, l_{x+2}, l_{x+2}, respectively. By Lemma 5, G_apq admits a drawing Γ_apq on L_{(n′+2)/3} such that a, p, q lie on l_1, l_1, l_{(n′+2)/3}, respectively. By Stretch, we modify Γ_apq such that a, p, q lie on l_1, l_1, l_{x+2}, respectively. Finally, by Stretch and Reshape we can merge these drawings to obtain a drawing of G_abq on L_{x+3}. Figures 9(c)-(d) show an illustration.
Case 1C (leaf(T_{b,G_bqp}) ≤ x). The drawing of this case is similar to Case 1B. The only difference is that we use T_{b,G_bqp} while drawing G_bqp.
Observe that each of the Cases 1A-1C produces a drawing of G_abq such that a, b lie on l_1, l_{x+3}, respectively, and q lies on either l_1 or l_{x+3}. We use the Stretch operation to modify the drawing such that a, b lie on l_1, l_{x+4}, respectively, and q lies on either l_2 or l_{x+3}. Specifically, if q is on l_{x+3}, then we push b to l_{x+4}. Otherwise, q is on l_1, and in this case we push a to l_0, and then shift the drawing up by one layer to move a back to l_1.
If q lies on l_{x+3}, then we place the vertices w_{k+1}, ..., w_t(= c) on l_2 and l_{x+3} alternately, as shown in Figure 10(a). Similarly, if q lies on l_2, then we draw the path w_{k+1}, ..., w_t(= c) in a zigzag fashion, placing the vertices on l_{x+3} and l_2 alternately such that each vertex is visible to both a and b.

Case 2 (leaf(T_{p,G_abp}) > x). If leaf(T_{p,G_bpq}) > x, then either leaf(T_{b,G_bpq}) ≤ x or leaf(T_{q,G_bpq}) ≤ x.
In this case we draw G_abq on L_{x+3}, where a, b, p, q lie on l_1, l_{x+3}, l_2, l_{x+3}, respectively, as in Figure 10(c). Specifically, we use Lemma 2 to draw G_bpq. Since leaf(T_{a,G_abp}) ≤ x, we use Lemma 2 to draw G_abp, and since G*_apq ≤ (n′ + 2)/3, we draw G_apq using Lemma 5. Finally, we use Stretch and Reshape to merge these drawings.

Each of the Cases 2A-2C produces a drawing of G_abq such that a, b lie on l_1, l_{x+3}, respectively, and q lies on either l_1 or l_{x+3}. Hence we can extend these drawings to draw G as in Case 1.
Drawing Algorithm
We are now ready to describe our algorithm.
Decomposition.
Let G be an n-vertex plane 3-tree with the outer vertices a, b, c and the representative vertex p. A tree spanning the inner vertices of G is called the representative tree T if it satisfies the following conditions [22]: (a) If n = 3, then T is empty. (b) If n = 4, then T consists of a single vertex. (c) If n > 4, then the root p of T is the representative vertex of G and the subtrees rooted at the three clockwise ordered children p_1, p_2 and p_3 of p in T are the representative trees of G_abp, G_bcp and G_cap, respectively.

Recall that every r-vertex tree T has a vertex v such that the connected components of T \ v are all of size at most r/2 [19]. Such a vertex v in T corresponds to a decomposition of G into four smaller plane 3-trees G_1, G_2, G_3, and G_4, as follows (a sketch for locating such a vertex follows the list below).

- The plane 3-tree G_i, where 1 ≤ i ≤ 3, is determined by the representative tree rooted at the ith child of v, and thus contains at most r/2 + 3 = (n − 3)/2 + 3 = (n + 3)/2 vertices.
- The plane 3-tree G_4 is obtained by deleting v and the vertices of G that are descendants of v in T, and contains at most (n + 3)/2 vertices.
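The centroid vertex v used in this decomposition always exists and is easy to find. The brute-force sketch below (quadratic time, for illustration only; a linear-time version via subtree sizes is standard) checks every vertex of a networkx tree against the definition.

    import networkx as nx

    def tree_centroid(T):
        # Return a vertex v such that every connected component of T - v
        # has at most |T|/2 vertices; such a vertex always exists in a tree.
        r = T.number_of_nodes()
        for v in T.nodes():
            rest = T.subgraph(set(T) - {v})
            if all(len(c) <= r / 2 for c in nx.connected_components(rest)):
                return v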
Drawing Technique.
Without loss of generality assume that G*_1 ≥ G*_2 ≥ G*_3.
Case 1 (G*_4 ≤ x). If max_{i>k+1} {G*_{a w_i w_{i−1}}, G*_{b w_i w_{i−1}}} ≤ x, then G admits a drawing on L_{x+4} by Lemma 6. We may thus assume that there exists some j > k + 1 such that either G*_{a w_j w_{j−1}} > x or G*_{b w_j w_{j−1}} > x.
We first show that G_abq can be drawn on L_{x+3} in two ways: one drawing Γ_1 contains the vertices a, b, q on l_1, l_{x+3}, l_2, respectively, and the other drawing Γ_2 contains a, b, q on l_1, l_{x+3}, l_{x+2}, respectively. We then extend these drawings to obtain the required drawing of G. Consider the following scenarios depending on whether G*_1 ≤ x or not.

- Assume first that G*_1 ≤ x. Here we draw the subgraph induced by the vertices a, b, p, q such that they lie on l_1, l_{x+3}, l_{x+2}, l_2, respectively. Since G*_3 ≤ G*_2 ≤ G*_1 ≤ x, the subgraphs G_1, G_2 and G_3 can be drawn inside their corresponding triangles, which yields Γ_1. Similarly, we can find another drawing Γ_2 of G_abq, where the vertices a, b, p, q lie on l_1, l_{x+3}, l_2, l_{x+2}, respectively.

- Otherwise G*_1 > x. Since G*_1 ≤ (n + 3)/2, we can use Chrobak and Nakano's algorithm [7] and the Stretch operation to draw G_1 such that a, b lie on l_1, l_{n′/3+1}, respectively, and p lies either on l_2 or l_{n′/3}. First consider the case when p lies on l_{n′/3}. We then use the Stretch condition to push b to l_{x+3}. To construct Γ_1, we place q on l_2, and to construct Γ_2, we place q on l_{x+2}. Since G*_3 ≤ G*_2 ≤ n′/9, for each placement of q, we can draw G_2 and G_3 using Lemma 5 inside their corresponding triangles.
The case when p lies on l_2 is handled symmetrically, i.e., first by pushing a downward using the Stretch operation so that the drawing spans x + 3 horizontal lines, then shifting the drawing upward such that a comes back to l_1, and finally placing the vertex q on l_2 (for Γ_1) or l_{x+2} (for Γ_2).
We now show how to extend the drawing of G_abq to compute the drawing of G.
- Assume that G*_{a w_j w_{j−1}} > x. Shift b to l_{x+4}, and draw the path w_{k+1}, ..., w_{j−1} in a zigzag fashion, placing the vertices on l_2 and l_{x+3} alternately, such that l(w_{k+1}) = l(w_{k+2}), and each vertex is visible to both a and b. Choose Γ_1 or Γ_2 such that the edge (a, w_{j−1}) spans at least x + 3 lines. We now draw G_{a w_j w_{j−1}} using Chrobak and Nakano's algorithm [7]. Since x < G*_{a w_j w_{j−1}} ≤ n′/2, we can draw G_{a w_j w_{j−1}} on at most n′/3 parallel lines. By the Stretch and Reshape conditions, we merge this drawing with the current drawing such that w_j lies on either l_{x+3} or l_{n′/9+2}. Since G*_{b w_j w_{j−1}} ≤ n′/9, we can draw G_{b w_j w_{j−1}} inside its corresponding triangle using Lemma 5. Since max_{i>j} {G*_{a w_i w_{i−1}}, G*_{b w_i w_{i−1}}} ≤ n′/9, it is straightforward to extend the current drawing to a drawing of G on x + 4 parallel lines by continuing the path w_j, ..., w_t in the zigzag fashion.

- Assume that G*_{b w_j w_{j−1}} > x. The drawing in this case is similar to the case when G*_{a w_j w_{j−1}} > x. The only difference is that while drawing the path w_{k+1}, ..., w_{j−1}, we choose Γ_1 or Γ_2 such that the edge (b, w_{j−1}) spans at least x + 3 lines.

The following theorem summarizes the result of this section.

Theorem 2 Every n-vertex planar 3-tree admits a straight-line drawing with height at most 4n/9 + O(1).
Conclusion
In this paper we have shown that every n-vertex planar graph with maximum degree ∆, having an edge separator of size λ, admits a polyline drawing with height 4n/9 + O(λ), which is 4n/9 + o(n) for any planar graph with ∆ ∈ o(n).
When restricted to n-vertex planar 3-trees, we compute straight-line drawings with height at most 4n/9 + O(1). In some cases the width of the drawings that we compute for plane 3-trees may be exponentially large in n. Hence it would be interesting to find drawing algorithms that can produce drawings with the same height as ours, but bound the width as a polynomial function of n.

Several natural open questions follow.
-Does every n-vertex planar triangulation admit a straight-line drawing with height at most 4n/9 + O(1)?
-What is the minimum constant c such that every n-vertex planar 3-tree admits a straight-line (or polyline) drawing with height at most cn?
-Does a lower bound on the height for straight-line drawings of triangulations determine a lower bound also for their polyline drawings?
Recently, Biedl [2] has examined height-preserving transformations of planar graph drawings, which shed some light on the last open question.
Figure 1: (a) A triangulation G. (b) A polyline drawing of G with height 8.
Figure 2: (a) A plane triangulation G with a canonical ordering. The associated realizer, where the l-, r- and m-edges are shown in dashed, bold-solid, and thin-solid lines, respectively. (b) T_{v_1}. (c) Neighbors of v_6 in G_6. (d)-(g) Illustrating Lemma 3.
Figure 3: (a) A plane triangulation G with a canonical ordering of its vertices. (b)-(f) Illustration for drawing Γ_i.
Figure 4: (a) A planar triangulation. (b) An edge separator M of G, and the corresponding simple cycle in the dual graph. The edges of M and C* are shown in thin and thick gray, respectively. (c) G_o and G_i are shaded in light-gray and dark-gray, respectively. (d)-(e) Choosing a suitable embedding G′.
Figure 5(b) illustrates an example of G_i′, where d is a dummy vertex. Since there are O(√(∆n)) edges in M, the number of vertices in G_i′ is at most 2n/3 + O(√(∆n)). We now use the algorithm of Chrobak and Nakano [7] to compute a straight-line drawing Γ_i of G_i′ with height y = 4n/9 + O(√(∆n)) such that w_d, w_i lie on l_1, l_y, respectively, and the segment w_d w_i is vertical. Assume without loss of generality that all the vertices of G_i′ are in the right half-plane of the line determined by w_d and w_i.
Figures 6(c)-(d) illustrate this scenario. By Lemma 3, we can modify Γ_i such that the horizontal lines are equally spaced.
Figure 6(e) shows a schematic representation of D. Since the division vertices correspond to the bends, each edge may contain at most four bends (one bend inside Γ_o, one bend inside Γ_i, and two bends to merge the drawings Γ_i and Γ_o). Since there are at most O(√(∆n)) edges that may have bends, the number of bends is at most O(√(∆n)) in total. Note that for every edge containing four bends, two of the bends correspond to w_o and w_i, and they are adjacent on the same horizontal line in the final drawing. Therefore, we can now transform D into a flat-visibility drawing, where the adjacent pair of bends corresponds to a single vertex, and then transform the flat-visibility drawing back into a polyline drawing (similar to the proof of Lemma 3), where the bends that correspond to w_o and w_i are merged to a single bend. Consequently, the number of bends per edge reduces to 3. The following theorem summarizes the result of this section.

Theorem 1 Let G be an n-vertex planar graph. If G contains a simple cycle separator of size λ, then G admits a 3-bend polyline drawing with height 4n/9 + O(λ) and at most O(λ) bends in total.
Lemma 5 (Hossain et al. [18]) Let G be an n-vertex plane 3-tree with the outer vertices a, b, c in clockwise order. Let D be a drawing of the outer cycle a, b, c on L_n, where the vertices lie on l_1, l_k and l_i with k ≤ n and i ∈ {1, 2, n, n−1}. Then G admits a straight-line drawing Γ on L_k, where the outer cycle of Γ coincides with D.
For any triangulation H with the outer vertices a, b, c, let T_{a,H}, T_{b,H}, T_{c,H} be the Schnyder trees rooted at a, b, c, respectively. By leaf(T) we denote the number of leaves in T. The following lemma establishes a sufficient condition for a plane 3-tree G to have a straight-line drawing with height at most 4(n + 3)/9 + 4.
Figure 8: Illustration for Lemma 6, where the graph G_abp is in the shaded region.
Case 2B (leaf(T_{a,G_abp}) > x and leaf(T_{b,G_abp}) ≤ n′/9). If leaf(T_{p,G_bpq}) ≤ n′/3, then we first draw G_bpq using Lemma 2 such that b, p, q lie on l_{n′/3+2}, l_{n′/3+2}, l_1, respectively, and then use the Stretch condition to shift b to l_{x+3}. By Lemma 2 and the Stretch condition, there exists a drawing of G_abp on L_{x+3} with a, b, p lying on l_1, l_{x+3}, l_{n′/3+2}, respectively. Since G*_apq ≤ (n′ + 2)/3, we can draw G_apq using Lemma 5 inside triangle apq.
Figure 10(d) illustrates the scenario after applying Stretch and Reshape. If leaf(T_{p,G_bpq}) > n′/3, then by Lemma 1 either leaf(T_{b,G_bpq}) ≤ n′/3 − 2 or leaf(T_{q,G_bpq}) ≤ n′/3 − 2. Hence we can use Lemma 2 and the Stretch condition to draw G_bpq such that b, p, q lie on l_{x+3}, l_{n′/9+2}, l_{x+3}, respectively. On the other hand, we use Lemma 2 to draw G_abp such that a, b, p lie on l_1, l_{n′/9+2}, l_{n′/9+2}, respectively, and then use the Stretch condition to move b to l_{x+3}. Since G*_apq ≤ (n′ + 2)/3, we can draw G_apq using Lemma 5 inside triangle apq.
Figure 10(e) illustrates the scenario after applying Stretch and Reshape.

Case 2C (leaf(T_{a,G_abp}) ≤ n′/9 and leaf(T_{b,G_abp}) > x). The drawing in this case is analogous to Case 2B. The only difference is that we use T_{a,G_abp} while drawing G_abp.
If G_1 has an edge incident to the outer face of G, then let (a, b) be the corresponding outer edge. Otherwise, G_1 does not have any edge incident to the outer face of G. In this case there exists an inner face f in G that is incident to G_1, but does not belong to G_1. We choose f as the outer face of G, and now we have an edge (a, b) of G_1 that is incident to the outer face of G. Let P = (w_1, ..., w_k(= p), w_{k+1}(= q), ..., w_t) be the maximal path in G such that each vertex on P is adjacent to both a and b, where {a, b, p}, {a, p, q}, {b, q, p} are the outer vertices of G_1, G_2, G_3, respectively, e.g., see Figure 11. Assume that n′ = n + 3 and x = 4n′/9. We draw G on L_{x+4} by distinguishing two cases depending on whether G*_4 > x or not.
Let ℓ_o be a vertical line to the left of w_o in Γ_o such that all the other vertices of Γ_o are in the left half-plane of ℓ_o. Furthermore, ℓ_o must be close enough such that all the intersection points with the edges incident to w_o lie in between l(w_o) and l(w_o) − 1. For each intersection point, we insert a division vertex at that point and create a horizontal line through that vertex. Delete vertex w_o, but not the division vertices. Finally, by Lemma 3, we can modify Γ_o such that the horizontal lines are equally spaced. Note that Γ_o is a drawing on at most 4n/9 + O(√(∆n)) horizontal lines.
Correlation of MRI Brain Injury Findings with Neonatal Clinical Factors in Infants with Congenital Diaphragmatic Hernia
The authors evaluated MRIs performed before hospital discharge in 53 infants with congenital diaphragmatic hernia that were scored for brain injury by 2 pediatric neuroradiologists. Potential associations with perinatal and clinical variables from the neonatal intensive care unit stay were probed. The most common findings were enlarged extra-axial spaces (36%), intraventricular hemorrhage (23%), ventriculomegaly (19%), white matter injury (17%), and cerebellar hemorrhage (17%). Brain injury score was associated with extracorporeal membrane oxygenation, lack of oral feeding at discharge, use of inotropes, and gastrostomy tube placement before hospital discharge. BACKGROUND AND PURPOSE: Infants with congenital diaphragmatic hernia are reported to have evidence of brain MR imaging abnormalities. Our study aimed to identify perinatal clinical factors in infants with congenital diaphragmatic hernia that are associated with evidence of brain injury on MR imaging performed before hospital discharge. MATERIALS AND METHODS: MRIs performed before hospital discharge in infants with congenital diaphragmatic hernia were scored for brain injury by 2 pediatric neuroradiologists. Perinatal variables and clinical variables from the neonatal intensive care unit stay were analyzed for potential associations with brain MR imaging findings. RESULTS: Fifty-three infants with congenital diaphragmatic hernia (31 boys) were included. At least 1 abnormality was seen on MR imaging in 32 infants (60%). The most common MR imaging findings were enlarged extra-axial spaces (36%), intraventricular hemorrhage (23%), ventriculomegaly (19%), white matter injury (17%), and cerebellar hemorrhage (17%). The MR imaging brain injury score was associated with extracorporeal membrane oxygenation (P = .0001), lack of oral feeding at discharge (P = .012), use of inotropes (P = .027), and gastrostomy tube placement before hospital discharge (P = .024). The MR imaging brain injury score was also associated with a large diaphragmatic defect size (P = .011). CONCLUSIONS: Most infants with congenital diaphragmatic hernia have at least 1 abnormality identified on MR imaging of the brain performed before discharge. The main predictors of brain injury in this population are a requirement for extracorporeal membrane oxygenation, large diaphragmatic defect size, and lack of oral feeding at discharge.
Congenital diaphragmatic hernia (CDH), which occurs in approximately 1 case per 2000 live births, is an anomaly associated with substantial morbidity and mortality. 1 Survivors of CDH are at risk for long-term respiratory, gastrointestinal, nutritional, hearing, and neurologic sequelae, requiring multidisciplinary support, especially during early childhood. 1 Prenatal predictive factors for increased morbidity and mortality include prenatal imaging findings of liver herniation into the chest, lung-to-head ratio on prenatal sonography, or lung volumes on fetal MR imaging. [2][3][4] The size of the diaphragmatic defect is another factor that likely plays a major role in morbidity and mortality in infants with congenital diaphragmatic hernia. 5 However, the association of the defect size with evidence of injury on brain imaging has not been studied, to our knowledge.
Long-term neurodevelopmental and neurobehavioral disabilities are reported in up to 70% of infants with congenital diaphragmatic hernia. [6][7][8][9] Both brain maturational delays and evidence of brain injury have been reported on imaging. 8,9 There continues to be some controversy about the correlation of neuroimaging abnormalities in CDH with neurologic outcome. In a small cohort of patients with CDH with prenatal and postnatal imaging, Tracy et al 9 identified an association between brain injury seen on postnatal CT/MR imaging in 4 infants and neurodevelopmental outcome at 1 year. There was no correlation between prenatal factors and neurodevelopmental outcome in this study. 9 In another study by Danzer et al, 10 postnatal brain MR imaging abnormalities were associated with lower cognitive scores, motor dysfunction, and language deficits.
The impact of extracorporeal membrane oxygenation (ECMO) on neonates with CDH is of clinical relevance. Studies suggest that neonates with CDH who require ECMO have a greater incidence of adverse neurodevelopmental sequelae, though it is unclear whether the severity of the illness leading up to ECMO (hypercapnia, hypotension, and so forth) or the ECMO itself should be implicated. 6,11 Which clinical factors in the neonatal intensive care unit play a role in brain injury in infants with CDH is yet to be determined.
In this study, we have developed a brain injury score to determine whether brain injury seen on predischarge MRI in infants with CDH is associated with diaphragmatic defect size and postnatal clinical factors in the NICU.
MATERIALS AND METHODS
After institutional review board approval, we retrospectively identified infants with CDH born between February 2009 and March 2014. We included infants with CDH with MR imaging of the brain before discharge as described in the flow chart (Fig 1).
MR imaging was performed on 1.5T and 3T scanners, and with a small-bore extremity 1.5T MR imaging scanner that was adapted for neonatal imaging in the neonatal intensive care unit. 12 MR imaging was performed without sedation after feeding and swaddling, except when excessive motion required light sedation. All studies included standard T1, T2, and susceptibility- and diffusion-weighted sequences.
The MR images were independently reviewed by 2 pediatric neuroradiologists who were blinded to the clinical variables except for postmenstrual age at the time of MR imaging. Conflicts were resolved by consensus. The brain was evaluated for evidence of injury and then was scored on the basis of a system modified from previous work by Danzer et al 8,10 and Tracy et al. 9 Table 1 lists the brain injury scoring guide. The total brain injury score was calculated in each infant. Examples of our scoring system are provided in Fig 2. The medical charts were reviewed to identify perinatal clinical factors and courses in the neonatal intensive care unit. The clinical variables extracted from the electronic medical record included type of diaphragmatic defect, gestational age, birth weight, Apgar scores at 1 and 5 minutes, the presence of congenital heart disease, the need for ECMO, days on a ventilator, the presence of blood stream infection, the need for and type of inotropic support, the need for oxygen at 28 days of life, any oral feeding at discharge, and gastrostomy tube placement before hospital discharge. The size of the diaphragmatic defect identified at surgery was classified according to the Congenital Diaphragmatic Hernia Study Group classification from A to D, A being a small defect and D indicating diaphragmatic agenesis. 5 Data were entered into REDCap (http://www.project-redcap.org/software.php), a secure on-line Web application. Statistical analysis was performed by using SAS software (Version 9.3; SAS Institute, Cary, North Carolina). Nonparametric tests and multivariable linear regression were performed by using clinical variables as potential predictors with the brain injury score as the continuous outcome.
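As a rough sketch of the kind of analysis described (hypothetical data and variable names; scipy is assumed, and this is not the authors' SAS code), the Spearman correlation of a continuous predictor with the brain injury score and a Wilcoxon rank-sum comparison between clinical groups could be run as follows.

    from scipy import stats

    # Hypothetical per-infant values, for illustration only.
    injury_score    = [0, 2, 1, 5, 0, 3, 7, 1, 0, 4]
    ventilator_days = [5, 14, 9, 30, 4, 18, 45, 8, 6, 22]
    ecmo            = [0, 0, 0, 1, 0, 1, 1, 0, 0, 1]

    # Spearman correlation: continuous predictor vs. injury score.
    rho, p = stats.spearmanr(ventilator_days, injury_score)
    print(f"Spearman rho = {rho:.2f}, P = {p:.4f}")

    # Wilcoxon rank-sum test: injury scores in ECMO vs. non-ECMO infants.
    scores_ecmo    = [s for s, e in zip(injury_score, ecmo) if e]
    scores_no_ecmo = [s for s, e in zip(injury_score, ecmo) if not e]
    z, p = stats.ranksums(scores_ecmo, scores_no_ecmo)
    print(f"Rank-sum z = {z:.2f}, P = {p:.4f}")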
RESULTS
There were 91 neonates with CDH admitted to the neonatal intensive care unit during the study; 28 died, and 53 of the 63 survivors were suitable for the study (Fig 1). Demographic details are provided in Table 2. Nine of the 53 neonates required ECMO during their treatment. In the 9 neonates who had ECMO, 2 had a type D defect and 7 had a type C defect.
MR imaging was performed at a mean corrected gestational age of 43.6 weeks (range, 37.0-64.6 weeks). Fifty of the 53 infants were imaged at <48 weeks' corrected gestational age. MR imaging findings were considered abnormal (injury score of >0) in 32/53 (60%) infants. Enlarged extra-axial spaces were the most common abnormal finding on MR imaging, seen in 36% of infants. Other abnormal findings on MR imaging included intraventricular hemorrhage in 23%, ventriculomegaly in 19%, white matter injury in 17%, cerebellar hemorrhage in 17%, parenchymal hemorrhage in 8%, cortical injury in 4%, and basal ganglia injury in 2% of infants. The distribution of brain injury scores in infants with CDH treated with ECMO and those without it is provided in Fig 3. Ventriculomegaly, enlarged extra-axial spaces, parenchymal hemorrhage, and white matter injury were seen significantly more often in neonates who required ECMO compared with those without it (Table 3).
With the Spearman correlation for continuous predictor variables, there was a weak but significant correlation of the brain injury score with the number of ventilator days (correlation coefficient, 0.34; P = .0014) and diaphragmatic defect severity (correlation coefficient, 0.30; P = .027). There was no correlation between total injury score and gestational age in weeks, birth weight, Apgar scores, or bloodstream infection.
Univariate analysis of median differences in the brain injury scores and clinical variables in the neonatal intensive care unit showed that the use of ECMO (P = .0001), use of inotropes (P = .027), gastrostomy tube placement before discharge (P = .012), and not taking any oral feeding at discharge (P = .024) were significantly associated with higher MR imaging brain injury scores. When we grouped the diaphragmatic defects into small (A and B) and large (C and D) defects, analysis with the Wilcoxon rank sum test showed that infants with type C or D diaphragmatic defects had significantly higher brain injury scores compared with infants with type A or B defects (P = .011). Table 4 lists the clinical predictors of brain injury. There was a significant association among the use of ECMO, the presence of a C or D diaphragmatic defect, and evidence of brain injury. Infants who were taking any feeding by mouth (as opposed to being entirely tube-fed) at the time of discharge had a greater chance of having normal brain MR imaging findings.
To better understand the etiology of ventriculomegaly and enlarged extra-axial spaces, we plotted the head circumference measurements of infants imaged before 50 weeks' corrected gestational age on a standard Fenton head circumference chart (Fig 4). 13 Head circumference was measured within 5 days of the brain MR imaging in all cases. Infants with moderate or severe enlargement of the extra-axial spaces had normal or enlarged head size.
DISCUSSION
Management of CDH is complex, with various strategies used to improve outcomes. In general, our clinical management strategy for infants with CDH includes gentle ventilation, aggressive management of pulmonary hypertension with pulmonary vasodilators such as nitric oxide and epoprostenol, and surgical repair when pulmonary hypertension is subsystemic or has stabilized. ECMO is reserved for extremely sick infants who do not respond to standard medical management. Despite improved therapies, the early course in infants with CDH continues to be an extremely vulnerable period with great morbidity. In this article, we report the associations between clinical variables in the critical neonatal intensive care unit period and MR imaging evidence of brain injury in infants with CDH.
MR imaging has been used as a tool to identify evidence of perinatal brain injury in vulnerable populations and to predict outcome. 21 In neonates with CDH, intrathoracic mass effect can cause relative obstruction of the central veins without ECMO. The large head circumferences in 2 of the 3 infants with ECMO with moderate-to-severe enlarged extra-axial spaces in our cohort support this hypothesis. While our prevalence of parenchymal hemorrhage is similar to that in the study of Danzer et al, 10 our cohort had a much higher incidence of intraventricular hemorrhage (23% compared with 2%). This may be due to the use of susceptibility-weighted imaging in our population, which increases the sensitivity for the detection of blood products. In our study, infants on ECMO had a higher proportion of intraventricular and intraparenchymal hemorrhage compared with those without it. In 3 of these infants, minor intracranial hemorrhage was identified on screening head sonography during the course of ECMO, but this did not necessitate withdrawal of ECMO in these infants. We did not include small amounts of subdural or extra-axial hemorrhage in infants younger than 4 weeks of age as abnormal, given that this can be seen in healthy neonates following both vaginal and cesarean delivery. 22 Although previous reports describe subdural hemorrhage in infants with CDH, we did not see any abnormal subdural hemorrhage in our cohort.
Cerebellar hemorrhage was seen in 17% of our population and has not been previously described in infants with CDH. Again, the use of SWI in our cohort may have increased the detection of this abnormality. The proposed etiology of cerebellar hemorrhage is impaired brain autoregulation, elevated venous pressure, and fetal distress. 23 We did not find a significant difference in cerebellar hemorrhage between infants with and without ECMO.
Multiple intrauterine, perinatal, and surgical factors are described as associated with neonatal neurodevelopmental outcome, including the need for ECMO, the use of patch repair (a surrogate for large defect size), the presence of liver in the chest, the need for oxygen at 4 weeks of life, and hypotonicity. 24,25 These factors would presumably also be associated with brain injury on imaging. Among the clinical variables assessed in our study, both large defect size and ECMO were predictors of the severity of brain injury identified on MR imaging. In our study, all the infants who needed ECMO had a large diaphragmatic defect; however, there was a substantial percentage (38%) of infants with large defects who did not require ECMO. The need for supplemental oxygen at 4 weeks of life was not associated with brain injury on imaging in our study. As expected, all our patients who required ECMO had evidence of intracranial abnormality, compared with 52% of the group without ECMO. In a previous study from 1999, Ahmad et al 11 showed that only 35% of patients with CDH and ECMO had evidence of CNS abnormality. This study used head sonography and CT to identify brain injury, and our numbers are likely higher due to the greater sensitivity of MR imaging in identifying subtle brain injury. In a study of infants with CDH from 1997, McGahren et al 6 showed a greater incidence of intracranial findings on head sonography and poor neurologic outcome in infants with CDH requiring ECMO. Rollins et al 26 performed a retrospective review of 50 neonates (24 with CDH) who underwent brain MR imaging after neonatal ECMO. MR imaging findings were abnormal in 62% of the infants, and MR imaging was more sensitive than head sonography at detecting brain abnormalities. Similar to findings in our study, abnormal carotid flow, ventriculomegaly, and increased extra-axial spaces were common. They found that neuroimaging was, in fact, not correlated with neurodevelopmental outcomes in the 20 patients who had 12-month follow-up. However, they only classified neuroimaging findings as "normal" or "abnormal" and did not attempt to categorize the severity of imaging abnormalities, which could have potentially impacted their results. Venoarterial ECMO, which is used at our center for infants with CDH, requires cannulation of the right internal carotid artery. In addition to this vascular injury, systemic anticoagulation is required, greatly increasing the risk of cerebral hemorrhage. The initiation of ECMO launches a systemic inflammatory response, which also appears to affect the brain. 27 Use of ECMO has been found to have an effect on neurodevelopmental outcome. Gross motor skills are delayed in many children who received ECMO as neonates compared with healthy controls, though severe disability is uncommon. 28 Survivors of CDH who received ECMO appear to be more affected than those who required ECMO for other reasons, 29 and the cause of poor outcome in this cohort is unclear.

Fig 3. Boxplots of the distribution of the brain injury score in infants with CDH with ECMO and those without it. The central thick line in each box is the median score. The limits of the box indicate upper and lower quartiles. The whisker limits indicate the highest or lowest score, not considering outliers. There is 1 outlier with a higher brain injury score in the non-ECMO group, with a score >1.5 interquartile range above the upper quartile. This infant had a CDH defect D.
Cognitive development in ECMO survivors also appears to be overall within the normal range, but lower than that in healthy controls. 28 Decreased performance is seen on neuropsychological tasks such as verbal reasoning and spatial abilities, 30 and children treated with ECMO are at risk for school difficulties despite normal overall intelligence. 31,32 The increased brain injury seen in infants with CDH requiring ECMO in our study could provide a mechanism for some of these findings seen later in childhood.
Our study has several limitations. Because this is a retrospective study, we could not control for imaging timing, which might affect the identification of brain injury. Predischarge MR imaging examinations assessed in this study were performed when the infants were clinically stable after surgical correction and ECMO if used. The exact cause of brain injury in this CDH population, especially in those requiring ECMO, is difficult to postulate. Neonates who require ECMO are usually sicker than those who do not require ECMO, and they may have had a period of clinical instability with hypotension, hypercarbia, and impaired cerebral autoregulation before the initiation of ECMO. It is therefore difficult to determine whether ECMO itself or the illness requiring ECMO contributes more to brain injury. Even if they do not require ECMO, infants with CDH are often initially very sick, with episodes of hypoxia, hypercarbia, hypotension, and acidosis. Use of inotropes can be associated with altered cerebral perfusion leading to brain injury. 33,34 Major surgery with anesthesia would also contribute to brain injury in these neonates. 35,36 Only 53 of the 91 infants admitted to our neonatal intensive care unit during the study were included, mainly due to high mortality. Of the 28 infants who died, 21 had been on ECMO. Of the 63 survivors, 14 infants had ECMO and 9 were included in the study. The ECMO survivors that were included in our cohort may have represented the relatively less sick infants on ECMO who were clinically stable enough to be imaged. Because all our infants with ECMO had evidence of brain injury and the brain injury score was higher in infants with ECMO compared with those without it, it may be reasonably assumed that we imaged only the milder end of the spectrum and that infants with CDH needing ECMO would have a greater association with brain injury.
Imaging was performed on both 1.5T and 3T clinical scanners, which might affect the sensitivity of detection of brain abnormalities. We did not include sonography and CT reviews, because we believed that MR imaging was more sensitive in identifying brain injury. Neurodevelopmental tests on our group of infants with CDH are ongoing, and results are not included in this study. The clinical significance of the brain imaging abnormalities is therefore currently unknown. However, on the basis of available evidence of brain injury in infants with CDH, all infants with CDH now undergo brain MR imaging before discharge.
CONCLUSIONS
In this study, we demonstrated an association between perinatal clinical factors and brain imaging abnormalities in infants with congenital diaphragmatic hernia. The strongest association with brain abnormality is the use of ECMO, with the most common abnormality being enlarged extra-axial spaces. The presence of a large diaphragmatic defect was an intrinsic factor associated with abnormal brain imaging findings. Infants who were fed orally at the time of discharge were more likely to have normal neuroimaging findings.
A Program for Weaning Children from Enteral Feeding in a General Pediatric Unit: How, for Whom, and with What Results?
Objective To describe a series of children who were hospitalized for a tube-weaning program in the general pediatric ward of a pediatric tertiary university hospital; to describe our method; to determine the success rate of our inpatient pediatric tube-weaning program; and to search for relevant factors linked to its success or failure. Method We analyzed the medical files of consecutive children who were hospitalized for gastric-tube weaning over an 8-year period. We analyzed outcomes in terms of feeding and growth with at least 2 years of data. Success (weaning within 3 months) and failure were compared by characteristics of children. Results We included 37 children (29 females) with mean (SD) age 31.4 (21) months. Most had a severe medical history (30% prematurity; 50% intrauterine growth restriction, 50% neurological and genetic anomalies). The weaning program was successful for half of the children. Factors linked to success of the program were female sex (p = 0.0188), normal neurodevelopment (p = 0.0016), nasogastric tube (p = 0.0098), and <24 months on EF before the stay (p = 0.0309). Discussion Comparing the efficiency of various methods and results among teams was difficult, which indicates the need to establish consensus about outcome criteria. We confirm the need for these types of stays and programs.
INTRODUCTION

The benefits of nutrition by a gastric tube in preterm infants unable to adequately feed without assistance have been known since the end of the 19th century (1). The first studies in the 1950s showed a positive impact of enteral feeding (EF) by nasogastric tube (NGT) on survival and growth in children (2), and the advantages of gastrostomy were reported in the 1960s (3). The number of children benefiting from EF by NGT or gastrostomy is increasing, with a concomitant decrease in the mean age of infants treated, which is mainly due to advances in prenatal diagnoses and medico-surgical neonatal resuscitation (4).
Although these strategies bring considerable benefits to children, they are also associated with some deleterious side effects, particularly feeding and eating disorders. In fact, some children, despite successful treatment or improvement of their medical condition, seem incapable of resuming normal oral feeding. These feeding and eating disorders, in particular avoidant/restrictive food intake disorders, lead to tube dependency. They are associated with previous underlying physical causes and/or current functional and psycho-emotional causes. They are characterized, with variable intensity, by an obvious disinterest in food; an attitude of opposition or avoidance toward food; an aversion to food in general, or at least to all foods with the exception of one or two; a phobia of introducing food, or food of certain textures, into the mouth; a prominent pharyngeal reflex with effortless vomiting; and a hypersensitivity affecting the whole body. These features often lead to significant psychological problems affecting intrafamilial and social relationships (5, 6).
In most cases, the care of these children during weaning can be provided as ambulatory care by the pediatric teams who previously treated the child, by progressively decreasing EF toward normal oral feeding. A multidisciplinary re-education approach, such as that provided by specialized ambulatory structures for children with developmental or psychological difficulties, is often required. Nevertheless, some children remain resistant to the care provided in the outpatient setting and require a full-time stay for more intensive support. Many specialized clinics exist worldwide offering stays to assist weaning from tube feeding, the best known in Europe being in Graz, Austria. Some specialized units also operate within pediatric hospitals, such as the Feeding and Swallowing Centre at the Children's Hospital, WI, USA (7-12). In France, some structures providing post-acute and rehabilitation care recently started offering weaning stays, but there are few such providers and limited available data on their methods and results. In public hospitals for children, general pediatric, pediatric gastroenterology, or child psychiatry wards sometimes provide care for children in this situation; but because the functional structure of public hospitals in France is predominantly dedicated to short-term stays and acute disease, and medical teams have limited expertise in this treatment, well-structured strategies have not emerged.
The general pediatric department of Necker Hospital in Paris encompasses a reference center for rare diseases dedicated to "Pierre Robin sequences and congenital disorders of sucking and swallowing." For the past 15 years, it has offered children resistant to EF weaning access to an expert, streamlined inpatient program managed by a specialized multidisciplinary team.
The objectives of this study were to describe a series of children hospitalized for a tube-weaning program in the general pediatric ward of a tertiary university pediatric hospital, to describe our method, to analyze the results of our program with at least 2 years of follow-up, and to search for relevant factors linked to its success or failure.
Patients
We selected all children hospitalized from 2008 to 2015 for EF weaning (a period coinciding with the implementation of the electronic health records system at Necker Hospital and the launch of our reference center's database). The decision to hospitalize these children had been made by one of the two main pediatricians of the ward a few weeks beforehand, when tube weaning had failed or been resisted in ambulatory care after at least 1 year of attempts. These children were fed by NGT or gastrostomy. All children hospitalized for weaning stays were included in the study regardless of age, length of EF and initial pathology, as long as they had safe swallowing abilities and a stable cardiopulmonary status. We excluded children whose stay was less than 5 days and those who died during the study.
Program
Each child benefited from a program of multidisciplinary management, which included sessions with physicians, nurses, a psychomotor (occupational) therapist, a speech-language pathologist, psychologists and dietitians, and was applied in the following steps:
- Initial time set aside for observation and evaluation of the causes of the resistance.
- First appraisal by the multidisciplinary team.
- Preparation of a therapeutic contract with parents, initially for 1 week and renewable a maximum of two additional times, depending on the child's progress.
- Reduction of EF during the first 3 days according to the child's capacity:
  - "Safe swallowing" when drinking water but eating nothing: restriction of fluids but not calories, with transition to four feeds of high-energy foods contained in the smallest possible volume (1.5 kcal/ml).
  - "Safe swallowing" while eating a few spoonfuls of food and drinking water: fluids and calories restricted by approximately 25% while maintaining four feeds.
  - Able to eat the equivalent of 1 yogurt or 150 g of mixed foods orally per day: 50% withdrawal of EF, completing only two of the four meals by gastric tube.
During the stay, we reduced the EF meal by meal, stopping EF when oral intakes reached about two-thirds of the prescribed volume.
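Stated as code, the initial reduction rules read as a simple decision function. The sketch below is illustrative only: the function name, the category labels and the returned fields are hypothetical paraphrases of the protocol text above, not a clinical tool.

```python
def initial_ef_reduction(oral_intake: str) -> dict:
    """EF adjustment over the first 3 days of the stay, following the three
    intake categories in the protocol (illustrative paraphrase only)."""
    if oral_intake == "drinks_water_only":
        # Safe swallowing, eats nothing: restrict fluids but not calories;
        # four feeds of high-energy food (1.5 kcal/ml) in the smallest volume.
        return {"restrict_fluids": True, "ef_reduction": 0.00, "tube_meals": 4}
    if oral_intake == "few_spoonfuls":
        # Eats a few spoonfuls and drinks water: fluids and calories cut ~25%.
        return {"restrict_fluids": True, "ef_reduction": 0.25, "tube_meals": 4}
    if oral_intake == "yogurt_or_150g_daily":
        # Eats >= 1 yogurt or 150 g of mixed food per day: withdraw 50% of EF,
        # completing only two of the four meals by gastric tube.
        return {"restrict_fluids": False, "ef_reduction": 0.50, "tube_meals": 2}
    raise ValueError(f"unknown intake category: {oral_intake!r}")
```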
- Therapeutic principles and timetable. We endeavor to create a non-clinical atmosphere. The child shares a "mother-child" room with one parent, while the other parent can stay in the hospital's family housing facility if both parents choose to be present at Necker. Assessments, including examinations and blood tests, are performed before admission, to best gain the child's confidence and avoid stress.
The timetable, displayed in the child's room, consists of:
- daily medical meetings;
- once- or twice-weekly meetings with the senior referring physician;
- meetings multiple times a week with the psychologist;
- daily sessions with the psychomotor therapist, alone or with the psychologist or speech-language pathologist;
- once- or twice-weekly individual sessions with the speech-language pathologist;
- regular meetings with the dietitian.
Mealtimes are for the most part shared; that is, they are taken in a room with an expert professional. Morning and evening meals can be taken with the parents and are more or less supervised by the nurse of the general pediatric ward assigned to the child. The dietitian meets with the parents at the beginning of hospitalization to adapt the meal trays as well as possible. "Discovery" meal trays are offered, with recreational foods appreciated by children. Parents are allowed to bring in additional foods; in this case, the dietitian records what is ingested, calculates the ingested calories, and adjusts subsequent meal trays accordingly.
The remaining time is free and can be spent with the early childhood educator in the unit's playroom or in the classroom with the teacher, depending on the child's age and neurodevelopmental capacities. Free time spent outdoors is also possible. Weekends are periods of rest, during which return visits to the family home are authorized as long as they are not likely to destabilize the child or alter the burgeoning relationship with the medical team.
Principles of management are based on a wide-ranging supportive, flexible policy but follow a relatively intense rhythm, which is reassuring on a medical level. The parents are free (when possible) from worries relating to risks associated with the withdrawal of nutrition, particularly with regard to weight loss and risk of deprivation. They are supported psychologically, heard by various team members, always ensuring that no guilt is assigned. Team members and parents both engage in the day-to-day difficulties encountered with the child, especially in the context of nutrition, but also in terms of education and behavior. Time set aside for medical observation also allows for reflection on the diagnosis, previous history, positive outcomes, and other elements, which help clarify the current situation and often lighten the load. The care has a psychomotor therapeutic approach, which includes activities that do not exclusively focus on nutrition but aim to develop multi-sensorial sensitivity and balanced emotional regulation. Particular attention is attributed to fostering interactive competences of the child, especially strengthening the parent-child bond as well as the family dynamic. Everything is focused on placing the child and the parents in an atmosphere of trust and consideration with regard to progress.
Patient Analysis
To describe the cohort, the following data were collected from hardcopy records or computer files for each hospitalized child:
- Personal data: age, gender, gestational age, existence of intrauterine growth restriction, and profession and socio-occupational categories of the parents according to the nomenclature of the National Institute of Statistics and Economic Studies. We condensed the six categories into three groups:
  - Categories 1 and 2: agriculturalists, artisans, retailers, business managers.
  - Categories 3 and 4: executives, university-educated professionals, middle management and "intermediary" occupations.
  - Categories 5 and 6: employees and manual workforce.
- Predominant diagnosis: the most pertinent physical diagnosis that justified the EF.
- Psychomotor or cognitive development: psychomotor development had been clinically evaluated during hospitalization while attempting weaning from EF, or at a later stage during follow-up consultations. This clinical evaluation allowed classifying children into two distinct groups: those with normal or suboptimal development and those with a deficiency.
- Feeding history: age at first appearance of difficulties, age at EF implementation, type of nutritional support (NGT or gastrostomy), length and tolerance of treatment, age at admission, length of hospital stay, and quantification of EF supplementation at admission.
- Anthropometric measures: the weight, length and BMI (body mass index) of each child were plotted on Sempé and Pédron's curves on the day of admission, every day during the stay, on the discharge day, and at each follow-up visit. We expressed them as means, medians, percentiles, and z-scores.
We then analyzed the success or failure of tube weaning. Tube weaning was scored as "success" when it occurred during hospitalization, or when it was unambiguously initiated and dynamically perpetuated and led to complete and definite cessation of NGT or gastrostomy use within 3 months of discharge. Tube weaning was scored as "failure" for children weaned at a later stage after discharge (>3 months) or not weaned during the timeframe of the study. The 3-month post-discharge cutoff was defined arbitrarily because it often reflected an effective outcome.
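The outcome definition reduces to a date comparison. A minimal sketch, assuming a 90-day window as a stand-in for the 3-month cutoff; the function and field names are hypothetical:

```python
from datetime import date
from typing import Optional

def weaning_outcome(discharge: date, weaned_on: Optional[date]) -> str:
    """Classify a stay under the study definition: 'success' = definitive
    tube weaning during hospitalization or within 3 months (approximated
    here as 90 days) of discharge; everything else is 'failure'."""
    if weaned_on is None:
        return "failure"   # not weaned during the study timeframe
    if (weaned_on - discharge).days <= 90:
        return "success"   # covers weaning during the stay itself (days <= 0)
    return "failure"       # weaned, but more than 3 months after discharge

# Example: weaned 6 weeks after a discharge on 2015-03-01 -> "success".
print(weaning_outcome(date(2015, 3, 1), date(2015, 4, 12)))
```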
In children with successful weaning, the consequences of weaning on growth were studied. During follow-up consultations, we tracked changes in growth-curve trajectories relative to the height and weight measurements taken at admission.
Statistical Analysis
We sought to identify factors associated with weaning "success" or "failure" of the hospital stay by comparing children's medical records and personal information in the "success" and "failure" groups. Since most quantitative variables were not normally distributed, data were described as medians and interquartile ranges (IQR 25-75). Qualitative data were described as numbers and proportions. Chi-square and Fisher exact tests were used to compare qualitative variables. Student's t test or the non-parametric Wilcoxon test was used to compare quantitative variables. Analyses were performed with SAS 9.4 (SAS Institute Inc., Cary, NC, USA). All tests were two-sided and p-values <0.05 were considered statistically significant.

Results

Characteristics of the Cohort

Between January 2008 and December 2015, 40 children were admitted to the general pediatric ward for an EF weaning stay. Three were excluded: the first died in the year following the stay from an infectious event unrelated to the eating problem, the second stayed in the hospital for only 2 days because the parents were not satisfied with the program, and the third had weaned between the last consultation to prepare for the stay and the day of admission.
Predominant Diagnosis
The predominant diagnoses differed greatly among children, ranging from significant neurological pathologies to the complete absence of underlying physical pathology; neither was a criterion for exclusion. Half of the children had a neurological or genetic anomaly and only a few [n = 4 (11%)] had an isolated psychogenic eating disorder (Table 2).
Psychomotor or Cognitive Development
Fifteen children (40.5%) were considered to have deficient cognitive development, and 22 (59.5%) normal or suboptimal development.
Eating History
For the 37 children, the age at which eating difficulties became apparent was precocious (median = 0.2 months, IQR = 0.2-3 months); most presented a neonatal pathology, with EF implemented at a median age of 1 month (IQR = 0.2-6 months). In all, 23 children (62%) had a gastrostomy, 17 of them (74%) combined with Nissen-type anti-reflux surgery; the remaining children were fed by NGT. Thirteen children (35%) had problems tolerating EF, especially frequent vomiting; of these, 9 had a gastrostomy (7 combined with Nissen surgery) and 4 an NGT. The median (IQR) duration of EF was 29 (13-52) months.
Weaning Success Rate
For 19 children (51%), the stay was successful immediately or shortly thereafter: 11 were weaned during hospitalization and 8 in the 3 months following discharge. Another 8 children were weaned in the following 12 months, through a stay at the "Centre des Côtes" (a post-acute care and rehabilitation facility) or through follow-up consultations. Ten children (27%) remained EF-dependent.
Growth Monitoring
On arrival, the BMI of the whole group was 14.9 (median 14.8, 25th percentile; z-score = −0.18). At discharge, the BMI was 14.5 (median 14.5, 25th percentile; z-score = −0.27). The median (IQR) weight loss during hospitalization was 2.7% (0-4.7%). Among the 27 weaned children, the mean follow-up of growth monitoring was 27.7 months (median 22.4 months). At that point, their BMI was 14 (median 13.9, 20th percentile; z-score = −0.26). Twenty-three children (85%) followed their initial percentile height and weight growth trajectories. One child dropped to a lower percentile growth curve in height, but he had severe scoliosis, which greatly affected his standing height measurement. One other child dropped to a lower percentile growth curve in both height and weight because of inadequate food intake. Two children were lost to follow-up.
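Growth references such as the Sempé and Pédron curves are conventionally converted to z-scores with Cole's LMS method; the study does not state which transformation was used, so the sketch below shows the standard LMS formula with made-up reference parameters, not the actual reference values.

```python
import math

def lms_zscore(value: float, L: float, M: float, S: float) -> float:
    """Cole's LMS transformation: z = ((value/M)**L - 1) / (L*S) for L != 0,
    and z = ln(value/M) / S in the limit L -> 0."""
    if abs(L) < 1e-9:
        return math.log(value / M) / S
    return ((value / M) ** L - 1.0) / (L * S)

# Illustrative only: with hypothetical reference parameters (M = 15.1,
# L = -1.5, S = 0.08), a BMI of 14.9 maps to a mildly negative z-score.
print(round(lms_zscore(14.9, L=-1.5, M=15.1, S=0.08), 2))  # ~ -0.17
```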
comparison of "success" and "Failure" Success (= definitive tube weaning at discharge or within 3 months of discharge) was more frequent than failure (= weaning >3 months after discharge or no weaning at the end of the study) for girls (p = 0.0188) and for children with normal or limited cognitive development (p = 0.0016), with NGT (p = 0.0098), and with <24 months on EF before the stay (p = 0.0309). It was also more frequent with longer stay in the program (p = 0.02) ( Table 3).
Discussion
Here, we describe a series of children who were hospitalized for a tube-weaning program in the general pediatric ward of a tertiary university pediatric hospital. Hospital stays for EF weaning were beneficial: during a short stay of 2-3 weeks, half of the children were rapidly weaned, and three-quarters were definitively weaned within a year after the stay, without decline in growth trajectory. Success was more frequent than failure for girls and for children with normal or suboptimal psychomotor development, an NGT, less time on EF before the stay, and more time in the program. Difficulties in weaning from EF due to oral eating issues are a major problem for some children and their families, even when the predominant underlying cause requiring nutritional support has resolved. The frequency of cases resistant to weaning in the ambulatory care setting is difficult to evaluate, because it depends on the tolerance of families to the continuation of EF, the processes available in ambulatory care to achieve weaning, and teams' awareness of the existence of these structures. The fact that in 8 years, 40 such children were admitted to an expert general pediatric ward confirms the demand for such interventions and argues for expanding these weaning stays in expert centers throughout France when management in ambulatory care is failing.
For the most part, children in this series were born premature or with intrauterine growth restriction. Logically, these children are more difficult to wean in ambulatory care because parents and doctors particularly emphasize weight gain in the neonatal period, with the objective of compensating for impediments to stature and weight growth, which may impose a more prolonged and intense period of EF. Moreover, such situations are often stressful to parents, and premature children are for the most part more vulnerable than term infants.
Families in this series were predominantly from disadvantaged socio-occupational categories. Their likely weak financial means cannot be the cause of the ambulatory care failure, because in France medical care is free for all children with severe chronic disease, especially those on home EF. Our children were often at the center of an untreated psychopathological family dynamic, which may explain the failure of ambulatory care and the benefits of a longer stay in centers for post-acute care and rehabilitation. The scope of this study did not allow for objective evaluation of the contribution of maternal psychopathology or dysfunction of the mother-child relationship. This remains to be examined in future prospective studies, which should additionally evaluate previous maternal history of eating disorders, both with standardized scales.
The predominant diagnoses of children in our series were very heterogeneous. Nonetheless, only a few presented an eating problem of exclusively psychogenic origin. This finding can be explained by EF weaning in ambulatory care being less difficult for these children. Moreover, the fact that most children presented a severe organic pathology shows that EF is not used excessively in the long term and that the indications for its use were initially fully justified. As a whole, the epidemiological description of our series shows rather severe conditions, which is explained by these children being the most resistant among tube-dependent children of similar age (13).
Weaning stays offered in international clinics or specialized units use programs closely related to ours, with a multidisciplinary approach (14, 15). In a recent systematic review, Sharp et al. described eight studies of inpatient stays, only two having a multidisciplinary team of caregivers similar to ours (16). In fact, interventions by a psychologist, an occupational therapist, a physiotherapist, a dietician, and a speech-language pathologist are useful in these complex situations. Our method of decreasing EF supplementation at the beginning of the stay is more progressive than other methods, the decrease being 50% on the second to third day after admission in the Graz model. The intensity of management, in terms of number of daily sessions, is perhaps lower in a pediatric service than in a dedicated unit in which all staff members are exclusively dedicated to weaning stays.
Our results show a short-term success rate of 51% of admitted children. This is lower than the rates reported in other studies. Shalem et al. reported a success rate of 86% in Israel, with 24 of 28 children weaned (12). Hartdorff et al. also reported an 86% success rate in Amsterdam, with 18 of 21 children weaned (11). In the United Kingdom, Wright et al. reported a success rate of 78%, but with a longer follow-up (8). Finally, Trabi et al. found a greater success rate in Graz, 92%, with 203 of 221 children weaned (7). Success rates between teams are not easily comparable. First, the definition of successful weaning itself is not identical among centers, both in the practical details of the feeding regimes adopted once weaned from EF and in the timeframe considered between leaving the facility and weaning. We defined successful weaning as the discontinuation of NGT or gastrostomy, but with the concomitant absence of a slowing growth rate, a rapid resumption of an ascending and regular weight gain, and a return to a normal social lifestyle. These stipulations are not comparable between specialist centers or are poorly described in publications. Our pediatricians' insistence on growth, within a pediatric ward in a tertiary children's hospital, may be more stringent than that of a child psychiatry unit team. Moreover, weaning should be followed by a resumption of eating in a harmonious manner, more or less balanced and without direct conflict. It is not about weaning "at any cost": the child must succeed in eating various foods with different textures and with a significant reduction in food aversion. Then again, what exactly is meant by "harmonious eating" is subjective; it varies among teams and families and is poorly evaluated along the way. This variability precludes comparison of different methods and structures, especially those lacking medium- to long-term follow-up. The pathologies in our series were for the most part severe, in children with often compromised developmental levels. Our recruitment is thus not comparable to that of some specialized clinics, which can partly explain the lower success rate of our weaning stays. Finally, the lack of funding for public hospitals, the constraints of a general pediatric ward whose team is often solicited for emergencies and other care, and the suspension of weaning stays during the infectious winter season may also help explain our results. This hypothesis justifies the development of weaning stays in structures equipped for medium-term stays, in a less clinical setting better adapted to providing re-education/rehabilitation-type care than the services offered in a university hospital.
The consequences of weaning from EF on the height and weight growth of children in this series are satisfactory, because only one child dropped to a lower percentile growth curve in weight and height in the mid-term (2 years). Our data are comparable to those of Wilken et al., showing rapid weaning with no impact on z-scores, namely within 2 SD of the mean (17), and to Wright et al., who reported a decrease in BMI at weaning without a concomitant decrease in height (8). However, the other studies did not present post-weaning growth data, although these are an important indicator of the quality of patient management.
We sought to identify factors that could predict the success or failure of our program. Because of the small sample, these results require cautious interpretation. Factors associated with failure were male sex and a reduced level of psychomotor or cognitive development. We have no explanation for the association with gender. In contrast, delays in development no doubt reduce a child's capacity to process what is being presented, which may indicate that these children would benefit from a more realistic dialog, with firmer strategies to prevent problems associated with oral feeding. EF by gastrostomy and EF for more than 24 months before weaning were also negative factors. These two factors are interconnected: children with gastrostomy are in fact those with more serious underlying pathologies, requiring an extended period of nutritional support. Indeed, a gastrostomy is more comfortable than an NGT for children and frees the mouth for oral eating, but for children and their families it implies longer-term treatment, with weaning seeming less urgent than with an NGT. Finally, a longer period of hospitalization seemed to predict success, but this is biased by the fact that children for whom our treatment was not effective during the first week left more rapidly than those for whom the third week of the stay was important for success.
Conclusion
We offer children resistant to weaning from gastric tube feeding in ambulatory care an efficient program consisting of a short stay with multi-professional intensive management in a hospital structure, which ensures their medical safety in the short and medium term. To allow valid comparisons between teams, strategies, and results of such programs, more precise and consensual definitions of outcome and follow-up parameters are needed, especially long-term follow-up of growth and eating behavior in these children.
Ethics Statement
We analyzed only the files of patients treated in our ward, in agreement with the ethics policy of our institution.
Author Contributions

JM and BT collected the data and participated in writing the manuscript. GM, ML-G, CC, CF, AR, and AL are the pediatricians, occupational therapist, speech therapist, and nurse who were in charge of the children during the stays; they all participated in the study design and corrected the manuscript. CV performed the statistical analysis. VA is the head of the service, designed the study, and wrote the manuscript. All authors validated the final version.
Acknowledgments
We thank Laura Smales for her help in English language editing. We thank the families and children for their trust and collaboration.
Analysing the System of Settlements in Slovenia: Traditional and Alternative Approach
Introduction
The Republic of Slovenia is characterized by a large number of settlements. Although the national territory extends over 20,273 km2, more than 6,000 settlements are registered. The number of settlements is approximately the same as in the Republic of Serbia (6,155 settlements - Tosić, 2009), but the Republic of Serbia is 4.4 times larger than the Republic of Slovenia. Settlements are therefore an important landscape element of Slovenia that needs to be taken into consideration in geographical analyses of Slovenia and in shaping its regional and spatial development strategies.
Settlements in Slovenia have been of particular interest to geographers and other social scientists since the beginning of modern geographical science in Slovenia. An analysis of settlements was included in the first general monograph presenting the Slovenian territory, by Melik (1935). Settlements and their system are also presented in monographs published after independence in 1991 describing the geographical characteristics of Slovenia: the Lexicon of place names of Slovenia, Slovenia - regions and people, the Geographical atlas of Slovenia, and the National atlas of Slovenia (Orožen Adamič, 2004). In 2004, the Association of the Geographical Societies of Slovenia prepared the scientific monograph Slovenia - a geographical overview, in which Kladnik (2004) and Ravbar (2004) published a short analysis of settlements, the system of settlements and urbanization in Slovenia.
The system of settlements in Slovenia was studied carefully especially at the end of the 1960s, while the regional spatial plan of Slovenia was being prepared. At that time, geographers developed the concept of the polycentric urban system of Slovenia (Kokole, 1975), which is, although revised, still in operation. Kokole and Vrišer also started research on the central place system, which was recently re-examined by Cigale and Benkovič Krašovec (Černe, et al., 2007). Vrišer (1980) studied the system of settlements in Slovenia within the system of settlements of the Yugoslav federation. Urban and traffic systems were analysed by Černe and Pelc (1992), and later by Černe and Kušar (2008). A new categorization of settlements was prepared recently by Černe and his colleagues (Černe, et al., 2007). The morphology of settlements has been of research interest to geographers as well (Drozg, 2002). Rebernik (2004) was active in analysing the socio-geographic structure of settlements, while Bole (2008) worked on the economic transformation of urban settlements. Just recently, Zupančič (2007) opened a specific topic on the spatial problems of Roma settlements in Slovenia. As can be seen, the bibliography on settlements and the system of settlements in Slovenia is extensive; it is therefore not possible to mention all relevant topics, authors and their work.
The aim of this article is to present the basic characteristics and changes of the system of settlements in Slovenia at the beginning of the second decade of the 21st century, with special emphasis on the hierarchy and importance of settlements. To achieve this aim, different traditional approaches are used (settlement network analysis, urban settlements, central settlements, the planned polycentric urban system), but special emphasis is given to an alternative approach to determining the system of settlements in Slovenia that tries to include several indicators. The latter approach was developed by Černe and the author of this article (Černe, et al., 2007).
The methods used in the survey are based on the calculation of relative numbers and basic statistical analysis. The sources of data are the database of the national statistical office, especially the register-based census data from 2011, analysis of the geographical literature, and spatial planning strategies. Results of the categorization of settlements in Slovenia as an alternative approach are presented and evaluated as well.
The first part of the article presents traditional approaches to analyzing the system of settlements: the basic characteristics of the settlement network in Slovenia using the newest population statistics, and an analysis of central settlements and urban settlements, together with a presentation of the planned network of settlements as defined in the national spatial development strategy. The second part presents the results of the categorization of settlements in Slovenia as an alternative approach. The article concludes with a short summary of the basic characteristics of the system of settlements in Slovenia, specific issues regarding its current and future development, and possibilities for future research.
Basic characteristics of the settlement network
According to data from the Register of Spatial Units, kept by the Surveying and Mapping Authority of the Republic of Slovenia, there were 6,032 settlements in Slovenia on 1 April 2012 (Territorial units..., 2012). With 2,055,496 inhabitants living in Slovenia at that time (first quarter of 2012; SI-Stat, 2012), there were on average 341 inhabitants per settlement (2,055,496 / 6,032 ≈ 341). With the exception of Ljubljana and Maribor, towns in Slovenia are small; on the other hand, there are many small rural settlements: the typical Slovene settlement has fewer than 500 inhabitants.
The rank-size relationship of settlements in Slovenia shows the primacy of Ljubljana, the largest settlement in the system. At the other end of the line, the influence of numerous small (rural) settlements can be seen; the distribution of settlements in Slovenia therefore shows an unexpressive oligarchic pattern (Figure 1). The second characteristic of the settlement network in Slovenia is its dispersion. There are about 30 settlements for every 100 km2 on average, so the distances separating them are relatively small. The mean distance between settlements and their nearest neighbour, under an assumption of random distribution, is slightly above one kilometre (1.203 km). The nearest neighbour index, calculated from the mean distance between nearest settlements and the mean distance between randomly distributed settlements and their nearest neighbours (see Hammond, McCullagh, 1974), is 0.8187 (p = 0.00), surprisingly showing their (medium) clustering. The value of the nearest neighbour index must be interpreted in light of the physical-geographical characteristics of Slovenia, because mountainous areas are less populated. Settlements are very dispersed especially in the pre-Alpine and sub-Pannonian parts of Slovenia (the central and eastern parts), but it needs to be pointed out that this is a unique characteristic of the cultural landscape of those areas.
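The nearest neighbour index cited here follows the Clark-Evans construction: the ratio of the observed mean nearest-neighbour distance to its expectation 1/(2√λ) under complete spatial randomness, where λ is the point density. A sketch with random placeholder coordinates (the published value also depends on the study area used and on edge corrections):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
area_km2 = 20_273.0
# Placeholder x, y coordinates in km; real settlement locations would go here.
points = rng.uniform(0, np.sqrt(area_km2), size=(6_032, 2))

# Observed mean nearest-neighbour distance (k=2: self plus nearest neighbour).
dist, _ = cKDTree(points).query(points, k=2)
d_obs = dist[:, 1].mean()

# Expected mean distance under complete spatial randomness.
density = len(points) / area_km2
d_exp = 1.0 / (2.0 * np.sqrt(density))

R = d_obs / d_exp   # R < 1 indicates clustering, R > 1 dispersion
print(f"nearest neighbour index R = {R:.3f}")
```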
Namely, the majority of Slovene settlements developed in the period when agriculture was the basic activity, which is reflected in their location, density and size. There are still many isolated farmsteads, tiny hamlets, and smaller or larger dispersed or compact settlements (Kladnik, 2004). Natural conditions were also important for the development of the network of settlements. The diverse relief of Slovenia was not suitable for centralization and the growth of a single centre. The main administrative and economic centres on the territory of Slovenia, which developed in the second half of the 19th century after the construction of the first railroads, and especially after the Second World War with intensive industrialization, became the backbone of polycentric development, later enhanced by an official policy of polycentric development that highlighted the importance of regional and, later, municipal centres.
Processes of deagrarization, industrialization and centralization, together with polycentric development policies from the 1960s on, influenced the development of some settlements, while others stagnated. The result is an uneven distribution of population in Slovenia: a sixth of the country has above-average population density, and four-fifths of the population lives there. Those areas are mostly in altitude belts below 400 meters, mainly around Slovenia's largest settlements, which are located on plains and along the shoreline of the Bay of Trieste. Almost a tenth of Slovenia is unpopulated (Kladnik, 2004).
Areas around larger settlements (Ljubljana, Maribor, Celje, Koper, Kranj, ...) experienced rapid development. Suburbanization occupies around one-third of Slovenia's surface area. Suburbanized settlements include the extremely transformed villages found throughout the country. There are 280 settlements in the immediate suburban vicinity of larger settlements, followed by another 660 very urbanized suburban settlements; in these two groups live more than 400,000 inhabitants. The next circle comprises 965 urbanized and semi-urbanized rural settlements with altogether 300,000 inhabitants. Other settlements are characterised as rural (Kladnik, 2004).
But even small rural settlements are modernised. The polycentric development of Slovenia and improvements in the quality of the road network have enabled people to live in the countryside, in (transformed) rural settlements, and commute daily to their place of work. Therefore, small settlements in Slovenia are usually not typical rural settlements with agriculture as the most important economic activity of their residents: they have important dwelling functions, while rural centres also host some services and manufacturing. Exceptions are found in the less developed peripheral mountainous and border areas of Slovenia, where centres of economic development are often too far away or less powerful. Depopulated, less developed areas cover almost one-half of the national territory.
Urban settlements in Slovenia
When discussing the role of urban settlements in Slovenia, it needs to be pointed out that the level of urbanization reaches only about 50%. The official level of urbanisation was around 50% in 2011, so half of Slovenians live in the countryside. A vast majority of them, however, work in the manufacturing and service sectors, since agriculture employs only 4% of the active population in Slovenia. The countryside is urbanized, especially around larger settlements. Suburbanization and periurbanization are important geographical processes, because residents are leaving towns to live in the countryside, with lower dwelling prices and better quality of life. But this is possible only in areas with good infrastructure linking settlements with important economic centres.
Which Slovene settlement is a town? According to the Local Self-government Act from 1994, a town is a larger urban settlement that in terms of size, economic structure, population density and historical development differs from other settlements. A town needs to have more than 3,000 inhabitants. A settlement obtains town status by decision of the National Assembly of the Republic of Slovenia (Regulations…, 1999).
The National Assembly of the Republic of Slovenia confirmed "town" status for 51 settlements in Slovenia. Some municipal councils declared another 7 towns. There are therefore 58 settlements in Slovenia with the status of "town" (Kušar, Pichler-Milanović, 2010). There are also some other (urban) settlements that do not have "town" status despite fulfilling the criteria defined by the Local Self-government Act (Urban settlements…, 2004).
There are different concepts of how to define urban settlements; each social science (geography as well) uses its own criteria to define them. In order to prepare a unified list of urban settlements, experts from the national statistical office decided to use a method based entirely on statistical data. The most recent list of urban settlements and settlements in urban areas was prepared in 2003 and is used exclusively for statistical surveys and analysis. The indicators used for their definition are joined into four groups of criteria: number of population, morphology (population density, built-up areas), functions (number of jobs, daily migrants, transport connections, services) and structural indicators (e.g. number of farms) (Urban settlements…, 2004).
According to all four criteria, the list of urban settlements and settlements in urban areas includes 156 settlements in 104 urban areas, each of which can be represented by one urban settlement or by an urban settlement with suburban settlements (Urban settlements..., 2004). All towns are also urban settlements except Višnja Gora and Kostanjevica na Krki; both retained their status for historic reasons.
Taking into account the 2011 register-based census data, when the total population of Slovenia was 2,050,189, a little more than one million people lived in urban settlements. That gives an urbanisation rate of 49.9%.
The Local Self-government Act also defines municipalities (NUTS 5) with a special status: "urban municipalities". An urban municipality is a densely populated settlement (or settlements) on a unique territory inter-linked by daily migrations of population. A town is granted the status of urban municipality when it has at least 20,000 inhabitants and 15,000 jobs, of which more than half are in the service sector, and represents the geographic, economic and cultural centre of its functional urban area. The status of urban municipality needs to be confirmed by the National Assembly of the Republic of Slovenia after a local (municipal) referendum. Among the 212 municipalities currently in operation in Slovenia, only 11 have the status of urban municipality: Celje, Koper, Kranj, Ljubljana, Maribor, Murska Sobota, Nova Gorica, Novo mesto, Ptuj, Slovenj Gradec, and Velenje. Urban municipalities need to have services typical of central places of higher hierarchical levels (Regulations…, 1999).
The system of central settlements in Slovenia
Central places are basic centres in the spatial organization of human society. They are usually determined on the basis of the importance of central place activities (selected service activities) and of the functional significance of central places within the network of all settlements through their gravitational influences. Central places are arranged in a hierarchy, so that central settlements of a higher hierarchical level have various more demanding functions along with the basic ones (Černe, et al., 2007).
The network of central settlements in Slovenia has been influenced primarily by the uneven distribution of population, the surface relief, historical development, and the political and administrative division of the territory (Kladnik, 2004). The first studies of central places in Slovenia date to the late 1960s, when I. Vrišer and V. Kokole defined the central places in Slovenia, their hierarchy and their gravitational influence. V. Kokole prepared another thorough study of central places in Slovenia in 1971. The next important milestone came in 1987, when I. Vrišer prepared a new categorization of central places in Slovenia using almost the same indicators as in his previous research; he defined 600 central places on seven hierarchical levels. I. Vrišer repeated the analysis in 1994 with the same methodology and defined 612 central places (Černe, et al., 2007).
The latest research on central places in Slovenia was conducted by D. Cigale (published in 2002) and M. Benkovič Krašovec (published in 2006). These two studies are the basis of the following short presentation of central places in Slovenia.
D. Cigale prepared a study of central-place relationships in Slovenia in 1999. He discussed only the role of central places of higher ranks. His method was very similar to previous ones: schools were asked to define where the local population obtains the different goods and services included in the questionnaire. On the basis of the questionnaire and statistical data on the provision of central places with different services, he distinguished two macro-regional centres (Ljubljana, Maribor), 17 mezzo-regional centres and 79 micro-regional centres. Beside this hierarchical structure, Cigale (2002) pointed out that there were also numerous smaller, less important centres at lower hierarchical levels (Cigale, 2002).
The lower hierarchical levels of central places in Slovenia were thoroughly studied by M. Benkovič Krašovec (2006). According to the presence of selected service activities in settlements, she defined 358 central places of the first degree (sub-local centres) and 132 local centres, i.e. settlements of the second degree (Černe, et al., 2007).
Of the more than 6,000 settlements in Slovenia, fewer than 10% (588, or 9.7%) play some role in the distribution of central place activities (Kušar, Pichler-Milanović, 2010). But these settlements are very important in shaping the economic, social and spatial structure of the territory of Slovenia.
Planned (future) system of settlements in Slovenia
The basic strategic document at the national level that plans the development of the system of settlements in Slovenia is the Spatial Development Strategy of Slovenia (2004). It provides the framework for spatial development across the entire national territory and sets guidelines for development within the European space. It also provides the concept of spatial planning and management, land use and spatial protection.
One of the goals of the Spatial Development Strategy of Slovenia (2004) is polycentric development of the network of towns and other settlements; the strategy therefore continues the paradigm of polycentric development that has been the primary guideline for the development of settlements in Slovenia since the 1960s. Special attention is given to the development of urban centres of national and regional significance as centres of regional territories. The strategy also encourages functional and infrastructural integration of cities, towns and other settlements.
The basic framework of the urban system is a polycentric network of centres of national and regional significance. At the local level, centres of inter-municipal significance, important local centres and other local centres should be developed, with efficient and balanced accessibility, appropriate distribution of functions, and transport interconnections (Spatial…, 2004).
Urban settlements with at least 10,000 inhabitants and the potential to develop services, supply and other activities for the population in their areas of influence should, according to the Spatial Development Strategy of Slovenia (2004), be developed as centres of national significance. Fifteen centres of national significance are planned, of which Ljubljana, Maribor and the conurbation Koper-Izola-Piran are, due to their role, size and position, national urban centres of international significance. Beside the coastal conurbation, there are also some other conurbations of functionally and spatially linked urban settlements that can develop as centres of national or regional significance.
Regional centres and inter-municipal centres form the second level of the system of urban settlements in Slovenia. Regional centres are urban settlements with at least 5,000 inhabitants whose gravitation areas are remote, hilly, mountainous or border areas, or are far away from main transport corridors. Inter-municipal centres should be given development priority as well, because they are important for the provision of services to peripheral or hilly areas or areas spatially remote from other important centres (Spatial…, 2004).
Methods and data
The starting point for the alternative analysis of the system of settlements in Slovenia is the notion that the functions of settlements according to the location of central activities are just one of the basic elements presenting the complex significance of settlements and their cohesion. The functional significance of a settlement also depends on other factors, for example its size, geographical position, economic power and transport location, which may to a great degree determine the significance of an individual settlement in the system of settlements. Therefore, we used not just the degree of provision of a settlement with different central place activities for the categorization, but also selected elements and processes which can determine relations between settlements within the system of settlements and their relations to spatial development.
The importance of each settlement in the network of all settlements is based on its size, functional determination (degree of centrality, i.e. provision of the settlement with central activities) and economic significance (number of jobs in the settlement). The categorization of settlements in Slovenia was therefore determined according to four indicators:
1. number of inhabitants;
2. degree of centrality of the settlement;
3. number of working places;
4. share of the working active population not working in the settlement of residence.
The categorization according to the above-mentioned indicators comprises a selection of 590 central places in Slovenia and 206 settlements which are not central places but have more than 100 working places. In all, 796 settlements in Slovenia were included in the categorization (13.2% of all settlements in Slovenia). The other settlements have low economic importance with an emphasized dwelling function. The categorized settlements are also population nodes.
Each settlement analysed in the categorization was classified into one of six classes in order to standardize the data on the same measurement scale and to follow the logic of the distribution of central settlements. The next step was typification of settlements based on the combination of all four indicators used in the analysis; a sketch of such a procedure is given below. The final result was a categorization of settlements on 6 levels, keeping the standardized measurement scale. The levels were named using the most common terms for describing the system of settlements.
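As signposted above, the standardization and combination steps can be sketched as quantile binning of each indicator into six classes followed by a combination rule. The class breaks and the combination rule are not spelled out in the text, so both are illustrative assumptions, and all the data below are synthetic:

```python
import numpy as np
import pandas as pd

np.random.seed(0)

def six_class(series: pd.Series) -> pd.Series:
    """Bin an indicator into classes 1 (lowest) .. 6 (highest) by quantiles.
    The actual study breaks are not published here; quantiles are a stand-in."""
    return pd.qcut(series.rank(method="first"), q=6, labels=range(1, 7)).astype(int)

settlements = pd.DataFrame({
    "population":    np.random.lognormal(6, 1.2, 796).round(),
    "centrality":    np.random.randint(0, 7, 796),     # degree of centrality
    "jobs":          np.random.lognormal(5, 1.5, 796).round(),
    "out_commuting": np.random.uniform(0, 1, 796),     # share working elsewhere
})

classed = settlements.apply(six_class)
# Illustrative combination rule: round the mean class, yielding a 6-level hierarchy.
settlements["level"] = classed.mean(axis=1).round().astype(int)
print(settlements["level"].value_counts().sort_index())
```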
Results and discussion
The settlements used in the categorization were classified into 2 macro-regional, 15 regional, 52 sub-regional, 142 local, 299 sub-local and 286 micro-local centres.
Ljubljana and Maribor are macro-regional centres: the most important central settlements and nodes of population and economic development. Apart from the two macro-regional centres, there are also 15 regional centres (Brežice, Celje, Jesenice, Koper, Kranj, Krško, Murska Sobota, Nova Gorica, Novo mesto, Postojna, Ptuj, Ravne na Koroškem, Slovenj Gradec, Trbovlje and Velenje), which are relatively evenly distributed throughout the national territory. Some of them are spatially close to each other (Ravne na Koroškem-Slovenj Gradec and Krško-Brežice), forming conurbations. Sub-regional centres are mostly seats of ex-municipalities or communes, which existed from socialist times until the reform of local self-governance in 1993. Under the old municipal system, they acquired many central functions (schools, health services, administration) and economic activities in the secondary and service sectors; their role as inter-municipal centres is therefore still evident in the system of settlements in Slovenia. The network of local, sub-local and micro-local centres is denser in the eastern part of Slovenia, where population density is higher. However, the network of all categorized settlements is quite well dispersed. Local centres are often larger settlements with an emphasised economic function, but not necessarily with central place functions.
Some of the categorized settlements are spatially adjacent and can be incorporated into agglomerations. An agglomeration is a group of neighbouring categorized settlements that are spatially adjacent with possible functional connections. The importance of an agglomeration is determined according to the highest-ranked settlement in it. The Ljubljana agglomeration, of macro-regional importance, is the largest agglomeration in Slovenia, covering the majority of the Ljubljana functional region and stretching along the highway corridors. All regional centres, with the exception of Novo mesto, have their own agglomeration. Although they are small, agglomerations are formed at local levels as well. The results of the categorization show that the system of settlements in Slovenia follows the principles of dispersed decentralization and the formation of urban regions, which is a recent development pattern of systems of settlements in the developed world (Ravbar, 2004).
On the other hand, the categorization of settlements in Slovenia points out that hierarchy between settlements still exists. But the hierarchy inside the system of settlements is changing, especially at lower levels. For example, when Cigale (2002) compared the results of his research on central-place hierarchy with the findings of Vrišer from 1987, he pointed out numerous changes in the central-place relationship, especially at the micro-regional level, where, due to the growing importance of some smaller centres, the network of micro-regional central places has become denser. The importance of small centres, particularly centres of newly founded municipalities, was estimated to be higher as well (Cigale, 2002; Černe, Kušar, 2012). On the other hand, Ljubljana strengthened its position as the administrative, political, cultural, commercial and educational centre of the Republic of Slovenia, but the categorization of settlements does not capture this fact.
It is interesting that not all categorized settlements are also transport nodes. A transport node is defined as the sum of the nodes and intersections of all kinds of transport modes, not just one. Only the synergy of different transport modes and nodes indicates the degree of connection between transport and the centrality of the settlement. The hierarchy of transport nodes is determined according to the weighted values of hierarchical transportation lines: roads and railways, number of bus lines, airports and ports. Their categorization was prepared using the same methodological approach as in the case of the settlement categorization (Černe, Kušar, 2008).
From the spatial distribution of the six-grade hierarchical settlement structure and the six-grade hierarchical structure of transport nodes, it is evident that only at the highest hierarchical level are the two structures identical. At the regional level, regional centres are often not also the locations of regional transport nodes: just 5 of the 15 regional centres coincide with regional transport nodes. In the case of sub-regional centres and sub-regional transport nodes the ratio is even lower: the spatial distribution of 52 centres and 26 transport nodes is identical in just eleven cases. Therefore, we can conclude that the correspondence between the spatial structure of the urban system and the transport infrastructure, as far as the spatial distribution of their hierarchical elements is concerned, is relatively weak, mostly at lower levels. This means that categorized settlements in Slovenia do not necessarily hold the same position from the transportation point of view.
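The coincidence figures in this comparison amount to intersecting the two hierarchies level by level. A toy sketch with placeholder settlement names, not the study's actual lists:

```python
# Placeholder lists: which settlements sit at the "regional" level in each
# of the two hierarchies (settlement categorization vs. transport nodes).
regional_centres = {"Celje", "Koper", "Kranj", "Novo mesto", "Ptuj",
                    "Velenje", "Murska Sobota", "Nova Gorica"}
regional_transport_nodes = {"Celje", "Koper", "Kranj", "Postojna", "Jesenice"}

coinciding = regional_centres & regional_transport_nodes
share = len(coinciding) / len(regional_centres)
print(f"{len(coinciding)} coinciding nodes ({share:.0%} of regional centres)")
```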
Conclusions
The system of settlements in Slovenia is characterized by a large number of settlements, their dispersion and a low level of urbanization. There are only two larger towns, of which Ljubljana is much larger than Maribor, and many small towns (141, or 2.3%). A large share (48.7%) of the population resides in settlements with fewer than 1,500 inhabitants. The urban system of Slovenia is relatively weak from the European point of view: only the Ljubljana urban region exceeds half a million inhabitants. However, the most important settlements in Slovenia are included on maps showing territorial potentials in the enlarged European Union.
Comparison of the three approaches used to analyse the system of settlements in Slovenia (central settlements, the planned polycentric urban system and the categorization of settlements) gives similar results. According to the central settlements analysis and the categorization of settlements, there are 2 most important settlements (macro-regional centres) in Slovenia: Ljubljana and Maribor. The planned polycentric urban system adds the coastal conurbation Koper-Izola-Piran to the macro-regional centres. At the second highest level (regional centres), the same settlements are placed in all three cases, except for Koper, which is placed in the group of national urban centres of international significance in the planned urban system proposed by the Spatial Development Strategy of Slovenia (2004). The first differences between the three approaches appear at the sub-regional level, where the categorization of settlements has 52 settlements, the central settlements system 79 (including the 2 macro-regional and 15 regional centres, following the theory that all settlements of a higher level automatically perform the functions of central settlements of lower levels), and the planned system of settlements 15 urban centres of regional significance and 20 urban centres of inter-municipal significance.
The system of settlements in Slovenia is, independently of the categorization used, governed by fewer than 70 settlements in which the main economic functions and the majority of the population are situated. On the other hand, despite the polycentric spatial development paradigm from the 1960s on and the decentralized, even dispersed, development of small economic activities before and after independence in 1991, the vast majority of settlements in Slovenia are mainly without any significant economic functions that would be important for their own development or the development of other settlements. They are either small rural villages or larger settlements located mostly in suburban areas with an emphasized dwelling function. Rural settlements are of vital importance in Slovenia, since their number and the population settling in them remain high. As an important part of the rural population commutes daily to work in economic centres, it is expected that pressure to build new houses in rural areas will continue. Although this might cause additional environmental pressures due to air pollution, consumption of non-renewable resources, building on agricultural land and further dispersion of settlement, the process will be important from the point of view of national regional policy goals that emphasize polycentric development and the prevention of depopulation. Rural settlements around important economic nodes are in a better position, but it is expected that the most peripheral settlements will be confronted with further depopulation.
The system of settlements in Slovenia is changing. It is witnessing a partial disintegration of the lower hierarchical levels and a growing importance of the highest levels, especially Ljubljana. Surprisingly, neither the traditional nor the alternative approaches used to analyse the system of settlements in Slovenia reflect these changes. Even the alternative approach, which used more than one indicator, adds little to our knowledge about the system of settlements in Slovenia. Future research on the system of settlements in Slovenia should therefore be directed toward integrating traditional indicators with indicators capturing development dynamics, together with the European dimension of individual settlements.
Figure 1. Rank-size relationship of settlements in Slovenia (Source: SORS, 2011 Register-based Census). The distribution of settlements in Slovenia shows an unexpressive oligarchic pattern.

Figure 2. The network of settlements in Slovenia. The first studies of central places in Slovenia date to the late 1960s, when I. Vrišer and V. Kokole defined the central places in Slovenia, their hierarchy, and their gravitational influence. V. Kokole prepared a further thorough study of central places in Slovenia in 1971. The next important milestone in the analysis of central places in Slovenia came in 1987, when I. Vrišer prepared a new categorization of central places using almost the same indicators as in his previous research; he defined 600 central places in seven hierarchical levels. I. Vrišer prepared a new analysis of central places in 1994, using the same methodology as in 1987, and defined 612 central places.

Figure 3. City municipalities, towns, urban settlements, and settlements in urban areas in Slovenia (Source: adapted from Urban settlements…, 2004).

Figure 4. Central places in Slovenia in the middle of the first decade of the 21st century (Source: adapted from Kušar, Pichler-Milanović, 2010).

Figure 5. Polycentric urban system in Slovenia (Source: Spatial Development Strategy of Slovenia, 2004).

Figure 6. Categorization of settlements in Slovenia.
Table 1. Settlements with more than 10,000 inhabitants in Slovenia in 2011.

Table 2. Groups of settlements in Slovenia according to the number of population in 2002.

Table 3. The concept of the system of settlements in Slovenia as defined in the Spatial Development Strategy of Slovenia.

Table 4. Indicators used for the categorization of settlements in Slovenia.

Table 5. Categorization of settlements and transport nodes in Slovenia: comparison of the structure.
Twisted spacetime in Einstein gravity
We find a vacuum stationary twisted solution in four-dimensional Einstein gravity. Its frame-dragging angular velocities are antisymmetric with respect to the equatorial plane. It possesses a symmetry under joint inversion of time and parity with respect to the equatorial plane. Its Arnowitt-Deser-Misner (ADM) mass and angular momentum are zero. It is curved but regular all over the manifold. Its Komar mass and Komar angular momentum are also zero. Its infinite red-shift surface coincides with its event horizon, since the event horizon does not rotate. Furthermore, we extend this solution to the massive case and find similar properties. This solution is a stationary axisymmetric solution, but not Kerr. It explicitly proves that pure Einstein gravity permits rotational modes other than Kerr. Our results demonstrate that the Einstein theory may have much richer structure than we ever imagined.
I. INTRODUCTION
Einstein's general relativity [1] is the standard modern gravity theory, and exact solutions take a pivotal status in the theory. The well-known three classical tests of general relativity depend on the Schwarzschild solution [2], which played a decisive role in the early development of general relativity. Many exact solutions of the Einstein equation have been found [4]. Among these exact solutions, two of the most significant are Schwarzschild and Kerr [3]. The Schwarzschild solution describes the spacetime around a spherically symmetric star; the Kerr solution describes the spacetime around a rotating star. Actually, the Schwarzschild and Kerr solutions can describe gravitational fields over vast ranges, from millimetres to clusters of galaxies (with Newton's law as a weak-field limit).
It is generally believed that the Schwarzschild solution is the unique static vacuum solution, and that the Kerr solution is the unique rotating vacuum solution on an asymptotically flat manifold in Einstein gravity, after the proofs of the uniqueness theorems; for reviews see [5]. Almost all celestial objects are rotating, quickly or slowly, so the Kerr solution is extremely important in astrophysics. Unfortunately, no completely satisfactory interior solution matching to Kerr has been obtained up to now, though great efforts have been devoted to this topic for more than 50 years. On the contrary, several negative results have been obtained. For example, a perfect fluid cannot be the source of Kerr [6], and the analytic approximate solution that describes a slowly rotating astrophysical object does not lead to Kerr [7]. The other problem of the Kerr solution is that it is unstable against linear perturbations in the interior region [8]. That means the result of the collapse of a rotating star may not be a Kerr black hole, even if the exterior region of the progenitor can be described by the Kerr solution. So, theoretically, the question is: are there any rotating modes different from Kerr, more or less like the wave functions of the hydrogen atom with the same angular momentum?
At first sight the answer is no, since the Carter uniqueness theorem forbids them [9]. Exactly, the Carter uniqueness theorem requires: (a) the manifold is axially symmetric and stationary; (b) the manifold is asymptotically flat, and the total mass and total angular momentum measured at infinity, i.e., the ADM mass and angular momentum [10], are M and J respectively; (c) the manifold is regular everywhere in the exterior region of the horizon (including the horizon). Under these conditions, the spacetime must be Kerr. This is a quite exciting but harsh theorem. It almost determines the metric around any celestial object (almost all celestial bodies are rotating), including planets, stars, galaxies, and clusters of galaxies: it is just Kerr. However, as we have mentioned, Kerr has some problems. Interestingly, by a careful analysis of the Israel uniqueness theorem [11] and the Carter uniqueness theorem, we find that the Einstein equation permits vacuum asymptotically flat rotating solutions other than Kerr. We explain the general idea of our approach to a rotating spacetime that evades the uniqueness theorems. We consider a rotating spacetime with J = 0. Since it is rotating, the Israel uniqueness theorem says nothing about this case. At the same time its total angular momentum is zero, so the Carter uniqueness theorem does not require it to be Kerr either. Our key idea is that "rotating" does not contradict J = 0, which is beyond the traditional lore. For example, we can consider the case in which the space rotates in different directions in different spherical shells. The angular momenta in different shells are arranged to be exactly counteracted, so that the total angular momentum vanishes. Without breaking the axial symmetry, we require the directions of rotation of the shells to be up or down. We find a twisted solution which is a little more complicated than this heuristic example.
The metric reads as in Eq. (1), where a is a constant and (t, r, θ, φ) are spherical coordinates, which reduce to the standard spherical coordinates of Minkowski spacetime when a = 0. The Ricci tensor of this metric vanishes; thus it is a vacuum solution of the Einstein equation. It is a curved space, regular all over the manifold, since the only non-zero scalar polynomial of the curvature (the Kretschmann scalar), Eq. (3), is finite everywhere. It is easy to check that the spacetime (1) possesses a discrete symmetry under the joint transformation of time inversion (t → −t) and reflection with respect to the equatorial plane (θ → π − θ).
The infinite red-shift surface dwells where g_tt vanishes, and the event horizon, a surface f = constant, satisfies the corresponding null condition. It is easy to obtain that the infinite red-shift surface coincides with the event horizon at r = a. We shall discuss the reason for this coincidence later. It is a black hole, since it has an event horizon; however, it is very different from ordinary black holes, which possess spacetime singularities. From (3) one sees that the black hole (1) is regular everywhere in the whole spacetime. The frame-dragging angular velocity can be read off from the metric. Based on detailed discussions of rotating metrics, one finds that the frame-dragging velocity can be treated as the angular velocity of the spacetime itself [12]. A simple explanation is that a co-rotating observer will sense a time-orthogonal spacetime like a static one (without a space-time cross term). At the horizon r = a, the frame-dragging velocity vanishes; in this sense, the horizon is static. Thus the infinite red-shift surface coincides with the event horizon. That is different from the case of Kerr, for which the event horizon is rotating, so that the infinite red-shift surface is separated from the event horizon. The other important property of the frame-dragging angular velocity is that it vanishes on the equatorial plane θ = π/2 and is antisymmetric with respect to the equatorial plane, i.e., Ω(θ) = −Ω(π − θ). Fixing θ, one can expand Ω for large r. Hence Ω has at least one maximum for r ∈ [a, ∞); in this interval only one point satisfies dΩ/dr = 0 and d²Ω/dr² < 0.
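For reference, the statements above rest on standard textbook relations for stationary axisymmetric metrics; these are general formulas, not the specific display equations of this Letter (which were not reproduced here):

\[
\Omega \;=\; -\,\frac{g_{t\varphi}}{g_{\varphi\varphi}}\,,
\qquad
g_{tt}\big|_{\text{infinite red-shift surface}} = 0\,,
\qquad
g^{\mu\nu}\,\partial_\mu f\,\partial_\nu f\big|_{\text{horizon}} = 0\,,
\]

where Ω is the frame-dragging angular velocity of locally non-rotating observers and f = constant defines the candidate null hypersurface.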
From the previous discussions, we obtain the picture of this spacetime. It is a twisted spacetime: the spacetime rotates in opposite directions above and below the equatorial plane, while the equatorial plane itself does not rotate. A sketch of this spacetime is shown in Fig. 1. If one inserts an elastic bar of finite thickness along θ = 0, it will be twisted into something like a screw steel bar. Next we study the mass and angular momentum of this spacetime. First we show that the spacetime (1) is asymptotically flat. For the sake of canceling the frame-dragging effects, we introduce the co-rotating coordinates t̄ = t, r̄ = r, θ̄ = θ, and φ̄ = φ − Ωt.
With this new coordinate system, the metric (1) reads

ds² = (g₀₀ − g₀₃²/g₃₃) dt̄² + g₁₁ dr̄² + g₂₂ dθ̄² + g₃₃ (dφ̄ + t̄ dΩ)²,

where the g_μν are the components of the metric (1), here treated as functions of (t̄, r̄, θ̄, φ̄). One can show that (t̄, r̄, θ̄, φ̄) are (quasi-)spherical coordinates in the ADM formalism, with the metric approaching η_μν, the components of the Minkowski metric in spherical coordinates. According to the standard formulae for the ADM mass and ADM angular momentum, we obtain that both vanish for this spacetime. The ADM formalism can only give the total mass and angular momentum; it cannot say anything about their spatial distributions. The distribution of the mass and angular momentum of a gravitational field is a very intricate problem. A local mass seems necessary in some cases, for example the propagation of gravitational waves from the source to a local observer. However, an energy-stress density is prohibited by the equivalence principle. Thus we turn to quasi-local forms as an inevitable concession. After decades of study we have several different definitions of quasi-local gravitational mass and angular momentum. Usually, they are not equal to each other on the same 2-surface. Some of them are significant not only in the study of the mass of gravity, but also in other topics including thermodynamics, exact solutions, etc. [13]. Here we do not check them one by one for the metric (1). As an example we study one of the earliest forms, the Komar integral, which is applicable in stationary spacetimes [14]. The Komar integral defines the gravitational mass by the imprints of gravitational effects on a 2-surface in a stationary spacetime. Originally, it was presented as the total mass for a space-like world sheet on an infinite 2-surface; in principle, it can also be used to define a quasi-local mass on a finite 2-surface (16), where a star denotes the Hodge dual operator and ξ is the lower-index form of the timelike Killing vector. For the metric (1), the Komar integral (16) on a 2-surface of radius r yields a Komar mass that vanishes when r → ∞. This result is consistent with the ADM mass (14). The physical interpretation is that the kinetic energy (due to rotation) exactly counteracts the potential energy. On a finite 2-surface, the Komar mass is always larger than zero; this means that the kinetic energy is always larger than the potential energy. The Komar angular momentum, evaluated on a 2-surface of radius r, vanishes as well. This result is also consistent with the ADM angular momentum; furthermore, it is a finer result, independent of r. The physical interpretation is that the angular momentum above the equatorial plane exactly counteracts the angular momentum below the plane. Of course, this embodies the antisymmetry of the frame-dragging angular velocity with respect to the equatorial plane. So we call this solution a rotating space without (total) mass and (total) angular momentum. There is only one parameter a in this solution, which characterizes the angular velocity of the spacetime. In the following text, we shall introduce one more parameter into the metric (1), which can be interpreted as the total mass on a spacelike world sheet. We then make a preliminary discussion of the motion of a test particle in the spacetime (1). First we consider a particle moving on the equatorial plane θ = π/2. For a timelike geodesic we obtain, in the large-r approximation, an orbit equation in which ρ = 1/r and L is the angular momentum of the test particle.
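For reference, the standard Komar expressions in the differential-form notation alluded to above read (quoted from the textbook literature, e.g. Wald, not reconstructed from this Letter's own display equations):

\[
M_{\mathrm{Komar}} \;=\; -\,\frac{1}{8\pi}\oint_{S} \star\,\mathrm{d}\xi\,,
\qquad
J_{\mathrm{Komar}} \;=\; \frac{1}{16\pi}\oint_{S} \star\,\mathrm{d}\psi\,,
\]

where ξ and ψ are the one-forms of the timelike Killing vector ∂_t and the axial Killing vector ∂_φ, respectively, and S is a closed 2-surface.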
Note that L is different from the angular momentum J of the spacetime. Comparing with the equation of motion in the Schwarzschild spacetime, we find that the effective Schwarzschild mass is 2aρ, and that the leading correction term becomes 1/r³ rather than 1/r².
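For comparison, the Schwarzschild benchmark is the standard equatorial orbit equation, stated here for convenience (a textbook formula; u = 1/r, with E and L the conserved energy and angular momentum per unit mass):

\[
\left(\frac{\mathrm{d}u}{\mathrm{d}\varphi}\right)^{2}
= \frac{E^{2}-1}{L^{2}} + \frac{2M}{L^{2}}\,u - u^{2} + 2M\,u^{3}\,,
\]

in which the 2Mu/L² term carries the Newtonian attraction and 2Mu³ is the relativistic correction; substituting the effective mass and tracking which correction terms survive yields the comparison described above.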
For more realistic studies, we extend the solution (1) to a solution with one more parameter, M, given as metric (23). The metric (23) is a solution of the vacuum Einstein equation: its Ricci tensor vanishes. It is easy to see that the metric (23) reduces to the metric (1) when M = 0; one can also check that (23) becomes the Schwarzschild metric when a = 0. The infinite red-shift surface and the event horizon again coincide. The metric (23) is still regular all over the spacetime, since the Kretschmann scalar remains finite everywhere. The properties of the frame-dragging velocity are also similar to the massless case. In the dragged frame (11), one can demonstrate the asymptotic flatness of the spacetime (23), and then compute the ADM mass and angular momentum. The Komar mass on a 2-surface of radius r can be computed in the same way, and the Komar angular momentum reads

J_Komar = ∫ [a³(3r − M) + r²a(3M − r)] / [2(a² + r²)] sin θ cos θ dθ = 0,

since the integrand is proportional to sin θ cos θ, which integrates to zero over θ ∈ [0, π]. We see that M is the mass parameter, which reduces to the Schwarzschild mass when a = 0, and that the total angular momentum is zero. The physical interpretations of these results follow the previous case. Mimicking the previous massless case, we obtain the equation of the geodesics for a test particle moving on the equatorial plane in the large-r approximation (31). In this form, we recover all the terms of the Schwarzschild solution (the terms on the left-hand side of that equation); the correction terms relative to the Schwarzschild spacetime are cast to the right-hand side of (31). In summary, we obtain a vacuum stationary asymptotically flat axisymmetric twisted spacetime in Einstein gravity and study some of its preliminary properties. It describes a rotating spacetime with total angular momentum J = 0. The directions of the angular velocities of the spacetime are antisymmetric with respect to the equatorial plane, so its concrete image is a twisted spacetime. There is no true singularity in this spacetime. Its ADM mass and angular momentum, and its total Komar mass and angular momentum, are zero. We study the motion of a test particle on the equatorial plane and find its equation of motion. Finally, we present a massive version of this twisted solution. Usually, the Carter uniqueness theorem for black holes is stated as "a stationary asymptotically flat axisymmetric solution of Einstein gravity must be Kerr"; in this sense, we find a counterexample to this claim. Through a detailed analysis of the theorem, we find that our result does not in fact violate the Israel and Carter theorems. Finding the observational indications is future work. This Letter implicitly shows that Einstein gravity may still hide more amazing structures beyond our present studies.
The Efficacy of Neurofeedback Among Patients with Major Depressive Disorder: Preliminary Study
Introduction: Alpha asymmetry between the left and right frontal hemispheres is a potential biomarker for major depressive disorder (MDD). Neurofeedback (NFB) is a clinical intervention program for regulating brain activity and decreasing alpha asymmetry. The purpose of this study was to explore the efficacy of NFB among patients with MDD. Methods: Fourteen patients with MDD were randomly assigned to an NFB group that received neurofeedback training 1 hr weekly for 6 weeks or to a control group that was treated without training. A 5-min resting baseline electroencephalogram (EEG) was recorded at F3 (left) and F4 (right) before and after NFB, and the alpha power was analyzed as an asymmetry index (A1). Results: The A1 of the control group decreased from pre- to post-intervention, while the A1 of the NFB group increased from pre- to post-intervention. Anxiety and depression scores of the responder group decreased from pre- to post-intervention, while the scores of the non-responder group increased from pre- to post-intervention. Conclusion: Patients who responded to the NFB training showed a decrease in anxiety and depression scores compared to those who did not. This study indicated that NFB could improve left frontal hypoarousal or right frontal hyperarousal among patients with MDD.
Introduction
Major depressive disorder (MDD) is one of the common mental disorders that cause physical discomfort and emotional suffering, as well as interpersonal problems and impaired social and occupational functioning. Depression is usually treated with antidepressant medications; however, recent research suggests that for children and teenagers with major depression, 13 out of 14 antidepressant drugs do not work and can increase teenage suicide rates (Le Noury et al., 2015). Even in adults, selective serotonin reuptake inhibitors, such as Prozac, may not work any better than placebo; in addition, almost all studies that have demonstrated positive pharmaceutical effects have been directly or indirectly sponsored by pharmaceutical companies (Ebrahim, Bance, Athale, Malachowski, & Ioannidis, 2016).
To avoid antidepressant side effects and offer patients an additional MDD treatment option, neurofeedback (NFB) is a promising new approach.
Electroencephalographic (EEG) studies have found greater alpha power over the left than the right frontal regions among patients with MDD (Debener et al., 2000; Gotlib, Ranganath, & Rosenfeld, 1998). Davidson (1984) indicated that frontal alpha asymmetry with an active left frontal lobe is related to the behavioral approach system (BAS), leading to more positive emotions, motivation, and behavioral approach. On the other hand, an active right prefrontal lobe is related to the behavioral inhibition system (BIS), leading to more negative emotions, escape-related motivation, and behavioral withdrawal. Davidson (1998) calculated the alpha asymmetry score (A score) from the alpha power at the left frontal (L or F3) and right frontal (R or F4) lobes. Baehr, Rosenfeld, and Baehr proposed two equations for the A score: A1 = log(R) − log(L) and A2 = (R − L) / (R + L) (Baehr, Rosenfeld, & Baehr, 1997; Rosenfeld, Baehr, Baehr, Gotlib, & Ranganath, 1996). When the alpha power at the right frontal lobe is higher than that at the left frontal lobe, the A1 score is positive and is associated with more positive emotions (such as happiness and joy), motivation, and approach behavior from the BAS system. When the alpha power at the right frontal lobe is lower than that at the left frontal lobe, the A1 score is negative and is associated with more negative emotions (such as fear, disgust, and sadness), escape-related motivation, and behavioral withdrawal from the BIS system (Davidson, 1984). The prefrontal alpha asymmetry suggests brain lateralization (hemispheric laterality; Davidson, 1984, 1993, 1998).
Based on the concept that alpha asymmetry is a potential biomarker for depression, Baehr et al. (1997) developed a neurofeedback protocol, the Alpha Asymmetry (ALAY) protocol, to train patients to change their frontal alpha asymmetry as neurofeedback training for depression. The goals were to decrease the left frontal alpha power at F3, to improve positive emotion, and to increase the right frontal alpha power at F4, to decrease depression. Case reports found that the ALAY protocol could improve frontal alpha asymmetry and decrease depressive symptoms (Baehr et al., 1997; Dias & van Deusen, 2011; Rosenfeld et al., 1996). The treatment effectiveness was maintained through 1- to 5-year follow-ups (Baehr, Rosenfeld, & Baehr, 2001).
Several studies have shown that neurofeedback has long-term benefits, with significant improvement in frontal alpha asymmetry and depressive symptoms. Choi et al. (2011) conducted a randomized sham-controlled study comparing an NFB group (ALAY protocol, 1 hr twice a week for 10 weeks) with a psychotherapy group. The NFB group, as compared to the psychotherapy group, showed higher A1 scores, increased positive automatic thoughts, decreased negative automatic thoughts, and improved performance on executive function tests, such as semantic and phonological fluency. Peeters, Oehlen, Ronner, van Os, and Lousberg (2014) confirmed that neurofeedback with the ALAY protocol did decrease the depression score, but it did not change the A1 score after 10 sessions aimed at decreasing left frontal alpha power.
Previous studies have found that the number of sessions in NFB protocols ranged between 10 and 36 (Baehr et al., 1997, 2001; Choi et al., 2011; Peeters et al., 2014; Rosenfeld et al., 1996). The average number of treatment sessions required to complete an NFB protocol is 20-22 (Hammond, 2005). After 3-6 sessions of 30 min each, patients may feel a difference between pre- and post-intervention; after 10-12 sessions, patients feel a significant improvement. For sustained, long-term changes and clinical benefits of neurofeedback, 30 to 60 sessions may be required, depending on compliance and motivation (Linden, Habib, & Radojevic, 1996). Hammond (2005) indicated that patients might feel the differences after 3-6 sessions of neurofeedback; however, this has not been systematically explored. The purpose of this study was to examine: (1) a short-term six-session ALAY protocol of NFB among patients with MDD, and (2) the efficacy of the ALAY protocol in increasing the alpha asymmetry score and decreasing depressive symptoms among patients with MDD in Taiwan.
Participants
Fourteen patients with MDD were referred by psychiatrists, based on the criteria of the Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5), at an outpatient clinic of Kaohsiung Medical University Hospital (American Psychiatric Association, 2013). Patients with MDD who had serious physical or mental comorbidities (e.g., cancer, heart disease, kidney disease, Alzheimer's disease, Parkinson's disease, schizophrenia, bipolar disorder, substance abuse, etc.) were excluded from this study. Institutional review board approval was obtained from the ethics committee of the Kaohsiung Medical University Hospital (KMUH-IRB-20110063), and written informed consent was obtained from each participant before the experiment. All participants were randomly assigned to the NFB group or the control group.
There was no significant difference between the two groups in age, sex, Beck Anxiety Inventory (BAI), Beck Depression Inventory-II (BDI-II), or duration of disorder. Participants in both groups received medications prescribed by psychiatrists; however, there was no group difference in the categories of medications (Table 1). No participant in either group received psychotherapy or other non-pharmacological treatment during the study, so the equal group design was confirmed. Participants in the NFB group received neurofeedback for 1 hr per week for 6 weeks, based on the ALAY protocol (Baehr et al., 1997). The ALAY protocol included down-training of the alpha power (8-12 Hz) at F3 and up-training of the alpha power (8-12 Hz) at F4, and the treatment goal was to increase the A1 score (Baehr et al., 1997; Baehr, Rosenfeld, Baehr, & Earnest, 1998). Participants in the control group received the pharmacological treatment only.
Psychological questionnaires
All participants completed self-report questionnaires covering demographic characteristics (such as age, sex, and years of disorder), the BAI, and the BDI-II at pre- and post-intervention. The BAI and BDI-II are self-report questionnaires that measure the symptoms of anxiety and depression, respectively, over a defined recall period.
EEG equipment and measurement
The BrainAvatar Version 4.0 (BrainMaster Technologies, Inc., Bedford, OH) EEG system was used for EEG data collection, with a 19-channel EEG cap placed on the participant's scalp and linked ears as the reference for collecting the raw EEG signals. All participants were measured for a 5-min baseline with eyes closed at pre- and post-NFB intervention. The BioGraph Infiniti Version 6.1.1 (Thought Technology Ltd., Montreal, QC, Canada) was used with a bipolar channel at F3 and F4 based on the International 10-20 system; Cz was the reference, and the left earlobe served as ground (Baehr et al., 1997).
The electrode impedances were kept below 5 kΩ; the band-pass was 1-30 Hz, the notch filter was set at 60 Hz, and the sampling rate was 256 Hz.
Data reduction and statistical analysis
The raw EEG signal was analyzed using MATLAB R2008a (The MathWorks, Natick, MA), and the EEG power spectrum was transformed to alpha power (8-12 Hz) with the EEGLAB software (Delorme & Makeig, 2004). The frontal alpha asymmetry score (A1 index) was computed with natural logarithms (ln) by subtracting the log-transformed left alpha power from the log-transformed right alpha power, A1 = ln(R) − ln(L) (Baehr et al., 1997).
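A minimal sketch of this reduction step in Python, using SciPy's Welch periodogram in place of the MATLAB/EEGLAB pipeline the authors used (the sampling rate and band edges follow the text; the synthetic signals and segment length are illustrative assumptions):

import numpy as np
from scipy.signal import welch

FS = 256               # sampling rate (Hz), as reported in the text
ALPHA = (8.0, 12.0)    # alpha band (Hz)

def band_power(signal, fs=FS, band=ALPHA):
    """Alpha-band power from a Welch power spectral density estimate."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

def a1_index(f3, f4):
    """Frontal alpha asymmetry: A1 = ln(right) - ln(left)."""
    return np.log(band_power(f4)) - np.log(band_power(f3))

# Illustrative use with synthetic 5-min baselines at F3 and F4.
rng = np.random.default_rng(0)
f3 = rng.standard_normal(FS * 300)
f4 = rng.standard_normal(FS * 300)
print(f"A1 = {a1_index(f3, f4):.3f}")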
The t-tests and χ² test were used to examine the equal group design on the demographic characteristics. Paired t-tests were used to examine the differences in the A1 score, BAI, and BDI-II at pre- and post-intervention for the NFB group and the control group. In addition, the change-score of A1 = (A1 at post-intervention − A1 at pre-intervention) was used to separate participants in the NFB group and the control group into a responder group and a non-responder group: participants in the responder group showed an increased A1 score, whereas the non-responders showed no increase in the A1 score. This study also examined the differences in the A1 score, BAI, and BDI-II at pre- and post-intervention between the responder group and the non-responder group.
Results
The treatment effectiveness of neurofeedback on the A1 score

The A1 score increased slightly in the NFB group from pre-intervention (M = 0.11, SD = 0.13) to post-intervention (M = 0.12, SD = 0.13), while the A1 score decreased slightly in the control group from pre-intervention (M = 0.04, SD = 0.07) to post-intervention (M = 0.02, SD = 0.07; Table 2). Although an overall increase in alpha power at F3 and F4 was observed from pre- to post-intervention in both the NFB and control groups, there was no significant difference between the two groups in the A1 score.

The responders showed an increased A1 score as compared to the non-responders in the NFB group
This study used the change-score of A1 to separate participants in the NFB group into a responder group and a non-responder group. Although the A1 score did not differ between the two groups, four participants (57.14%) showed an increased A1 score from pre- to post-intervention, whereas three of them (42.86%) showed a decreased A1 score after the neurofeedback training (Figure 1).

The responders showed a decrease in anxiety and depression scores as compared to the non-responders in the NFB group
In addition, we applied the same responder and non-responder analysis to the control group. In the control group, the anxiety and depression scores decreased slightly from pre- to post-intervention in the responder group (BAI: 15.00 to 10.75; BDI-II: 19.00 to 13.75), whereas in the non-responder group the anxiety score increased and the depression score did not change from pre- to post-intervention (BAI: 17.33 to 19.33; BDI-II: 28.00 to 28.00) (Figure 2).
Discussion
Preliminary results show that patients with MDD in the NFB group showed an increased A1 score after neurofeedback training. The responder group responded to the neurofeedback with more improvement in anxiety and depression than the non-responder group. By separating the responders and non-responders, the study points out that it is not the number of training sessions that is important, but whether skill mastery has been achieved; clinical conclusions should only be drawn from those participants who mastered the skill. These results suggest either that six training sessions were insufficient for the non-responders, who needed more sessions to develop mastery, or that the training protocol was inappropriate for them and they needed a different type of intervention (Shellenberger & Green, 1986).
This study indicated that neurofeedback training increases the A1 score by improving frontal alpha asymmetry in two ways: (1) increased right frontal alpha power means decreased right frontal activity, and (2) decreased left frontal alpha power means increased left frontal activity. This result is consistent with previous studies that showed decreased depressive symptoms after neurofeedback training (Baehr et al., 1997; Baehr et al., 2001; Choi et al., 2011; Dias et al., 2011; Peeters et al., 2014; Rosenfeld, 2000). We found that patients with MDD whose A1 score increased after neurofeedback training (responders) also showed decreased anxiety and depression; on the other hand, patients with MDD whose A1 score decreased after neurofeedback training (non-responders) showed increased anxiety and depression.
However, this study found that not all patients with MDD showed an increased A1 score after neurofeedback training. Even though the ALAY protocol of neurofeedback was followed to address frontal alpha asymmetry, not all patients with MDD had an A1 score lower than zero; some had an A1 score higher than zero. Overall, this study found that an increased A1 score was related to decreased symptoms of anxiety and depression (r = −0.27). However, this result may be due to a placebo effect and needs to be reassessed under the same experimental conditions.
Depressive disorder is a heterogeneous disorder, the pathophysiological mechanisms of which are still controversial. Some depressive disorders are comorbid with anxiety disorder or anxiety symptoms (Bruder et al., 1997), with melancholia or non-melancholia (Quinn, Rennie, Harris, & Kemp, 2014), or with psychomotor retardation (Cantisani et al., 2015). These heterogeneous characteristics may be reflected in different EEG patterns in different brain regions. For example, patients with MDD and comorbid anxiety had higher activity in the right parietal-temporal lobe than patients with MDD without anxiety (Bruder et al., 1997).
Several limitations should be noted. First, only 14 patients with MDD were included in the present study; the small sample size may have reduced the statistical power and account for the lack of significant findings.
Second, the neurofeedback protocols for patients with MDD required at least 10-36 sessions in previous studies (Baehr et al., 1997; Choi et al., 2011; Peeters et al., 2014; Rosenfeld et al., 1996); however, our participants in the NFB group received only 1 hr per week for 6 weeks of neurofeedback training, which may limit the efficacy of the neurofeedback.
More practice may improve neuroplasticity (Malkowicz & Martinez, 2009). Third, the pathophysiological mechanism of alpha asymmetry has not been confirmed for all patients with MDD, even though alpha asymmetry scores are higher among patients with MDD than in healthy controls (Debener et al., 2000; Kemp et al., 2010). Fourth, because of the heterogeneity of depressive disorders, some patients with MDD in our study had comorbid mild anxiety symptoms, with a mean BAI score of 17.86 in the NFB group and 16.00 in the control group. Previous studies found hyperactivity in the right posterior regions in patients with comorbid MDD and anxiety symptoms (Bruder et al., 1997; Heller, Etienne, & Miller, 1995), and the pathophysiological mechanism underlying EEG patterns may differ between MDD with anxiety and MDD without anxiety (Bruder et al., 1997).
Therefore, an individualized neurofeedback protocol should be set up for these patients. Fifth, depressive symptoms in some patients were caused by stressful life events or adjustment problems; some patients with MDD improved significantly after a few sessions of neurofeedback, whereas others showed increased severity of depression during the neurofeedback training. Hammond (2005) indicated that not all individuals with frontal alpha asymmetry will be depressed, and some persons can experience negative life events and become depressed in the absence of frontal alpha asymmetry.
Some patients did not improve significantly because of other stressful life events during the neurofeedback training, such as a loss in the family (Hammond, 2005); therefore, stressful life events may be a confounding factor in the outcome evaluation. Sixth, some studies have examined the learning curve across and within neurofeedback sessions (Baehr et al., 1997; Zuberer, Brandeis, & Drechsler, 2015) to confirm a linear trend of EEG changes. However, this study did not measure the frontal alpha asymmetry scores at each session, and hence the trend of treatment effectiveness is still unknown.
In conclusion, there was partial support for the efficacy of neurofeedback among patients with MDD, especially for the responders. Patients in the responder group showed decreased symptoms of anxiety and depression, as well as improved frontal alpha asymmetry. However, some patients did not improve significantly and probably needed more training sessions. Finally, the sample size should be increased in future studies.
Figure 1. The A1 score at pre- and post-intervention for the responders and non-responders in the NFB group. Note: change-score of A1 = (A1 at post-intervention) − (A1 at pre-intervention).

Figure 2. The BAI and BDI-II scores at pre- and post-intervention for the responders and non-responders in the NFB and control groups.

Table 1. The demographic characteristics of all participants.

Table 2. The changes in alpha power from pre- to post-intervention in the NFB group and the control group.
Different mode of action between norepinephrine and phenylephrine on prostaglandin synthesis by dog renal inner medullary slices.
Dog renal inner medullary slices synthesized and released prostaglandin (PG) E2, PGF2α, PGI2 (measured as 6-keto-PGF1α) and thromboxane (TX) A2 (measured as TXB2). When the slices were incubated in the presence of norepinephrine, the synthesis of these arachidonic acid metabolites was stimulated about 2-fold. The norepinephrine effect could be antagonized by the addition of an α-adrenoceptor blocking agent, phenoxybenzamine, but not by a β-adrenoceptor blocking drug, propranolol. Phenoxybenzamine, at concentrations that block norepinephrine stimulation of prostaglandin biosynthesis, did not suppress the increase in prostaglandins synthesized from exogenous arachidonic acid. By contrast, phenylephrine caused only PGI2 production, without producing the other prostaglandins and thromboxane. This phenylephrine effect could not be antagonized by either α- or β-adrenoceptor blocking agents, but it was abolished by the specific PGI2 synthetase inhibitors 15-hydroperoxy arachidonic acid and tranylcypromine. These results suggest that norepinephrine-induced prostaglandin synthesis is mediated via an α-adrenergic receptor mechanism, whereas phenylephrine stimulation occurs primarily at the step which follows the cyclooxygenase reaction in the metabolism of arachidonic acid.
Prostaglandins (PGs) are synthesized in many different tissues, including the kidney, by a cascade of enzymatic events (1). They play an important role in renal functions such as renal blood flow, natriuresis and renin release (2-4). PG release from tissues or organs can be elicited by various stimuli (5-9), including catecholamines.
Sympathetic nerve stimulation releases large amounts of PGs into the effluent of isolated perfused organs such as the spleen (10, 11) and kidney (12). Norepinephrine and epinephrine also stimulate the release of PG from the isolated perfused rabbit kidney; phenoxybenzamine, but not propranolol, blocks this release (13). It is possible that these compounds stimulate PG release via receptor mechanisms, and it is also possible that PG-like substances are released as a result of hemodynamic changes induced by the catecholamines.
On the other hand, stimulation of PG synthesis by the catecholamines may simply reflect their properties as cofactors for the cyclo-oxygenation of arachidonic acid, as shown by studies using microsomes prepared from seminal vesicles (14). It is difficult to differentiate between these possibilities with in vivo systems. Thus, renal inner medulla, in which renal PG biosynthesis is most active (15), was used in vitro for studying norepinephrine-evoked PG synthesis.
In the present studies, we examined the relationship between the α-adrenergic receptor and PG synthesis in dog renal inner medullary slices, in comparison with phenylephrine, a typical α-adrenergic stimulant.
The arachidonic acid was kept at −20°C and diluted in ethanol just prior to the experiments; the appropriate amount was added to the medium. The level of ethanol used (0.5% w/v) had no effect on PG production. Solutions of the α- and β-adrenergic agonists or antagonists were freshly made in saline for each experiment. None of the reagents at the concentrations used interfered with the radioimmunoassays.
Preparation and incubation of slices: Thirty-six mongrel dogs of either sex weighing 6.4-14.7 kg were used in these experiments. Both kidneys were removed under anesthesia with sodium pentobarbital (30 mg/kg, i.v.) and perfused with ice-cold Krebs-Henseleit solution to flush out any remaining blood. After the capsule was removed, the kidney was bisected in the coronal plane. The inner medulla was separated by careful dissection, sliced with a Stadie-Riggs microtome and pooled for each experiment. Slices (30-40 mg) were first incubated for 40 min in 3 ml of Krebs-Henseleit buffer containing 1 mg/ml glucose. After washing with prewarmed medium, the slices were transferred to flasks containing 2 ml of fresh medium for a final 30 min. Both incubations were performed at 37°C with agitation at 100 cycles/min under 95% O2-5% CO2 gas, unless otherwise indicated in the Results.
Extraction and determination of PGs: Separation and quantitation of the PGs formed by renomedullary slices were performed as previously reported by Salmon (16). Briefly, an aliquot (1 ml) of medium containing tritiated PGs (approximately 2,000 cpm each) to monitor recovery during extraction and chromatography was first acidified to pH 3.5 and extracted with ethyl acetate (recovery approximately 85-95%). The extract was dried and redissolved in methanol. An aliquot of the solution was removed to determine the PGE2 content, while the rest was applied onto thin-layer plates (Merck, Kieselgel 60), which were subsequently developed in the organic phase of ethyl acetate-trimethylpentane-acetic acid-water (110:50:20:100, v/v/v/v). The fractions corresponding to PGs were extracted with ethyl acetate. The extracts were dried and reconstituted in 0.1 M phosphate buffer to determine the PG content and recovery. The final recoveries of PGF2α, TXB2 and 6-keto-PGF1α were 50-60%, 50-60% and 45-55%, respectively. PGE2, PGF2α, TXB2 and 6-keto-PGF1α were determined by specific radioimmunoassays. Twenty pg of PGE2 inhibited the binding of [3H]PGE2 to PGE2 antibody by 50%; PGF2α, TXB2, PGA2 and 6-keto-PGF1α cross-reacted less than 0.2%. One hundred and fifty pg of 6-keto-PGF1α inhibited the binding of [3H]6-keto-PGF1α to 6-keto-PGF1α antibody by 50%; other PGs and TXB2 cross-reacted less than 4%. Sixty pg of PGF2α inhibited the binding of [3H]PGF2α to PGF2α antibody by 50%; other PGs and TXB2 cross-reacted less than 5%. Two hundred and fifty pg of TXB2 inhibited the binding of [3H]TXB2 to TXB2 antibody by 50%; various PGs cross-reacted less than 1%. Each PG antibody was incubated with the corresponding [3H]PG (about 10,000 cpm) at 4°C overnight. Free and bound PGs were separated by the addition of dextran-coated charcoal. The tritiated PGs added for recovery did not affect the radioimmunoassays.
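The 50%-inhibition figures quoted above are read off a competitive-binding standard curve. A hedged Python sketch of that quantitation step follows; the standard-curve points are invented, and the four-parameter logistic is one common modern choice, not necessarily the fitting model used in the original work.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: fraction of tracer bound vs. competitor dose."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical PGE2 standard curve: dose (pg) vs. fraction of tracer bound.
dose = np.array([5, 10, 20, 40, 80, 160], dtype=float)
bound = np.array([0.85, 0.70, 0.50, 0.32, 0.18, 0.10])

params, _ = curve_fit(four_pl, dose, bound, p0=[1.0, 1.0, 20.0, 0.0])
a, b, c, d = params

def dose_from_binding(y):
    """Invert the fitted curve to read a sample's PG content (pg)."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

print(f"sample at B/B0 = 0.45 -> {dose_from_binding(0.45):.1f} pg")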
Reported values were corrected for recovery, and the intra-assay variance was less than 5%. Statistical differences were evaluated by Student's t-test for unpaired values. Differences with P < 0.05 were considered significant.
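The recovery correction itself is a simple rescaling by the recovered tracer fraction; a small Python sketch with hypothetical example values:

def recovery_corrected(measured_pg, tracer_added_cpm, tracer_recovered_cpm):
    """Scale a radioimmunoassay result by the extraction recovery fraction."""
    recovery = tracer_recovered_cpm / tracer_added_cpm
    return measured_pg / recovery

# e.g. 120 pg measured, 2,000 cpm tracer added, 1,100 cpm recovered (55%)
print(f"{recovery_corrected(120.0, 2000.0, 1100.0):.1f} pg")  # 218.2 pg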
Results
In our system, the metabolites of PGs, the 15-keto and 15-keto-13,14-dihydro derivatives, were not found when slices were incubated for 30 min with tritiated PGs (data not shown). The effects of norepinephrine and phenylephrine on medullary PG synthesis were examined (Fig. 1). In the presence of norepinephrine (1-100 μM) during the second incubation, a concentration-dependent increase in PG synthesis was observed. Phenylephrine increased 6-keto-PGF1α production dose-dependently, but it did not alter PGE2 production at the concentrations used.
The effects of these agents on the production of the other PGs and TXB2 were also examined (Table 1). Thin-layer chromatography of the arachidonic acid metabolites released into the medium, followed by radioimmunoassay for several PGs, showed that dog renomedullary slices synthesized PGE2, PGF2α, PGI2 (measured as 6-keto-PGF1α) and TXA2 (measured as TXB2). The addition of norepinephrine (10 μM) to the second incubation medium caused a significant increase in PGE2, PGF2α and 6-keto-PGF1α. TXB2 production was also enhanced, but not significantly. By contrast, phenylephrine at 10 μM enhanced only 6-keto-PGF1α production, without altering the production of the other PGs and TXB2. With indomethacin (10 μM), the basal levels of PG were markedly reduced, and the norepinephrine- or phenylephrine-induced stimulation was also inhibited.
Table 1 also shows that PGF2α is the predominant PG formed, whereas TXB2 is very low and often beyond the detection limit of the assay. However, the enzymatic pathway leading to the formation of PGF2α is poorly understood (17)(18)(19), and PGF2α has little effect on renal function (20,21). Further, the degree of stimulation or inhibition of the other arachidonic acid metabolites, except for 6-keto-PGF1α, was the same. Therefore, in all subsequent experiments, only the formation of PGE2 and 6-keto-PGF1α was determined. The effect of adrenoceptor blocking agents on norepinephrine-induced PG synthesis was examined (Fig. 2). Addition of phenoxybenzamine (1 μM) or propranolol (1 μM) did not change the basal levels of PGE2 and 6-keto-PGF1α (65.6±6.8 and 122.3±10.6 ng/g wet weight, respectively) released into the second incubation medium. Addition of 5 μM norepinephrine to the second incubation medium caused significant increases in PGE2 and 6-keto-PGF1α (108.0±8.7 and 192.6±17.3 ng/g wet weight, respectively). However, this stimulatory effect of norepinephrine on PG production was abolished by phenoxybenzamine, but not by propranolol. At 1 μM, arachidonic acid caused a significant increase in PGE2 and 6-keto-PGF1α (178.4±16.7 and 254.0±28.6 ng/g wet weight, respectively), but this arachidonic acid-induced stimulation was not blocked by either α- or β-adrenoceptor blocking agents (Fig. 3). The effect of adrenoceptor blocking agents on phenylephrine-stimulated 6-keto-PGF1α synthesis was also examined (Fig. 4). The amount of 6-keto-PGF1α
produced by phenylephrine at a concentration of 10 μM was comparable to that observed with 5 μM norepinephrine; therefore, this concentration (10 μM) of phenylephrine was used to estimate the effects of the adrenoceptor blocking agents. In the presence of phenylephrine (10 μM) during the second incubation, the synthesis of 6-keto-PGF1α was stimulated from 117.6±10.6 to 200.3±13.9 ng/g wet weight. However, the stimulatory effect of phenylephrine on 6-keto-PGF1α production could not be antagonized by phenoxybenzamine or propranolol. Figure 5 shows the effects of the PGI2 synthetase inhibitors on this phenylephrine-stimulated 6-keto-PGF1α production. Since PGs are not stored in tissues (23), the PGs released into the medium probably reflect the amounts of PGs synthesized in the tissue. It has been shown that stimulation of PG synthesis by catecholamines reflects their properties as cofactors for cyclooxygenation in the metabolism of arachidonic acid (14). However, the possibility that norepinephrine induces PG synthesis in dog renomedullary slices simply by its participation as a phenolic cofactor in the cyclooxygenation can be disregarded, because a much higher concentration (0.5-1 mM) of phenolic compounds seems to be required for this reaction (24,25) than for the stimulation of PG synthesis in our system (5 μM).
In these studies, dog renomedullary slices responded to norepinephrine by increasing, to the same extent, the synthesis of all arachidonic acid metabolites made by the slices, and the ability of norepinephrine to increase PG synthesis was abolished by an α-adrenoceptor blocking agent, but not by propranolol. These results suggest that the receptor-mediated stimulation occurs at the cyclooxygenation of arachidonic acid or at the deacylation of phospholipids. The results in Fig. 3, which show that an α-adrenoceptor blocking agent was unable to block the stimulation of PG synthesis induced by exogenous arachidonic acid, suggest that the α-adrenergic regulation does not occur primarily at the cyclooxygenase step in the metabolism of arachidonic acid. Activation of phospholipase A2 has been postulated as the common mechanism responsible for the action of several PG-releasing materials (26). Therefore, during the norepinephrine-stimulated increase in PG synthesis, deacylation of phospholipids occurs through α-adrenoceptor-mediated stimulation of a phospholipase pathway, increasing the availability of arachidonic acid for cyclooxygenase.
Our results are in good agreement with those of Levine and Moskowitz (27) in a cultured dog renal cell line (MDCK cells), indicating that norepinephrine-induced PG synthesis was inhibited by various α-adrenoceptor antagonists, but not by propranolol. However, they did not further characterize the α-adrenoceptor involved. According to their apparent affinities for antagonists and agonists, α-adrenoceptors can be further subdivided (28). Initially, α1-adrenoceptors were thought to be present mainly on effector cells, whereas α2-adrenoceptors seemed to be located exclusively on presynaptic nerve terminals. At present, however, both receptor types have been shown to exist also on the membranes of target cells (29)(30)(31). In our system, dog renal medullary slices respond to norepinephrine by increasing, to the same extent, the synthesis of all arachidonic acid metabolites made by the slices, whereas phenylephrine, a typical α1-adrenergic agonist, enhances only PGI2 synthesis, and its stimulatory effect could not be antagonized by α- or β-adrenoceptor blocking compounds. Therefore, these data also suggest that the α-adrenoceptor involved can be classified as the α2-adrenoceptor subtype.
The mechanism by which phenylephrine stimulates only PGI2 synthesis in dog renomedullary slices is not clear. However, the ability of phenylephrine to stimulate only PGI2 synthesis is inhibited by structurally dissimilar PGI2 synthetase inhibitors at concentrations that do not change the basal level of PG. Thus it seems likely, but not certain, that phenylephrine directly or indirectly activates PGI2 synthetase, but does not activate the phospholipase pathway or the cyclooxygenase enzyme. This stimulatory effect of phenylephrine may be interpreted in terms of the specificity of PGI2 synthetase. It has been shown that PGI2 synthetase is easily deactivated by oxygen-centered radicals formed as a result of the reductive breakdown of hydroperoxides or PGG2 (32)(33)(34), and also that phenol, methional (35), MK-447 (36) and sulindac (37) scavenge these radicals, thereby promoting the formation of PGH2 or PGI2. Thus phenylephrine may increase PGI2 synthesis by acting as a scavenger of oxygen-centered radicals.
Differential effects of norepinephrine and phenylephrine on PG synthesis have also been reported in rabbit splenic fibroblasts (38), human platelets (39) and isolated rabbit myometrium (40). For example, in cultured splenic cells, which have no ability to synthesize PGI2, norepinephrine and epinephrine are potent stimulators of PG synthesis, whereas phenylephrine and imidazolines are nearly ineffective or even inhibit this norepinephrine-induced effect. In isolated rabbit myometrium, epinephrine also stimulates the release of PGI2 and PGF2α from the tissue, whereas phenylephrine increases only PGI2 production. These findings support our results with phenylephrine. Possibly the diphenolic structure of the catecholamines is necessary for optimal activity in promoting the deacylation of phospholipids by phospholipase A2 via adrenergic receptors.
In conclusion, our results in vitro indicate that the stimulation of PG synthesis by norepinephrine is mediated through an α-adrenergic receptor. However, since phenylephrine increases only PGI2 and its stimulatory effect cannot be antagonized by adrenolytic compounds, this effect seems to occur at the step which follows the cyclooxygenase reaction in the metabolism of arachidonic acid.
Fatty Acid Indices and the Nutritional Properties of Karakul Sheep Meat
This study aimed to evaluate the fatty acid profile and health lipid indices of sheep meat (from 52 Karakul sheep from NE Romania). The effect of age at slaughter and the influence of muscle region were studied for nutritional parameters, especially the fatty acids from lipid fractions. Based on the fatty acid profiles and lipid contents, the sanogenic indices were determined for two sheep muscle groups. Thus, two different muscle regions from lamb and adult sheep were analysed from both genders, the Longissimus dorsi and Triceps brachii, to argue the advantages of each category and the rationalization, in terms of meat consumption, regarding their impact on human health. Sheep meat has many components with beneficial effects on human health. Apart from the fact that it is an important source of nutrients due to its high content of proteins, lipids, and minerals, it is also a product that can provide fundamental bioactive compounds for maintaining metabolic functions. The qualitative indices assessment revealed that lambs have meat with high PUFA content on Longissimus dorsi muscles (approx. 25% of total fatty acids), 0.68 for PUFA/SFA, with highest values for n-3 (approx. 8%) and n-6 (approx. 14%). Appropriate values can also be observed in Triceps brachii muscles from adult sheep. The sanogenic indices also presented good values for Longissimus dorsi from lambs and Triceps brachii from adult sheep (polyunsaturation index = 7.2–10.2; atherogenic index = 0.56–0.67; thrombogenic index = 0.78–0.96; hypocholesterolemic/hypercholesterolemic index = 2.4–2.7 (for Longissimus dorsi)).
Introduction
Over the past century, considerable progress has been made in improving human welfare worldwide [1]. However, the livestock production model of the past century, which aimed to ensure large quantities of high-value proteins and lipids to feed the population [2,3], is not sufficient for a future of accelerated world population growth and increased life expectancy, which could lead to a "food crisis" [4]. Additionally, the impact of climate change is already being felt, highlighting the need for solutions for the sustainability of food and agricultural systems. The global population is predicted to reach more than 9.7 billion by mid-century [5], implying the need to produce enough food to eradicate hunger by 2030 [1].
The domestic sheep (Ovis aries) is a very widespread ruminant species, a Pleistocene mammal, raised primarily for its meat, milk, wool, and hides [1]. Great interest has been shown in studies of the lipid composition of sheep meat, which is recognized as an important source of n-3 and n-6 fatty acids [6]. Meat lipids consist of triglycerides (forming the intramuscular fat) and phospholipids, which provide the two essential precursors and their long-chain derivatives, particularly those of the n-3 series. The lipid content of meat and its fatty acid composition vary according to the anatomical location of the muscle and the diet of the animal species. 18:2n-6 is the main PUFA present in the fat of meat (from 150 mg/100 g for veal to about 2800 mg/100 g for pork), with 18:3n-3 generally being in the minority, except in horse and rabbit meat. Meat also provides substantial amounts of long-chain n-3 PUFA (40 to 120 mg/100 g) [21]. The amount of fat in the human diet, and especially the amount of saturated fatty acids, has been considered a major risk factor for coronary heart disease [22]. Some PUFA (n-6, n-3, and conjugated linoleic acid) present in animal muscle tissue seem to play a favourable nutritional role in preventing or reducing some human diseases. Therefore, the PUFA/SFA and n-6/n-3 ratios are considered important indices for the nutritional evaluation of meat fat [4,23]. Carcass weight is an essential criterion for establishing the commercial category of lambs/sheep, and it is an important factor that influences both meat quality [23] and consumer preferences [24].
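The nutritional indices referred to above, together with the atherogenic and thrombogenic indices reported in the abstract, are computed directly from a fatty acid profile. The sketch below uses the widely cited Ulbricht and Southgate (1991) formulations; the example profile is invented, not data from this study.

# Lipid quality indices from a fatty acid profile (% of total fatty acids).
# Formulas follow the common Ulbricht & Southgate definitions; the profile
# below is a hypothetical example, not study data.
fa = {"C12:0": 0.3, "C14:0": 3.5, "C16:0": 22.0, "C18:0": 14.0,
      "MUFA": 38.0, "n6": 14.0, "n3": 8.0}

pufa = fa["n6"] + fa["n3"]
sfa = fa["C12:0"] + fa["C14:0"] + fa["C16:0"] + fa["C18:0"]

ai = (fa["C12:0"] + 4 * fa["C14:0"] + fa["C16:0"]) / (fa["MUFA"] + pufa)
ti = (fa["C14:0"] + fa["C16:0"] + fa["C18:0"]) / (
    0.5 * fa["MUFA"] + 0.5 * fa["n6"] + 3 * fa["n3"] + fa["n3"] / fa["n6"])

print(f"PUFA/SFA = {pufa / sfa:.2f}")
print(f"n6/n3    = {fa['n6'] / fa['n3']:.2f}")
print(f"AI = {ai:.2f}, TI = {ti:.2f}")

With this invented profile the sketch yields AI near 0.60 and TI near 0.78, values of the same order as those reported for the Karakul muscles in the abstract.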
Within the present context, our results contribute to a more detailed understanding of Karakul sheep meat (from 52 Karakul sheep from NE Romania) with regard to the implications of its fatty acid content for human health. This study also analysed the effects of muscle region and slaughter age on nutritional parameters, with a particular focus on the fatty acids of the lipid fractions.
Sample Collection
The animals used in this research were of the Karakul breed, raised in a traditional system on a farm in Horlesti, near Iasi city in NE Romania. In the favourable seasons (spring, summer, and autumn), the animals were fed on green fodder by grazing; in the cold season, they were reared in stalls and fed hay and cereals. The sheep were exploited for milk, meat, and pelt production.
To obtain the biological material, the animals were slaughtered at an EU-approved slaughterhouse in Bacau city with a special slaughter line for ovines (sheep); twelve sheep were slaughtered after feed withdrawal. The sheep were transported to the slaughterhouse in special animal-transport vehicles, the journey taking around 3 h. On arrival, the animals were left to rest for 8-12 h in a specially organised sheep-holding pen of 447.3 m². The unit has implemented and maintains a self-monitoring programme based on HACCP principles and is certified under a Food Safety Management System (ISO 22000) and a Quality Management System (ISO 9001).
The investigated biological material was collected from sheep grouped into two experimental categories according to age and weight at slaughter: Karakul lambs (n = 26, aged 10-12 weeks, carcass weights of 11-13 kg) and adult Karakul sheep (n = 26; 13 males and 13 females, aged 20-24 months, carcass weights of 20-30 kg).
After slaughter, the carcasses were kept at 10-14 °C for 6 h before refrigeration, then stored for 18 h at 4 °C. After 24 h of chilling, the carcasses were weighed and samples of the Longissimus dorsi and Triceps brachii were collected (three samples per animal and muscle). The samples were subsampled; analyses of chemical components were performed in triplicate and of fatty acids in duplicate.
Collection, sampling, identification, and preservation of samples are very important steps with major influences on the results obtained from laboratory analysis. The Longissimus dorsi and Triceps brachii muscles were excised for analysis.
The Longissimus dorsi (L.d.) muscle is the major extensor muscle of the spine, occupying the entire vertebrocostal trough. To sample it, a section was taken between the 10th and 11th thoracic vertebrae and between the 4th and 5th lumbar vertebrae.
The Triceps brachii (T.b.) muscle is the strongest muscle of the thoracic limb, occupying the entire space between the caudal edges of the shoulder blade, humerus, and olecranon.
To sample the Triceps brachii muscle, a section was made across the forelimb from the mid-humerus to the radius.
The samples were transported refrigerated (4 °C) to the laboratory and immediately analyzed in duplicate to determine the chemical composition and lipid content.
Determination of Chemical Components and Energy Values
The major chemical components (water, proteins, and lipids) were determined spectrophotometrically using an Omega Bruins Food-Check® near-infrared (NIR) spectrophotometer (Bruins Instruments® GmbH, Puchheim, Germany). For each measurement, 100 g of finely ground, homogenized tissue was placed on the glass plate and inserted into the spectrophotometer. Reflection values were calculated as the ratio of the values read by the analyzer to the reference values, forming the basis for calculating the analyzed parameters; the working spectral range was 730-1100 nm [25].
The total dry matter content was determined by drying about 5 g of fresh, ground, homogenized meat in a forced-air drying oven (Biobase®, Jinan, China) at 105 °C for 12 h.
Sample Preparation
To determine the fatty acids, 150 g of meat was sampled from the muscle tissue of interest and the connective tissue was removed (taking care not to remove intramuscular and infiltrated fat). A 10 g portion of each sample was vacuum-packed and frozen at −20 °C until analysis. Freeze-drying of the meat samples followed the standard steps: the material was frozen in the condenser of a Hetosicc CD4® freeze-dryer to −50 °C; primary lyophilization of the samples then took place for 5 to 6 h; finally, the desorption phase was carried out at 35 °C for 12 h. After freeze-drying, the samples were allowed to re-equilibrate (kept at atmospheric conditions for 12 h to reabsorb bound water) and then ground to a fine powder in a blender.
Determination of the Fatty Acid Profile of Lipids
Detection of the fatty acids was possible after derivatization of the lipid extract with acetyl chloride and methanol, which liberates the fatty acids from triglycerides as methyl esters detectable by flame ionization detection (FID).
Total Lipid Extraction
Total lipids were extracted with a chloroform-methanol mixture (2:1 v/v [27]) using an Ultra-Turrax® homogenizer. Transmethylation was performed according to ISO 2000 methods [28], and the fatty acid methyl esters were determined by gas chromatography with flame ionization detection (Carlo Erba Instruments, Carlo Erba Reagents, Milano, Italy).
Separation and Detection of Fatty Acid Methyl Esters by Gas Chromatography
Gas chromatographic conditions: Omega Wax 320 capillary column (30 m × 0.32 mm, 0.25 µm film thickness); split ratio 1:50; flame ionization detector and injector temperatures of 260 °C. The run time was 43 min, with the oven temperature programmed from 160 °C to 260 °C: 1 °C per minute for the first 26 min, 5 °C per minute for the next 16 min, and 1 °C per minute for the last minute. The carrier gas (helium) flow rate was 1.2 mL/min. Fatty acid methyl esters were identified by their retention times against an internal standard (IS); nonadecanoic acid (C19:0) methyl ester was added as the IS.
Experimental Design
This study aimed to evaluate the nutritional quality of sheep meat from animals traditionally raised in the NE area of Romania by characterizing the essential fatty acids of its lipid fraction. The nutritional evaluation covered dry matter, organic components (proteins and lipids), mineral substances, and energy value, serving to identify compositional differences between the muscle regions of the studied sheep as a function of age and muscle region. The study also updates the lipid nutritional indexing of sheep meat by comparing the Karakul breed at different growth stages (lambs vs. adult sheep) and, within each growth stage, comparing the two studied muscle regions (Longissimus dorsi vs. Triceps brachii). The essential fatty acid profile supports the rational use of lipid optimization in healthy human nutrition. The intention was thus to establish valid conclusions about the nutritional effects on human health of meat from different anatomical regions and age categories, providing a useful assessment of Karakul genotypes and of good practice.
Statistical Analysis
The data were processed in Microsoft Excel; the main statistical descriptors (mean values, variance, standard error of the mean) were computed, and variance was analyzed using single-factor ANOVA with Fisher's F test. The test compares the calculated F value with the tabulated critical values Fα for α = 0.05, 0.01, and 0.001. If the calculated value is below the critical value at the 0.05 level, the differences are not significant (n.s.). If it exceeds the 0.05 critical value but not the 0.01 value, the differences are significant (*). If it exceeds the 0.01 critical value but not the 0.001 value, the differences are distinctly significant (**). If it exceeds the 0.001 critical value, the differences are highly significant (***).
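As a minimal sketch of the same testing logic in Python (the authors used Excel; scipy and the placeholder fatty acid values below are assumptions for illustration only):

```python
# One-way ANOVA for two muscle groups, mapping p-values to the
# significance notation used in this paper (n.s., *, **, ***).
from scipy import stats

def significance_label(p: float) -> str:
    """Map a p-value to the paper's notation."""
    if p < 0.001:
        return "***"  # highly significant
    if p < 0.01:
        return "**"   # distinctly significant
    if p < 0.05:
        return "*"    # significant
    return "n.s."     # not significant

# Hypothetical PUFA measurements (% of total fatty acids) per muscle
longissimus = [24.8, 25.3, 25.6, 24.9, 25.1]
triceps = [17.5, 18.1, 17.9, 18.3, 17.6]

f_stat, p_value = stats.f_oneway(longissimus, triceps)
print(f"F = {f_stat:.2f}, p = {p_value:.4g} ({significance_label(p_value)})")
```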
Chemical Composition of Sheep Meat
Meat is a principal energy and structural ("plastic") material for the body and one of the most representative sources of nitrogenous matter of high biological value. Lipids and carbohydrates are the main sources of energy, while proteins are the main structural material. As meat is a biological product, intense biochemical processes take place in it from collection to individual consumption [29].
Nutritional value is reflected in the chemical composition, particularly in the proportion and quality of the constituent chemical elements that play a determining role in human nutrition (proteins, lipids, and minerals). For meat quality, the main factor is the muscle itself and the macro- and micromolecular interdependence between the biochemical and physicochemical constituents of one or more muscles. Variations in chemical composition can be attributed to the combined effects of pre-slaughter factors such as gender, breed, age, and diet, and of post-mortem factors such as chilling and storage conditions.
Fatty Acid Composition in Sheep Meat
Fats are the main compact energy source in the body and provide important nutrients such as essential fatty acids, which act as structural elements in cell membranes, give palatability and flavour to meat, and, in appropriate proportions, are essential components of any balanced diet. From a compositional point of view, a pragmatic analysis of these constituents is therefore necessary to assess nutritional benefits.
In the lipids of the muscle samples from the sheep carcasses, the saturated fatty acid composition showed elevated values of C18:0 and C16:0 in all muscle groups analyzed. The mean values for C18:0 had a lower bound of 3.6 mg/g dry matter (Triceps brachii, lambs). For C16:0, the mean values spanned a difference of 14.4 mg/g dry matter (3.69 mg/g in the Triceps brachii of lambs vs. 18.1 mg/g in the Triceps brachii of adult sheep) (Table 2). Overall, for the saturated fatty acids, the upper extremes of the means in the Longissimus dorsi were recorded in samples from adult sheep carcasses, except for C12:0, C14:0, C15:0, C20:0, C21:0, and C22:0. In the Triceps brachii, the upper extremes of the means were likewise recorded mostly in samples from adult sheep, except for C8:0 (Table 2).
The results obtained for MUFA fall within the range of 36.6-40.4 mg/g, consistent with other research in the literature [24,30].
The PUFA content of the sheep meat varied widely. The most abundant polyunsaturated fraction was C18:2n-6, whose mean values ranged from 3.87 mg/g TDM (Longissimus dorsi, adults) to 8.58 mg/g TDM (Triceps brachii, adults), followed by C18:3n-3. The remaining PUFA recorded values below 1 mg/g TDM, except for C20:4n-6 and C20:5n-3 in the Triceps brachii of adults (3.1 and 1.21 mg/g TDM, respectively). Both the Longissimus dorsi and the Triceps brachii collected from adult sheep showed higher mean values for the major fatty acids.
Most SFA, MUFA, and PUFA can be synthesized by the body; the exceptions are the n-3 and n-6 fatty acids, which are essential for proper body function and must be supplied through the daily diet. The two types of PUFA incorporated in muscle tissue play a favourable role in preventing or reducing human diseases [4].
The n-6 PUFA content was higher than that of n-3 PUFA; n-3 PUFA can modulate inflammation by competing with n-6 metabolites for incorporation as phospholipids into the membranes of immune system cells. The Triceps brachii of lambs showed lower n-6 and n-3 PUFA values than that of adult sheep; the exception was the n-3 PUFA content of the Longissimus dorsi, which was higher in lambs (8.07% vs. 7.07% in adult sheep).
The results indicate that slaughtering sheep at different ages yields variable values of SFA, MUFA, PUFA, and the n-6/n-3 ratio in the Longissimus dorsi and Triceps brachii muscles. With advancing age, the SFA content of the Longissimus dorsi remained relatively constant (39.66% in lambs vs. 39.5% in adults), with a decrease in PUFA content (25.14% in lambs vs. 24.18% in adults) (Table 2). During the development of intramuscular fat deposits, SFA and MUFA increased more markedly than PUFA, with repercussions for the decline in PUFA content and, therefore, in the PUFA/SFA ratio. These differences are influenced by the genetic and dietary factors of the animals, as reflected in the wide range of values obtained for the fatty acid analysis of the intramuscular lipids.

The SFA content ranged from 39.5% of total fatty acids (TFA) (Longissimus dorsi, adult sheep) to 45.57% TFA (Triceps brachii, lambs). The MUFA content ranged from 35.18% (Longissimus dorsi, lambs) to 36.59% (Triceps brachii, lambs). Lipid quality was assessed primarily by the percentage of PUFA in TFA, which ranged from 17.82% (Triceps brachii) to 25.14% (Longissimus dorsi), both extremes corresponding to lamb samples (Table 2). The PUFA/SFA and n-6/n-3 ratios are two important parameters in the evaluation of meat lipids; in this sheep meat, the PUFA/SFA ratio varied between 0.4 and 0.68 and the n-6/n-3 ratio between 1.91 and 3.43 (Table 2). The meat described here has a higher mean PUFA content than meat from sheep fed a PUFA-rich diet reported elsewhere. Despite the perception that animal lipid profiles are dominated by SFA, approximately 60% of the TFA in the studied sheep meat was unsaturated [24,30]. The slightly lower SFA range in the Longissimus dorsi (39.5-39.67%) is explained by the green-fodder diet: the abundance of linolenic acid (C18:3n-3) in the green mass decreases the percentage of SFA and increases that of PUFA, the sheep storing this acid in significant quantities in their tissues.
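For concreteness, the two summary ratios used above can be computed directly from a fatty acid profile. The sketch below is illustrative only; the composition values are hypothetical, not measurements from this study:

```python
# Fatty acid classes expressed as % of total fatty acids (hypothetical values).
SFA = {"C14:0": 3.2, "C16:0": 22.5, "C18:0": 13.9}
MUFA = {"C16:1": 1.8, "C18:1n-9": 33.6}
N6 = {"C18:2n-6": 12.1, "C20:4n-6": 1.9}
N3 = {"C18:3n-3": 6.4, "C20:5n-3": 0.9, "C22:6n-3": 0.7}

sfa, mufa = sum(SFA.values()), sum(MUFA.values())
n6, n3 = sum(N6.values()), sum(N3.values())
pufa = n6 + n3

print(f"PUFA/SFA = {pufa / sfa:.2f}")  # nutritionally desirable above ~0.45
print(f"n-6/n-3  = {n6 / n3:.2f}")     # lower values are considered healthier
```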
Regarding the essential fatty acid content of the sheep meat, essential PUFA values were in the range 10.81-14.97%, with significant variation within that range. The n-3 PUFA content of the Longissimus dorsi (in both categories of sheep studied) was higher (7.07-8.07%) than the values quoted in the literature (4.25-6.24%), and that of the Triceps brachii from adult sheep carcasses (5.77%) fell within the quoted range; only the Triceps brachii collected from lamb carcasses was slightly lower (3.34%). Sheep meat has a high percentage of n-3 PUFA, so its consumption is recommended for its positive impact on cardiovascular disease [14]. Fatty acids with health benefits, such as vaccenic acid (18:1n-7) [31] and long-chain n-3 PUFA, are present in the Karakul breed; in other breeds studied by [32], vaccenic acid was not detected.
The PUFA/SFA ratio falls within the range of 0.12-0.54 reported in the literature [33-37]. Considering the recommended minimum PUFA/SFA ratio of 0.45 [33], the sheep studied here provide meat that meets the requirements of a complete diet. The values obtained for the n-6/n-3 ratio, in the range 1.91-2.5, are in accord with those (1.03-2.49) obtained by other researchers [34-37]. The exception is the Triceps brachii collected from lamb carcasses, with a ratio of 3.43, whose consumption may be associated with an increased risk of atherosclerosis and coronary heart disease.
The recommended value for the n-6/n-3 ratio is 2 [38]. Values of 1.91-2.55 were recorded for the Longissimus dorsi (lambs and adult sheep), in agreement with [34,39], who analysed meat from sheep fed on green fodder. This advantage of meat from green-fed sheep is very important, as it compensates for the low PUFA/SFA ratio. Ruminant meat and fatty fish are the only significant dietary sources of PUFA with 20 and 22 carbon atoms in their structure [34,39].
Qualitative, Nutritional, Metabolic, and Energy Indexes
Based on the fatty acid values, four series of indexes (qualitative, nutritional, metabolic, and energy) were calculated for the two muscle regions (Longissimus dorsi and Triceps brachii) from lambs and adult sheep.
The balance of n-6 and n-3 essential fatty acids is important in the prevention and treatment of coronary artery disease, hypertension, diabetes, arthritis, osteoporosis, autoimmune disorders, cancer, and mental illness, together with the mechanisms involved [40]. Furthermore, food with a lower n-6/n-3 ratio is more desirable for reducing the risk of the diseases mentioned above [14]. The parent n-6 fatty acid is linoleic acid (18:2n-6, LA) and the parent n-3 fatty acid is α-linolenic acid (18:3n-3, ALA). Linoleic acid can be converted into arachidonic acid, and α-linolenic acid into EPA and DHA; the two families compete for the desaturation enzymes, with the n-3 substrates preferred over the n-6 for Δ-4 and Δ-6 desaturation [41].
The n-6/n-3 index shows that Triceps brachii meat from lambs and adult sheep (3.25 and 2.89, respectively) has a PUFA ratio closer to equilibrium, whereas the Longissimus dorsi has a much lower ratio (1.78 in lambs and 1.95 in adult sheep) (Table 3). Following the same lines of variation (age at slaughter and muscle region) and analyzing the linoleic to α-linolenic acid index, LA/ALA highlights that Triceps brachii meat from lambs is the more interesting in a strictly qualitative context. Linoleic and α-linolenic acid are essential PUFA with distinct effects on human health, and they cannot be synthesized in the human body. This index therefore represents the first step in the formation of long-chain PUFA, because the two acids compete for the same metabolic pathway in the desaturase and elongase reactions that permit LC-PUFA synthesis [14]. These two indispensable PUFA precursors, linoleic acid (LA, 18:2n-6) and α-linolenic acid (ALA, 18:3n-3), have functional properties linked in particular to their long-chain PUFA derivatives, found in all cellular membranes (a structural role), and to the oxygenated molecules produced by their metabolism; these linear or cyclic molecules exert multiple roles as bioactive lipid mediators (eicosanoids, hydroxylated fatty acids, and docosanoids).
The PUFA bioconversion pathway matters most when the diet is chronically unbalanced, for example when α-linolenic acid intake is too low and linoleic acid intake too high, limiting the synthesis of DHA (22:6n-3). Metabolic competition then increases arachidonic acid (AA, 20:4n-6) and can produce a state of subdeficiency, reducing tissue DHA concentrations and causing the abnormal appearance of a substitute fatty acid of the n-6 series, docosapentaenoic acid (22:5n-6). Linoleic, arachidonic, and docosahexaenoic acids are fundamental constituents of membrane phospholipids. Together with cholesterol, they modulate the activity of a large number of intrinsic enzymes, transporters, receptors, and ion channels involved in inter- and intracellular signalling within cell membranes [42], possibly by facilitating the conformational changes necessary for the activity of intrinsic proteins [43]. As a nutritional recommendation for consumers, sound knowledge of the minimum requirements of linoleic and linolenic acid, and of the main long-chain PUFA in staple foods, is important.
Several studies highlight the cardioprotective effects of EPA and DHA, long-chain n-3 PUFA that lower plasma triglyceride levels and reduce levels of both proinflammatory cytokines and chemokines [41]. The EPA + DHA sum was highest in Triceps brachii meat from adult sheep, with very significant differences (p < 0.001) compared with the muscle regions of the lambs (Table 3). Both muscle regions sampled from adult sheep (Longissimus dorsi and Triceps brachii) are rich in PUFA. Triceps brachii from adult sheep can thus be consumed to support human health, with a positive impact on reducing cardiovascular risk, limiting thrombosis, inflammation, and hypertension, and improving reproductive function [13]. Beyond nutrition, a sedentary lifestyle and exposure to noxious substances interact with genetically controlled biochemical processes, leading to chronic diseases [40]. Triceps brachii meat represents a rich source of essential fatty acids, such as EPA, DPA, and DHA, which serve important cellular functions; they are a necessary component of human nutrition because the body lacks the biochemical pathways to produce these molecules on its own, and their precursors LA and ALA in particular must be introduced through the diet.
The unsaturation index (UI) indicates the degree of unsaturation of the fatty acids and provides useful information on the shelf life of meat [14]. It underlines the importance of establishing oxidative stability in human food and of defining oxidative-protection strategies: oxidative stress is associated with the formation of lipid peroxides, processes that contribute to ageing and to diseases such as atherosclerosis [44,45]. In the current research, Longissimus dorsi meat from adult sheep showed a significantly lower UI (p < 0.001) than Triceps brachii from adult sheep, indicating a lower risk of fatty acid autoxidation.

To assess the nutritional value index (NVI) and consumer health, the atherogenicity (IA) and thrombogenicity (IT) indexes, the hypocholesterolemic/hypercholesterolemic (HH) ratio, the health-promoting index, and flesh lipid quality were determined. Previous studies reported that total essential fatty acids and total desirable fatty acids (18:0 + MUFA + PUFA) play an important role in the biological value of meat [46]. IA and IT values below 1.0 and 0.5, respectively, are desirable [47] to assure a protective potential for coronary artery health. In this study, meat from lambs presented higher IA values than meat from adult sheep; nonetheless, the Longissimus dorsi and Triceps brachii of both lambs and adult sheep showed IA values below the recommended threshold, which is desirable for human health. The IT values varied between 0.78 (Longissimus dorsi, lambs) and 1.22 (Triceps brachii, lambs), close to, though slightly above, the recommended value of 0.5. The thrombogenicity index highlights the tendency to form blood clots in vessels; both indexes indicate the potential for stimulating platelet aggregation [15]. The relationship between SFA and MUFA underlying these indexes treats the former as proatherogenic (favouring the adhesion of lipids to cells of the immune and circulatory systems and priming the activation of immune cells so that they adhere to the vessel wall) and the latter as antiatherogenic, lowering the levels of esterified fatty acids, cholesterol, and phospholipids and thus preventing micro- and macro-coronary disease [3]. Accordingly, smaller IA and IT values imply a greater protective potential against coronary artery disease.

For lambs, no significant differences (p > 0.05) were observed between the two muscle regions analyzed. By contrast, very significant differences in IA were evident within each muscle according to age at slaughter. The IT of the Longissimus dorsi was not influenced by age at slaughter, showing the lowest values (0.78 and 0.79), which is acceptable from a human health point of view, whereas Triceps brachii from lambs showed a high IT of 1.22. Overall, the muscles studied in this research showed IA of 0.44-0.67 and IT of 0.78-1.22. These results agree closely with those calculated from the fatty acids of other species: goose (IA = 0.36 and IT = 0.7) [15], rabbit (IA = 0.9 and IT = 1.19) [48], chicken (IA = 0.49 and IT = 1.14) [41], turkey (IA = 0.47-0.78 and IT = 0.62-0.91) [24,42], beef (IA = 0.6 and IT = 1.86) [49], and pork (IA = 0.47 and IT = 1.12) [50].
Compared with other lamb breeds (IA = 0.9 and IT = 0.87) [6], the IA values for the lambs in this study are much lower.
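The paper reports IA and IT values without printing their formulas; the sketch below follows the standard Ulbricht and Southgate [56] definitions, to which these indexes are conventionally attributed, with hypothetical input values:

```python
# Atherogenicity (IA) and thrombogenicity (IT) indexes per the standard
# Ulbricht & Southgate formulas (assumed; not stated in this paper).
# `fa` maps fatty acid classes to % of total fatty acids (hypothetical).
fa = {"C12:0": 0.3, "C14:0": 3.2, "C16:0": 22.5, "C18:0": 13.9,
      "MUFA": 36.0, "n6": 14.0, "n3": 8.0}

ia = (fa["C12:0"] + 4 * fa["C14:0"] + fa["C16:0"]) / (
    fa["MUFA"] + fa["n6"] + fa["n3"]
)
it = (fa["C14:0"] + fa["C16:0"] + fa["C18:0"]) / (
    0.5 * fa["MUFA"] + 0.5 * fa["n6"] + 3 * fa["n3"] + fa["n3"] / fa["n6"]
)
print(f"IA = {ia:.2f} (target < 1.0), IT = {it:.2f} (target < 0.5)")
```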
The hypocholesterolemic/hypercholesterolemic (HH) ratio relates the fatty acid content of the meat to plasma low-density lipoproteins, contrasting hypocholesterolemic fatty acids (18:1n-9 and PUFA) with hypercholesterolemic ones (12:0, 14:0, 16:0); it thus serves to assess the cholesterolemic effect of lipids. In this study, the HH values ranged between 2.17 and 2.67. Nutritionally, higher HH values are considered more beneficial for human health [3]. Since lipid deposition differs among breeds, this index seems mainly affected by age at slaughter and muscle region, with very significant differences (p < 0.001). Compared with values found in other studies of lambs (1.55-1.57) [6], the values from this study are the highest, though close to those found by [3] for goose (2.6-2.82). Another study [44] comparing meat from four breeds of suckling lambs reported HH values between 1.88 and 2.36, close to those obtained here. Higher HH values have been reported for duck meat (3.5 [51]), marine fish fillets (3.1 [52]), and crab fillets (3.4 [53]). With respect to atherosclerosis risk, the HH values obtained for lambs and adult sheep in this study compare favourably to rabbit (1.2 [48]), chicken (1.8 [54]), bovine (1.8 [49]), and swine (2.4 [55]).
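A direct sketch of the HH ratio as defined above (hypocholesterolemic 18:1n-9 plus PUFA over hypercholesterolemic 12:0, 14:0 and 16:0), again with hypothetical values:

```python
def hh_index(c18_1n9: float, pufa: float,
             c12: float, c14: float, c16: float) -> float:
    """HH = (18:1n-9 + PUFA) / (12:0 + 14:0 + 16:0), per the text's definition."""
    return (c18_1n9 + pufa) / (c12 + c14 + c16)

# Hypothetical % TFA values; higher HH is considered nutritionally better.
print(f"HH = {hh_index(33.6, 22.0, 0.3, 3.2, 22.5):.2f}")
```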
Age at slaughter produced very significant differences (p < 0.001) in the Δ5-/Δ6-desaturase index for the Triceps brachii, with higher values in the samples collected from adult sheep. Statistical analysis also revealed very significant differences (p < 0.001) between muscle regions, with higher values for the Triceps brachii. Adult sheep thus synthesize larger amounts of LC-PUFA from precursors, as indicated by the Δ5-/Δ6-desaturase index, arguably the most valid tool for verifying the capacity to synthesize LC-PUFA.
The Triceps brachii represents a muscle region with double the efficiency in LC-PUFA synthesis, especially in adult sheep (Table 3).
The health-promoting index (HPI) identifies the effects of fatty acids on cardiovascular disease; as the inverse of the atherogenicity index, it shows values inverse to IA (1.5 and 2.27 in the Triceps brachii and Longissimus dorsi of adult sheep, respectively) (Table 3). Flesh lipid quality (FLQ) evaluates EPA + DHA, calculated as a percentage of total fatty acids. The Triceps brachii of adult sheep showed an almost five times higher FLQ than the same muscle from lambs; the Longissimus dorsi showed no difference (p > 0.05) with age at slaughter, while muscle region produced very significant differences (p < 0.001) in both age categories.
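Sketches of HPI and FLQ under stated assumptions: HPI is taken as the inverse of IA, per the text, and FLQ as the commonly used definition 100 × (EPA + DHA)/total fatty acids, since the paper's wording on FLQ is ambiguous. All input values are hypothetical:

```python
def hpi(mufa: float, pufa: float, c12: float, c14: float, c16: float) -> float:
    """Health-promoting index: inverse of the atherogenicity index."""
    return (mufa + pufa) / (c12 + 4 * c14 + c16)

def flq(epa: float, dha: float, total_fa: float = 100.0) -> float:
    """Flesh lipid quality: EPA + DHA as a percentage of total fatty acids."""
    return 100.0 * (epa + dha) / total_fa

print(f"HPI = {hpi(36.0, 22.0, 0.3, 3.2, 22.5):.2f}")
print(f"FLQ = {flq(epa=0.9, dha=0.7):.2f}")
```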
Drawing on critical reports in the scientific literature, other authors [14] elaborated two indexes (healthy fatty index 1 and 2). Their essence is to give different weights to the various classes of fatty acids according to their health properties. HFI 1 places the fatty acids with a favourable impact on human health in the numerator, weighted by the empirical constants of Ulbricht and Southgate [56] (8 for n-3, 4 for n-6, and 2 for MUFA), to reflect the nutritional and health impact of the various classes relative to total lipids. In this study, the Longissimus dorsi from lambs and the Triceps brachii from adult sheep showed the highest HFI 1 values (20.99 and 24.74, respectively); both muscle region and age at slaughter produced very significant differences (p < 0.001) in HFI 1 (Table 3). For HFI 2, the Longissimus dorsi was clearly not influenced by age at slaughter (p > 0.05). In lamb meat, the Longissimus dorsi showed higher HFI 2 values (p < 0.001) (2.76 vs. 1.7 in the Triceps brachii); the same pattern, with higher values in the Longissimus dorsi, held for adult sheep (p < 0.05) (Table 3). HFI values are higher in very lean meat [14]. Flesh lipid quality, the health-promoting index, and the healthy fatty indexes appear only rarely in discussions of sheep meat.
Palmitic and oleic acids are the principal fatty acids, and the ratio (C18:0 + C18:1)/C16:0 is an important criterion for predicting meat lipid quality, defined as the nutritional value index (NVI) of meat [32]. Other authors [32,57] obtained slightly lower values than the 2.4-2.56 found in this study; the same authors reported literature NVI values between 2 and 3 for lamb meat. The values found here are lower than those reported by [58] for Turkish lambs (2.49-2.67). The highest NVI in this study was recorded for the Longissimus dorsi of adult sheep and the Triceps brachii of lambs (Table 3), driven by the highest proportions of 18:0 and 18:1n-9 and the lowest of 16:0.
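Since the paper gives the NVI formula explicitly, a one-line sketch suffices (input percentages are hypothetical):

```python
def nvi(c18_0: float, c18_1: float, c16_0: float) -> float:
    """Nutritional value index: (C18:0 + C18:1)/C16:0, as stated in the text."""
    return (c18_0 + c18_1) / c16_0

print(f"NVI = {nvi(15.0, 35.0, 20.0):.2f}")  # this study reported 2.4-2.56
```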
The results of this study highlight how complex it can be to index fatty acids to estimate the nutritional properties and health benefits of sheep meat.
Conclusions
The Karakul breed plays a significant role in sheep meat production, offering high nutritional value. The results indicate that, qualitatively, the Longissimus dorsi is superior to the Triceps brachii owing to its lower SFA content (39.66% in Longissimus dorsi vs. 45.57% in Triceps brachii).
The n-6/n-3 index shows that Triceps brachii meat from lambs and adult sheep has a PUFA ratio closer to equilibrium, whereas the Longissimus dorsi has a lower ratio. Given the antagonistic effects of n-3, n-6, and LC-PUFA, whose proportional ratio should be close to 4/1, Triceps brachii from lambs could play a significant role in regulating the body's homeostasis of inflammation and anti-inflammation, vasodilation and vasoconstriction, and platelet aggregation and anti-aggregation.
Triceps brachii meat represents a rich source of essential fatty acids, such as EPA, DPA, and DHA, that serve important cellular functions. The data on sheep meat quality and sanogenic lipid indicators are important, offering a positive basis for developing marketing strategies that would allow individual farmers and farmer groups, processors, and local authorities to promote the Karakul breed on the strength of objective analyses.

Institutional Review Board Statement: No animals were used directly for the application of experimental factors. The study focused on the analysis of meat sampled from carcasses of farmed sheep destined for the meat market.
Penetrance estimates for BRCA1 and BRCA2 based on genetic testing in a Clinical Cancer Genetics service setting: Risks of breast/ovarian cancer quoted should reflect the cancer burden in the family
Background The identification of a BRCA1 or BRCA2 mutation in familial breast cancer kindreds allows genetic testing of at-risk relatives. However, considerable controversy exists regarding the cancer risks in women who test positive for the family mutation. Methods We reviewed 385 unrelated families (223 with BRCA1 and 162 with BRCA2 mutations) ascertained through two regional cancer genetics services. We estimated the penetrance for both breast and ovarian cancer in female mutation carriers (904 proven mutation carriers; 1442 females in total assumed to carry the mutation) and also assessed the effect on penetrance of mutation position and birth cohort. Results Breast cancer penetrance to 70 and to 80 years was 68% (95%CI 64.7-71.3%) and 79.5% (95%CI 75.5-83.5%) respectively for BRCA1 and 75% (95%CI 71.7-78.3%) and 88% (95%CI 85.3-91.7%) for BRCA2. Ovarian cancer risk to 70 and to 80 years was 60% and 65% for BRCA1 and 30% (95%CI 25.5-34.5%) and 37% (95%CI 31.5-42.5%) for BRCA2. These risks were borne out by a prospective study of cancer in the families and genetic testing of unaffected relatives. We also found evidence of a strong cohort effect, with women born after 1940 having a cumulative risk of 22% for breast cancer by 40 years of age compared to 8% in women born before 1930 (p = 0.0005). Conclusion In high-risk families selected in a genetics service setting, women who test positive for the familial BRCA1/BRCA2 mutation are likely to have cumulative breast cancer risks in keeping with the estimates obtained originally from large families. This is particularly true for women born after 1940.
Background
Since the identification of the BRCA1 [1] and BRCA2 [2] genes, a great deal of debate has focussed on the breast and ovarian cancer risks associated with mutations in these genes. It is clear that calculated cancer risks depend on the method of ascertainment of the families studied. Thus, breast cancer risks in large familial breast cancer kindreds with BRCA1/BRCA2 mutations are substantially higher than risks derived from population-based studies [3,7,8]. In the high-risk families recruited to the Breast Cancer Linkage Consortium (BCLC) cohort, BRCA1 and BRCA2 mutations were estimated to cause a cumulative lifetime risk of breast cancer at age 70 years of 85-87% and 77-84%, respectively [3,7,8]. However, estimates of breast cancer risk to age 70 derived from population-based studies to date are much lower, at 28-60% [4-6] for BRCA1, and lower still for BRCA2; it has been suggested that even these studies may overestimate the effect of the BRCA1/2 mutation alone [9]. Whilst there is some evidence of variation of cancer risk by position of the mutation within each gene [10-12], more variation occurs between families with the same mutation. It is therefore likely that a substantial proportion of the breast cancer risk in strong familial clusters with a BRCA1/2 mutation (the group of families usually seen by a cancer genetics service) is contributed by modifier genes [13]. Optimum clinical practice requires that the cancer risks provided to families undergoing genetic testing are appropriate to the setting in which the mutation was detected. To determine the most appropriate risks for women attending clinical cancer genetics services, we determined the cumulative risks of breast and ovarian cancer for 385 families with pathogenic BRCA1/2 mutations identified in North West and Central England, covering a population of 10 million.
Index cases and relatives
Breast and ovarian cancer families have been tested for BRCA1/2 mutations (by whole-gene analysis, including a test for large deletions) since 1996 in the overlapping regions of Manchester and Birmingham in mid-north England. All genetic testing is undertaken with informed consent, and consent is also obtained to confirm cancer diagnoses. The study was carried out with Local Ethics Committee approval. Women who attend the specialist genetics clinics in these regions with a family history of breast/ovarian cancer have a detailed family tree elicited, recording all first-, second- and, where possible, third-degree relatives. If a BRCA1/2 mutation is identified, further extensive attempts are made to ensure that all individuals at risk of inheriting the family mutation are represented on the pedigree. All cases of breast or abdominal cancer are confirmed from hospital/pathology records, the Regional Cancer Registries (data available from 1960) or death certification. Once a family-specific pathogenic BRCA1/2 mutation is identified, predictive testing is offered to all blood relatives. Where possible, all women affected with breast/ovarian cancer are tested to establish the true extent of BRCA1/2 involvement in the family. In many large families it is possible to establish "obligate" gene carriers by testing for the same mutation in different branches of the family, thereby establishing that intervening relatives carry the same mutation.
All female BRCA1/2 mutation carriers identified were included in this study, and their details, together with those of all tested relatives and of untested first-degree female relatives, were entered onto a FileMaker Pro 5 database. The individual in whom a mutation was first identified was designated the "index" case, with all other individuals classified by their position in the pedigree relative to a proven mutation carrier. All untested women reaching 20 years of age were entered, with the exception of mothers of a mutation carrier when it was clear that the mutation was paternally inherited. From the 385 index cases and their extended pedigrees, information on a total of 2466 females was collected: date of birth, date of last follow-up, breast cancer status, ovarian cancer status, dates of diagnoses, date of death (if applicable), the gene mutation carried in the family, relationship to a known mutation carrier, mutation status, and date of testing.
The proportion of unaffected first-degree relatives (FDRs) testing positive or negative was derived for each age cohort, and these figures were used to estimate the proportion of untested relatives likely to test positive in each age group. The proportion of untested FDRs with breast or ovarian cancer likely to test positive was similarly estimated from the testing that had taken place in each family. Penetrance analysis included all mutation-positive individuals and appropriate numbers of untested FDRs on this proportional basis. Kaplan-Meier curves were derived for breast and ovarian cancer incidence for each gene, and for each gene split by the previously identified ovarian cancer cluster region (OCCR): exon 11 (nucleotides 2401-4190) for BRCA1 and exon 11 (nucleotides 3035-6629) for BRCA2. For BRCA1 we used the nucleotide range identified by the BCLC [11]; although this region is not traditionally called an OCCR, it is the region published as carrying the greatest proportional risk of ovarian cancer. Individuals were censored at age of death, age of last follow-up, age at the relevant cancer, or age at the relevant risk-reducing surgery (oophorectomy for ovarian cancer; mastectomy and oophorectomy for breast cancer). The Manchester scoring system was used to assess the strength of the breast/ovarian cancer history [14]. This system was devised to assess the likelihood of a BRCA1/2 mutation; it scores each breast and ovarian cancer in the family individually, with higher scores for younger ages at diagnosis [14]. A combined score of 20 reflects a 20% likelihood of identifying a BRCA1/2 mutation.
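A minimal sketch of this censoring scheme, assuming the Python lifelines package (the paper does not name its survival software); the five records below are hypothetical:

```python
from lifelines import KaplanMeierFitter

# Age at event or censoring; event = 1 for breast cancer, 0 for censoring
# (death, last follow-up, or risk-reducing mastectomy/oophorectomy).
ages = [42, 55, 63, 70, 71]
events = [1, 0, 1, 0, 0]

kmf = KaplanMeierFitter()
kmf.fit(durations=ages, event_observed=events, label="BRCA1 breast cancer")

# Cumulative penetrance to a given age is 1 minus the survival function.
penetrance_70 = 1 - kmf.survival_function_at_times(70).iloc[0]
print(f"Cumulative risk to 70 years: {penetrance_70:.0%}")
```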
Results
The 385 families comprised 223 apparently unrelated BRCA1 families and 162 BRCA2 families. Mutations were spread throughout both genes, the commonest being the Jewish exon 2 founder mutation 185delAG (31 families). There were also 20 families with single- or multi-exon deletions or duplications in BRCA1 and 6 in BRCA2. These families contained 904 proven female mutation carriers (526 BRCA1; 378 BRCA2). There were 992 female FDRs of unknown mutation status (554 in BRCA1 and 438 in BRCA2 kindreds), of whom 244 had been diagnosed with breast cancer, 88 with ovarian cancer and 14 with both. Of FDRs with breast cancer, 21/206 (10%) tested negative for the family mutation, compared with only 1/101 FDRs with ovarian cancer. The age distribution of the breast cancer cases testing negative was identical to that of cases testing positive; we therefore assumed that every tenth untested FDR with breast cancer (only) was negative for the mutation in each gene. All 21 individuals testing negative for the family mutation were also negative for the CHEK2 1100delC mutation. As 99% of the ovarian cancers tested were positive, we assumed that all FDRs with ovarian cancer were positive. The results of predictive testing of unaffected females for each gene are shown in Table 1. We assumed that similar proportions of untested unaffected female relatives would test positive for each gene; we therefore stratified these relatives by age and excluded an increasing proportion of them, as indicated for each age group. For the age group of 60 years and over, we assumed that 10% would be positive for BRCA1 and 20% for BRCA2.
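The proportional-inclusion step can be sketched as follows; apart from the stated 10%/20% assumption for the 60+ group, the rates and counts below are hypothetical placeholders rather than the figures of Table 1:

```python
# Fraction of tested, unaffected FDRs found positive, per age band
# (hypothetical, except the 60+ assumption stated in the text for BRCA1).
observed_positive_rate = {
    "20-39": 0.45,
    "40-59": 0.30,
    "60+": 0.10,  # stated assumption for BRCA1 (0.20 for BRCA2)
}
# Hypothetical counts of untested, unaffected FDRs in each age band.
untested_unaffected = {"20-39": 120, "40-59": 80, "60+": 40}

# Include that fraction of each band as presumed carriers in the analysis.
presumed_carriers = {
    band: round(n * observed_positive_rate[band])
    for band, n in untested_unaffected.items()
}
print(presumed_carriers)
```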
The proportions testing positive for each gene, for Manchester scores [14] above and below 20 and 23, are presented in Table 2. This shows a substantial effect of cancer burden for BRCA2, with high-risk families (scores above 20 points) having a much lower proportion of positive predictive tests after 50 years of age.
Overall, of the FDRs of unknown mutation status, 92/92 with ovarian cancer, 220/244 with breast cancer and 234/648 unaffected FDRs were included in the analysis. In total this amounted to 839 actual and presumed carriers for BRCA1 and 603 for BRCA2. There were 243/839 (29%) BRCA1 carriers with ovarian cancer compared to 64/603 (11%) female BRCA2 carriers, while 411/839 (49%) BRCA1 carriers and 355/603 (59%) BRCA2 carriers had developed breast cancer. Penetrance estimates for breast and ovarian cancer for each gene are shown in Table 3 and Figures 1 and 2. The curves were remarkably similar for each gene, with breast cancer penetrance to 70 and 80 years of 68% (95%CI 65-71%) and 79.5% (95%CI 75-84%) for BRCA1 and 74% (95%CI 71-77%) and 88% (95%CI 85-91%) for BRCA2. Ovarian cancer risk to 70 and 80 years was 60% and 65% for BRCA1 and 30% (95%CI 25.5-34.5%) and 37% (95%CI 31.5-42.5%) for BRCA2. The penetrance for ovarian cancer was significantly higher for BRCA1 (p < 0.0001), whereas breast cancer incidence for BRCA2 was of borderline significance compared with BRCA1 (p = 0.09); indeed, breast cancer penetrance estimates for BRCA2 after 60 years were significantly higher, as was overall penetrance including the index case (p = 0.02). There was no significant effect of the ovarian cancer cluster regions (OCCR; nucleotides 2401-4190 in BRCA1 and 3035-6629 in BRCA2) for either gene: lifetime ovarian cancer risks (to 80 years) were 65% for 573 BRCA1 carriers outside the OCCR versus 70% for 266 women with mutations within it (p = 0.18), and similarly for BRCA2, 37% for 373 carriers outside the OCCR versus 41% for 230 women with OCCR mutations (p = 0.17).

An estimate of breast cancer penetrance was also made for each 10-20 year birth cohort. A highly significant difference was identified, with those born after 1960 having a breast cancer risk to 40 years of age of 40%, compared with only 7.5% for those born between 1900 and 1920 (Figure 3; p < 0.00001). After exclusion of the index case, however, the cumulative risk to 40 years dropped to 21-23% for the birth cohorts after 1940; this nonetheless remained a highly significant trend (p = 0.0005). After exclusion of the index case there was no significant birth cohort effect for ovarian cancer (p = 0.086). To assess the effect of earlier age at breast cancer diagnosis on life expectancy, we carried out a Kaplan-Meier survival analysis on the birth cohorts, again excluding the index case; there was no significant difference in survival from birth.

Breast cancer incidence was also assessed after family ascertainment. Incidence figures for breast/ovarian cancer are shown in Table 5. These reflect the incidence in unaffected women at the time of family ascertainment, with follow-up censored at the time of risk-reducing surgery (oophorectomy/mastectomy). As the index case was used to identify the mutation, usually on surveillance, the incidence rates for these cases are artificially high. Excluding the index cases, the annual incidence of breast cancer in proven carriers was 2.5-2.7%. Even including 40% of the follow-up time and 80% of the breast cancers from the FDR-unknown category (Tables 1 and 2), this still gave an annual breast cancer incidence of 1.98% for both BRCA1/2 mutation carriers (38/1917; 35/1763.6).
An annual rate of 2% averaged over the risk period of 30-79 years would, if anything, indicate a higher risk than that indicated by the Kaplan-Meier analysis.
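As a worked check of this statement, a constant annual risk converts to a 10-year risk as 1 − (1 − r)^10; the constant-hazard assumption is mine, not the paper's:

```python
# Convert the ~2%/year incidence into a 10-year cumulative risk, for
# comparison with the highest decade risks in Table 3 (23% BRCA1; 30% BRCA2).
annual_risk = 0.02
ten_year_risk = 1 - (1 - annual_risk) ** 10
print(f"10-year risk at 2%/year: {ten_year_risk:.1%}")  # about 18%
```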
Discussion
We present data on a large cohort of women identified as carriers or presumed carriers of BRCA1 and BRCA2 mutations in a large proportion of the UK population. The penetrance estimates derived from these women are very similar to those derived from the BCLC cohort of high-risk families, with lifetime breast cancer risks close to 85% for both genes [3,7,8]. The ovarian cancer estimates were also very similar, with risks to 70 years of 60% for BRCA1 carriers and 33%, as opposed to 27% [3], for BRCA2 carriers. It is possible that the higher overall breast cancer estimates for BRCA2 relate to competing mortality from ovarian cancer: many risk factors for breast and ovarian cancer are shared (early menarche, late menopause, nulliparity), and women with these factors may have died from ovarian cancer before they developed breast cancer. This effect would be more prominent for BRCA1 and would potentially explain the higher breast cancer penetrance for BRCA2. The ratio of those testing positive to negative for the BRCA mutation while still unaffected also supports high penetrance: of women without an affected daughter, fewer than 10% of those aged over 60 years tested positive for BRCA1 and fewer than 20% for BRCA2, reflecting a higher combined risk of both breast and ovarian cancer to 50 and 60 years for BRCA1, although the figures over 60 years are based on small numbers. Another supportive feature is shown in Table 2: the typical families tested in our centre have a Manchester score of 20+, reflecting multiple early-onset breast and/or ovarian cancers in the family, and the less high-risk clusters (lower Manchester scores) had a higher proportion testing positive over 50 years. This suggests that the Manchester score could be used as a benchmark to predict penetrance, particularly in BRCA2 families. Whilst all attempts to assess penetrance carry inherent biases and assumptions, this cannot be said of the results of presymptomatic testing; the only potential bias would arise if women had an inkling that they would test positive or negative before coming forward, which is not borne out by our results, particularly after accounting for Manchester score.
The previously reported positional effect of mutations in BRCA1 and BRCA2 is not borne out by our analysis. No substantial increase in ovarian cancer risk was seen within the respective ovarian cancer cluster regions of each gene, and only a borderline-significant reduction in breast cancer risk was seen for BRCA2. Much of the OCCR association has been based on ratios of breast to ovarian cancer [10] or on the presence or absence of ovarian cancer in the family [11]; even this reliance on the presence of ovarian cancer for BRCA2 has been questioned by the report that 58% of BRCA2-related ovarian cancer families have mutations outside the OCCR [12]. Although the BCLC study of the BRCA1 positional effect [10] included 356 families compared with our 223, no absolute estimate of penetrance was made: whilst breast cancer incidence was lower in the central portion of the gene (nucleotides 2401-4190; RR 0.71) in their analysis, it was not possible to derive absolute risk figures for each portion of the gene.
Additionally, our more extensive testing of unaffected relatives may provide a more accurate overall picture, as reported here. Accurate estimates of cancer risk are essential for families and individuals undertaking genetic testing. Based on our analysis, it is questionable whether any account should be taken of the OCCR in either gene, or indeed of any substantial positional effect, in genetic counselling.
It is also clear that for individuals undertaking predictive genetic testing in the context of families ascertained through cancer genetics clinics, as opposed to population testing, risk figures similar to those derived in our study or by the BCLC should be quoted; these are the figures quoted in our own clinics, and we recommend that penetrance estimates be derived for the population being counselled. Our data are nonetheless at variance with a similar analysis carried out in North America [15], in which 1948 families were tested for BRCA1/2 mutations in eight centres; 283 families with BRCA1 mutations and 143 with BRCA2 mutations were identified. The authors used statistical modelling to arrive at penetrance figures by 70 years of 46% (95%CI 39-54%) for BRCA1 and 43% (95%CI 36-51%) for BRCA2. They did not appear to take advantage of further testing of relatives within each family, and whilst they corrected for potential ascertainment bias, they did not allow for the effects of modifier genes in these families, looking purely at the risk attributable to BRCA1 and BRCA2 mutations alone, on the basis of the apparent lack of heterogeneity in another study of Jewish families from North America [16]. Particularly concerning is the risk attributed to "non mutation carriers" to 70 years: a figure of 5% as a general population risk for breast cancer may have been correct 20-30 years ago, but it is certainly not the risk faced by women in the US or the UK today. Breast cancer risk to age 70 is 7.6% in the UK [17] and nearer 8% in the US; correcting for this difference might give penetrance figures of nearer 74% for BRCA1 and 69% for BRCA2. The decision not to adjust for the effects of modifier genes in these families is questionable: the difference in penetrance between the BCLC and population studies strongly suggests the presence of additional genetic factors in high-risk families. We have recently reported that those testing negative for a family BRCA mutation are still at a 3-fold relative risk of breast cancer [13]. This phenocopy effect was also seen in the Icelandic data for their founder BRCA2 mutation, although to a lesser extent given the strong population-based element of that analysis [18]. However, it is possible that modifier genes are more prevalent in some populations, and that penetrance in North America is less affected by them than in the UK. The existence of these modifier alleles is now indisputable from recent genome-wide association studies [19-21].
A potential criticism of our study is that we have not taken sufficient account of ascertainment bias and that additional adjustment beyond excluding the index case may be necessary. Analyses using such adjustments were carried out in the North American study [15] and in recent reports from the Cambridge group [22]; however, these studies did not take into account the widespread testing of relatives, and as explained above, the American study deliberately excluded any effect other than that of the BRCA1/2 mutation itself. Whilst it is clearly interesting to know the effect of BRCA1/2 alone, women undergoing testing will want to know their own specific risks of breast and ovarian cancer, including the contribution of other potential "modifier" genes in their family. We must also acknowledge that the confidence intervals in Table 3 should be wider, owing to forcing the data on FDRs of unknown status into a known category.
The high risk to women testing positive is also supported by the prospective part of our study. The 2-2.7% annual risk demonstrated is equivalent to the highest risk in any 10-year period (23% for BRCA1; 30% for BRCA2; Table 3). Although most of the breast cancers were detected by screening, only one was detected at a prevalence mammogram. These follow-up risks are also supported by a similar follow-up study in the Netherlands, where 8 breast cancers occurred in 63 mutation carriers, a calculated annual risk of 2.5% [23].
Our own study and recent analyses from North America and Iceland demonstrate that women in the most recent birth cohorts have a substantially higher risk of developing breast cancer than past cohorts [16,18]. The incidence of breast cancer in BRCA2 carriers has risen 4-fold in 80 years in Iceland (as has breast cancer in the general population), and we observed a similar increase, from a <10% risk by 40 years in those born before 1930 to a 40% risk in those born after 1960, although this was less marked after allowing for ascertainment bias. It is therefore inappropriate to quote lifetime breast cancer risks as low as 43-46% (based on population studies) to women in their twenties or early thirties who test positive for a mutation in a high-risk family. Another potential effect of earlier breast cancer might be reduced life expectancy; however, with increasing survival from birth in the general population and improved survival from breast cancer diagnosis, these elements appear almost completely to cancel each other out, and there is no evidence of improved survival from birth in the modern BRCA birth cohorts.
When discussing the higher risks of breast cancer in recent generations, it is nonetheless important to frame any discussion of risk in terms of the future prospects for risk reduction by preventive measures. Increasing numbers of women are opting for risk-reducing surgery, particularly early risk-reducing oophorectomy (RRO), which substantially reduces the risk of both breast and ovarian cancer [24]. It is also likely that new treatments, or substantial changes from the Western lifestyle, may contribute sufficiently to risk management in the future.
Conclusion
We believe our results show that when counselling women on their risks of breast and ovarian cancer if they carry a family BRCA1/2 mutation, the risks quoted should reflect the context of cancer in their family and not just an average risk from possibly over-corrected penetrance estimates from population studies. Indeed, a recent review in a prestigious journal quoted "headline" risks for BRCA2 of only 40% and 8% for breast and ovarian cancer to 80 years [25], and understandably many clinicians and counsellors may quote these risks. Family cancer burden is already used to adjust carrier risks in the BOADICEA programme [26], and the Manchester score could also serve as a benchmark for where in the 40-90% breast cancer risk range a woman should be placed, especially for BRCA2.
Quantitative detection of free 24S-hydroxycholesterol and 27-hydroxycholesterol from human serum
Background Cholesterol metabolism is important for the maintenance of myelin and neuronal membranes in the central nervous system. The ratio of blood concentrations of the brain-specific cholesterol metabolite 24S-hydroxycholesterol to the peripheral metabolite 27-hydroxycholesterol may be a useful surrogate marker for neurodegenerative diseases including Alzheimer's disease, Huntington's disease, HIV-Associated Neurocognitive Disorders, and Multiple Sclerosis. However, current methods to isolate hydroxycholesterols are labor intensive, prone to variable extraction efficiencies, and do not discriminate between free and esterified forms of hydroxycholesterols. Since free hydroxycholesterols are the biologically active form of these sterols, separating free from esterified forms may provide a sensitive measure to identify disease-associated differences in brain sterol metabolism. Results We found that average human serum concentrations were 12.3 ± 4.79 ng/ml for free 24S-hydroxycholesterol and 17.7 ± 8.5 ng/ml for free 27-hydroxycholesterol. Conclusion Serum measurements of these biologically active oxysterols may be useful surrogate measures for brain health in a variety of neurodegenerative conditions.
Background
Hydroxycholesterols are mono-oxygenated derivatives of cholesterol (cholesten-5-3β-ol) that comprise a family of polycyclic compounds containing a second oxygen atom as a hydroxyl group on the cholesterol skeleton. In vivo, hydroxycholesterols are present in unesterified (biologically active) and esterified (largely biologically inert) forms [1]. Hydroxycholesterols are formed through enzymatic conversion of cholesterol or by free radical autoxidation, and exhibit a short half-life relative to cholesterol. Hydroxycholesterols are important intermediates in a number of catabolic pathways that regulate a variety of biological effects. For example, hydroxycholesterols are important for cholesterol transport from the periphery to the liver [2], modulate the expression of sterol-sensitive genes involved in lipid and sterol biosynthesis [3,4], act as substrates for the formation of bile salts [5], serve as ligands that activate nuclear liver X receptors-α and -β [6], and are involved in the regulation of cholesterol and lipid metabolism and homeostasis [7,8]. In the central nervous system, hydroxycholesterols regulate arachidonic acid release and voltage-gated calcium channels, modulate synaptic plasticity, induce IL-8, promote neurogenesis and induce apoptosis [9-16]. Consistent with these important regulatory roles, levels of free hydroxycholesterols are extremely low and tightly controlled, with the majority of hydroxycholesterols maintained in esterified forms [17,18].
Many cell types have the ability to oxygenate cholesterol by mechanisms that involve the cytochrome P450 family of oxidases (CYP). Cell type-specific expression of CYP subtypes results in the tissue-specific production of particular oxysterol species. Several CYPs are present in the central nervous system, including cholesterol 24S-hydroxylase (CYP46), a P450 family member that is expressed in neurons, glia and endothelial cells of the blood-brain barrier [19]. 24S-hydroxycholesterol (cholest-5-ene-3β,24(S)-diol) is the most abundant hydroxycholesterol in brain and is the primary transport form of cholesterol from the central nervous system into the blood, with smaller amounts eliminated through cerebrospinal fluid [20]. Several studies have suggested that serum or plasma levels of 24S-hydroxycholesterol may reflect brain developmental and neuropathological changes associated with Alzheimer's disease (AD), Huntington's disease and Multiple Sclerosis [21-27]. 24S-hydroxycholesterol is often expressed as a ratio to 27-hydroxycholesterol ((25R)-cholest-5-ene-3β,26-diol). 27-hydroxycholesterol is formed primarily in the periphery by the P450 enzyme sterol 27-hydroxylase (CYP27) [24]. CYP27 is expressed in arterial endothelium and macrophages, and to lesser extents in other tissues such as cortex, spleen, liver, kidney, adrenal gland and heart [28,29]. 27-hydroxycholesterol can function as a ligand for the nuclear receptors liver X receptors (LXR) and farnesoid X-activated receptors (FXRs) [30]. 27-hydroxycholesterol can also regulate hydroxymethylglutaryl-CoA reductase [31,32] and enhances cholesterol efflux from the vascular endothelium [33]. Macrophages have the highest capacity to convert cholesterol to 27-hydroxycholesterol, which is then transported in blood to the liver where it is converted to bile acids [34].
Hydroxycholesterol detection and quantification has been accomplished in a variety of tissues by isotope-dilution gas chromatography-mass spectrometry (GC-MS) [18,35], gas- and high-performance liquid chromatography (HPLC)/mass spectrometry [36], HPLC with UV detection of cholesterol oxidation products in tissues [37], as Δ4-3-ketone derivatives by HPLC [38], and as derivatives of GP hydrazones [39]. Because "free" hydroxycholesterol levels are below the detection limits of many instruments, saponification and/or solid phase extraction techniques have typically been used to extract "total" hydroxycholesterols. The primary advantage of these methods is a high yield of hydroxycholesterols. The disadvantages include lengthy sample preparation times, sample loss, inconsistent yields, and the inability to discriminate between free and esterified hydroxycholesterols. In this study we developed a simple and direct extraction protocol and a sensitive LC/ESI/MS/MS method for the separation and simultaneous quantitative determination of free 24S-hydroxycholesterol and 27-hydroxycholesterol in serum. Collision-induced dissociation of individual hydroxycholesterol species was then characterized by product ion scanning using the highest-abundance product ions with adequate signal-to-noise ratios. For 24S-hydroxycholesterol and 27-hydroxycholesterol the molecular fragment identified was 385.3 m/z (Figure 2B). For 24(R/S)-hydroxycholesterol (d6) the molecular ion identified was 426.6 m/z and the most abundant product was 373.7 m/z (Figure 2C,D). We optimized the following mass spectrometer parameters, which were then used for all hydroxycholesterol species: ion spray voltage (IS) 2500 V, temperature (TEM) 250°C, nebulizer gas (NEB) 13 psi, curtain gas (CUR) 8 psi, collision-activated dissociation (CAD) 12 psi, dwell time (DW) 150 msec, and entrance potential (EP) 10 V. The declustering potential (DP), focusing potential (FP), collision energy (CE), collision exit potential (CXP), orientation of the electrospray needle and auxiliary gas flow were individually optimized for each analyte by both direct infusion and flow injection analysis (FIA) to maximize accuracy and sensitivity (Table 1). We evaluated two transitions for 24(R/S)-hydroxycholesterol (d6) (426.6/373.7 and 426.7/391.6) and ultimately chose 426.6/373.7, as this transition displayed a high signal-to-noise ratio, an abundant and stable signal, and a stable low baseline, in contrast to the 426.7/391.6 transition, which showed an unstable baseline (see Figure 2D).
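For orientation, the acquisition settings described above can be summarized in a simple method table. The numeric values are those quoted in the text; the dictionary layout is an illustrative convenience, not vendor software syntax.

```python
# Hypothetical summary of the MRM method described in the text.
# Values are taken from the paper; the data structure is illustrative only.
source_params = {
    "ion_spray_voltage_V": 2500,
    "temperature_C": 250,
    "nebulizer_gas_psi": 13,
    "curtain_gas_psi": 8,
    "CAD_gas_psi": 12,
    "dwell_time_ms": 150,
    "entrance_potential_V": 10,
}

# MRM transitions (precursor m/z -> product m/z), positive mode.
# 24S- and 27-hydroxycholesterol share a transition and are
# distinguished chromatographically by retention time.
transitions = {
    "24S-hydroxycholesterol": (420.3, 385.3),
    "27-hydroxycholesterol": (420.3, 385.3),
    "24(R/S)-hydroxycholesterol-d6 (internal standard)": (426.6, 373.7),
}

for analyte, (q1, q3) in transitions.items():
    print(f"{analyte}: {q1} -> {q3}")
```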
Optimization of HPLC conditions for separation of hydroxycholesterols
Hydroxycholesterols were separated by HPLC using a C18 column. The HPLC gradient conditions were optimized to obtain good separation between 24S-hydroxycholesterol and 27-hydroxycholesterol with a short run time (~12 min). The best signal-to-noise separation was observed using pure CH3OH containing 5 mM HCOONH4 as a linear mobile phase.

Figure 2. Identification and fragmentation of 24S- or 27-hydroxycholesterol and 24(R/S)-hydroxycholesterol (d6). Mass spectra for A) identification (Q1 scan) and B) fragmentation (product ion scan) of a purified 24S- or 27-hydroxycholesterol standard (420.3 m/z); C) identification (Q1 scan) and D) fragmentation (product ion scan) of 24(R/S)-hydroxycholesterol (d6) (426.6 m/z) in positive mode using an API3000 mass spectrometer.
Linearity, limits of detection and quantification
Defined amounts of both 24S-hydroxycholesterol and 27-hydroxycholesterol standards (10, 50, 100, 500, 1000 ng/ml) were added to control serum samples prior to extraction. Standard curves were plotted as the ratio of the peak area for 24S-hydroxycholesterol or 27-hydroxycholesterol to the peak area of the internal standard 24(R/S)-hydroxycholesterol (d6). Least-squares regression analysis of the 24S-hydroxycholesterol and 27-hydroxycholesterol standard curves demonstrated linearity in the range of 10-1000 ng/ml, with correlation coefficients (r²) of 0.9979 ± 0.0018 for 24S-hydroxycholesterol and 0.9940 ± 0.0018 for 27-hydroxycholesterol (Table 3, Figure 4A). The limit of detection was calculated using the signal-to-noise ratio. The lower detection limits for 24S-hydroxycholesterol and 27-hydroxycholesterol were nearly identical at 248 fmol on the column. We then calculated the concentrations of free 24S-hydroxycholesterol and 27-hydroxycholesterol in our human serum samples. The average serum concentrations were 12.3 ± 4.79 ng/ml for 24S-hydroxycholesterol and 17.7 ± 8.5 ng/ml for 27-hydroxycholesterol (Figure 4).
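A minimal sketch of the internal-standard calibration described above. The spiked concentrations are those in the text, but the peak areas below are placeholders, not the study's data; real values would come from the MRM chromatograms.

```python
import numpy as np

# Known spiked concentrations (ng/ml), as in the text.
conc = np.array([10, 50, 100, 500, 1000], dtype=float)

# Hypothetical peak areas for the analyte and the internal standard (IS).
area_analyte = np.array([1.2e4, 6.1e4, 1.19e5, 6.0e5, 1.21e6])
area_is = np.full(5, 1.0e5)

# Response = analyte area / IS area; fit response vs. concentration.
response = area_analyte / area_is
slope, intercept = np.polyfit(conc, response, 1)

def quantify(sample_area, sample_is_area):
    """Back-calculate concentration (ng/ml) from a sample's peak areas."""
    r = sample_area / sample_is_area
    return (r - intercept) / slope

print(f"{quantify(1.5e4, 1.0e5):.1f} ng/ml")  # ~12 ng/ml for these placeholders
```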
Discussion
The goal of this study was to develop an efficient and rapid extraction protocol for LC/ESI/MS/MS-based detection and quantification of free 24S-hydroxycholesterol and 27-hydroxycholesterol from human serum. For clinical studies, and for the potential use of these hydroxysterols as surrogate markers, it is important that sample analysis be rapid and cost effective. Here we present a simple, inexpensive, and rapid protocol for the extraction of 24S-hydroxycholesterol and 27-hydroxycholesterol from human serum. In addition, the simplicity of the extraction method increases data reproducibility by decreasing variability in product yield. This low-cost and rapid sample preparation, coupled with the high sensitivity of LC/MS/MS instruments and accurate quantification by MRM, makes this a potentially powerful approach for the high-throughput quantification of hydroxycholesterol species in clinical and experimental samples.
This extraction method does not use saponification and therefore measures free hydroxycholesterols. This is an important consideration since free hydroxycholesterols are the most biologically active form of these sterols [40,41]. In our healthy human volunteers, serum concentrations ranged from 4 to 21 ng/ml for 24S-hydroxycholesterol and 4 to 29 ng/ml for 27-hydroxycholesterol. These ranges are considerably lower than previously reported levels of total hydroxycholesterol in human serum, which range from 60 to 83 ng/ml for 24S-hydroxycholesterol and 120 to 159 ng/ml for 27-hydroxycholesterol [18,24,42,43]. These data are consistent with findings suggesting that more than 80% of 24S-hydroxycholesterol and 27-hydroxycholesterol is maintained in an esterified state [17].
Since this is the first report of this extraction method, it is not possible to determine whether measuring free hydroxycholesterols has a diagnostic or experimental advantage over measuring total hydroxycholesterols. In this study we analyzed a small number of samples to validate the method and did not compare to a disease state. However, we have recently used this method to quantitatively measure levels of free 24S-hydroxycholesterol and 27-hydroxycholesterol in serum of subjects who later developed cognitive impairment, and found that increased levels of free 24S-hydroxycholesterol and the 24S-hydroxycholesterol/total cholesterol ratio were associated with greater risk of impairment on tasks that assess psychomotor speed and executive functioning, while higher levels of free 27-hydroxycholesterol and the 27-hydroxycholesterol/total cholesterol ratio were associated with greater risk of delayed memory impairment. These data were qualitatively different from previous reports that measured total serum levels of these hydroxysterols. For example, the total 27-hydroxycholesterol to total cholesterol ratio was associated with a faster decline of immediate memory recall over six years of follow-up [44]. A second study did not find an association between total serum 24S-hydroxycholesterol or total 27-hydroxycholesterol and cognitive performance [45], but this study also measured total hydroxysterols. To date, only a single study has directly compared free to total levels of 24S-hydroxycholesterol and 27-hydroxycholesterol in serum. That study reported that 80% of 24S-hydroxycholesterol and 85% of 27-hydroxycholesterol is esterified in healthy volunteers. Males had higher levels of total 27-hydroxycholesterol than females. The study found no other demographic or age-related differences in total 24S-hydroxycholesterol or 27-hydroxycholesterol, and did not determine whether there were age- or disease-related differences in free vs. esterified hydroxycholesterols. Since it is the free forms of these hydroxysterols that have biological activity, and free forms are less than 20% of total hydroxysterols, it is possible that measuring total 24S-hydroxycholesterol and 27-hydroxycholesterol could mask a biological or disease-associated effect. These findings suggest that there may be important differences in free vs. total 24S-hydroxycholesterol and 27-hydroxycholesterol in relation to sex, age and neurodegenerative conditions that merit further study.
The ability to economically and efficiently measure 24S-hydroxycholesterol and 27-hydroxycholesterol in serum may also be useful for surrogate measures of the effectiveness of chemotherapeutics. The approximate cost per sample for this rapid extraction method is $3.00. Saponification with solid phase extraction raises the approximate cost to $18.00/sample. These costs are for sample processing and do not include mass spectrometry time, which is equal for both methods. Likewise, sample run times are 12 min/sample regardless of the extraction method. A number of sterol-modifying agents are being tested as potential therapeutics for neurodegenerative disease [46-52], and it is possible that serum measures of 24S-hydroxycholesterol and 27-hydroxycholesterol may have utility as rapid and inexpensive surrogate markers to efficiently determine the effectiveness of therapeutics.
Conclusion
Serum measurements of these biologically active hydroxycholesterols may be useful surrogate measures for brain health in a variety of neurodegenerative conditions.
Extraction of 24S-hydroxycholesterol and 27-hydroxycholesterol
We developed a single-step direct extraction method for both 24S-hydroxycholesterol and 27-hydroxycholesterol. For extraction, 0.5 ml of serum was transferred into a glass tube and 5 μl of 24(R/S)-hydroxycholesterol (d6) (internal standard) from a 100 μg/ml stock was added, followed by 3 ml of pure ethanol, and the mixture was vortexed. Diethyl ether (4 ml) was then added and the mixture vortexed and centrifuged at 4,000 g for 10 minutes. The supernatant was separated and the residue was re-extracted using the same volumes of solvents as in the initial extraction. The supernatants were combined and dried under a stream of nitrogen or in a vacuum oven at 30°C (we did not observe any qualitative or quantitative differences when samples were dried under nitrogen compared with a vacuum oven). Dried extracts were re-suspended in 100 μl of methanol, vortexed, centrifuged and transferred to an autosampler vial insert, where they were maintained at 4°C. Samples were injected into the HPLC using an Agilent 1100 series autosampler (Agilent Technologies, Inc., Santa Clara, CA, United States).
Quantification of hydroxycholesterols by LC/ESI/MS/MS using multiple reaction monitoring (MRM)
Sample analysis was performed using a triple-quadrupole API3000 LC/ESI/MS/MS mass spectrometer (Applied Biosystems, Thornhill, Ontario, Canada). The HPLC consisted of an Agilent 1100 series with a quaternary pump, degasser, autosampler and thermostatted column compartment. The column was a Luna 5 μm C18 100 Å (100 × 2 mm) coupled to a guard column with packing material identical to the column (Phenomenex, Torrance, CA). Chromatography was conducted in gradient elution mode using solvent A (water with 5 mM ammonium formate) and solvent B (pure methanol with 5 mM ammonium formate) at a flow rate of 0.3 ml/min. Hydroxycholesterols were separated using the following gradient conditions: 0.0-0.3 min, 85% B; 0.3-9 min, gradient to 100% B; 9-12 min, 0% B (Table 4). The injection volume was 10 μl. Quantification was conducted by MRM using Analyst 1.4.2 software (Applied Biosystems).
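To make the gradient program concrete, here is a small sketch that evaluates %B over the run. The segment values are those quoted above; treating the pump as ramping linearly within each segment is an assumption for illustration.

```python
# Gradient program segments (t_start, t_end, %B_start, %B_end), from the text.
segments = [(0.0, 0.3, 85, 85), (0.3, 9.0, 85, 100), (9.0, 12.0, 0, 0)]

def percent_b(t):
    """Return %B at time t (min), interpolated linearly within each segment."""
    for t0, t1, b0, b1 in segments:
        if t0 <= t <= t1:
            return b0 + (b1 - b0) * (t - t0) / (t1 - t0)
    raise ValueError("time outside the 12-min gradient program")

print(percent_b(5.0))  # ~93.1 %B during the ramp towards 100 %B
```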
Soft Magnetic Properties of Ultra-Strong and Nanocrystalline Pearlitic Wires
The paper describes the capability of magnetic softening of a coarse-grained bulk material by a severe deformation technique. Connecting the microstructure with magnetic properties, the coercive field decreases dramatically for grains smaller than the magnetic exchange length. This makes the investigation of the soft magnetic properties of severely drawn pearlitic wires very interesting. Thanks to the starting two-phase microstructure, it is possible to substantially refine the material, which allows the investigation of the magnetic properties of nanocrystalline bulk material. Compared to the coarse-grained initial, pearlitic state, the coercivities of the highly deformed wires decrease while the saturation magnetization values increase, even beyond the value expected from the individual constituents. The lowest coercivity in the drawn state is found to be 520 A m^-1 for a wire of 24 µm thickness, and an annealing treatment has a further positive effect on it. The decreasing coercivity is discussed in the framework of two opposing models: grain refinement on the one hand and dissolution of cementite on the other. Auxiliary measurements give a clear indication for the latter model, delivering a sufficient description of the observed evolution of magnetic properties.
Introduction
Severely drawn pearlitic steels have drawn the attention of scientists as well as engineers due to their exceptionally high strength [1]. Alongside the mechanical characterization of this interesting material, substantial efforts regarding microstructural characterization after application of high drawing strains were undertaken [2-6]. Therein, the nanocrystalline microstructure, the dissolution of cementite (Fe3C), and a small tetragonal distortion (c/a < 1.01) of the remaining ferrite were revealed. Accompanying the nanocrystalline microstructure (see Figure 1a,b), an exceptionally high strength of almost 7 GPa was found for wires subjected to a true drawing strain ε of up to 6.5 [3,7]. This is the highest strength ever measured for a material produced by metal forming techniques. Furthermore, Li et al. [7] used atom probe tomography (APT) to show that subgrain sizes of 10 nm and below can be found in wires at such a degree of deformation, which is the smallest grain size produced by conventional metal forming techniques. APT also revealed the dissolution of cementite by severe wire drawing: Fe3C is dissolved and carbon is mechanically alloyed into the ferrite, but also decorates the grain boundaries.
Not only the mechanical properties but also the physical properties are affected by the substantial microstructural refinement. It is well known that the microstructure of a ferromagnetic material has a strong influence on its magnetic properties. Consequently, if one alters the microstructure (e.g., by refining the grain size D) in a controlled manner, the coercive field strength H_c can be modified correspondingly. Herzer showed for very small grains from the so-called random anisotropy regime, where the magnetic exchange length L_ex covers many grains, that coercivity collapses with D^6 for decreasing grain sizes [8,9]. From the seminal work on FeCuNbSiB it is well known that adequate annealing treatments of rapidly quenched, amorphous materials precipitate very small grains, enabling the tuning of coercivity [8]. This process can be regarded as a "bottom-up" process, as small crystallites are generated from an amorphous phase. In contrast, strong or severe plastic deformation (SPD) methods can be regarded as "top-down" processes, where microstructural changes of coarse materials result in ultra-fine grained or nanocrystalline microstructures, producing large changes not only in mechanical but also in magnetic properties. Consequently, the magnetic properties of pearlitic wires subjected to high drawing strain, featuring the smallest microstructural sizes, are of large interest.
The following questions are going to be answered: Does the coercivity further decrease for very high applied strains? Does the random anisotropy model explain the decreasing coercivities with increasing drawing strains, or can the explanation also be found in other microstructural changes?
Materials and Methods
The hypereutectoid pearlitic steel containing 0.98 wt.% C was initially provided in the shape of a wire with a diameter d_start of 540 µm and an average grain size of 23 µm, measured by electron backscatter diffraction. Upon severe wire drawing, thin wires with diameters of a few tens of micrometres were produced. Besides the non-deformed state (ε = 0 for d_start = 540 µm), thin wires experiencing true drawing strains of 5.42 (d_finish = 36 µm) and 6.23 (d_finish = 24 µm) were investigated with regard to magnetic properties such as coercivity and saturation magnetization.
A superconducting quantum interference device (SQUID, Quantum Design MPMS XL7), a very sensitive magnetometer for detecting tiny changes of magnetic moments in magnetic fields, was used to measure the volume saturation magnetization at very high fields up to 7 T and the variation of coercivity with temperature. To check reproducibility, the saturation magnetization of two wires of 36 µm thickness was determined. SQUID measurements yield the total magnetic moment of the whole sample volume, so for an accurate determination of the saturation magnetization a normalization with respect to the sample volume has to be made. For the thickest wire, weighing on a conventional laboratory scale (Sartorius Secura225D-1S, Sartorius, Göttingen, Germany) together with the known mass density of pearlite (7845 kg m^-3 [10]) provides sufficient accuracy. However, it is more difficult to determine the volume of short sections of very thin wires (24 and 36 µm). Their sample volume was determined with a confocal laser scanning microscope (Olympus LEXT OLS 4100, Olympus, Tokyo, Japan), which scans along the complete sample and measures the sample height at each point. Under the assumption of cylindricity, the surface, and thus the volume, of the wire can be precisely reconstructed. As an example, a small section is shown in Figure 1c. The surface profiles of the upper halves of the wires were fitted by a sequence of circles of varying diameter to determine the wire diameter as a function of length. By integrating along the wire axis, the probed sample volume is obtained to a higher accuracy than by only using the nominal diameter and the length of the wire. To give an idea of the probed sample sizes, the volume of the thinnest wire (see Figure 1d) was determined to be 8.9 × 10^-4 mm^3.
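A minimal numerical sketch of this volume reconstruction, assuming a hypothetical diameter profile d(z) in place of the actual circle fits to the laser-scan data:

```python
import numpy as np

# Hypothetical diameter profile d(z) along the wire axis (µm vs. mm),
# standing in for the circle fits to the laser-scanned surface profile.
z = np.linspace(0.0, 2.0, 201)              # axial position, mm
d_um = 24.0 + 0.5 * np.sin(2 * np.pi * z)   # diameter wobble around 24 µm

# Volume = integral of pi * (d/2)^2 dz, assuming local cylindricity.
area_mm2 = np.pi * (d_um * 1e-3 / 2.0) ** 2   # cross-section in mm^2
volume_mm3 = np.trapz(area_mm2, z)

print(f"{volume_mm3:.2e} mm^3")  # ~9e-4 mm^3, same order as quoted in the text
```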
This extremely small sample volume, albeit of ferromagnetic nature, generates a very small SQUID signal. As the SQUID should detect only the tiny magnetic moment of the wire, it is necessary to compensate any predominant and inhomogeneous background, e.g., from the sample holder on which the sample is mounted. Following the idea of Topolovec et al., long adhesive tapes were used to fix the sample within the SQUID [11]. These tapes extend beyond the scan range of the second-order gradiometer coils of the SQUID flux transformer. Thus, the magnetic background signal is cancelled and only the signal of the localized wire sample remains.
The room temperature coercivities of the wires were determined using a vibrating sample magnetometer (VSM, LakeShore 7404, LakeShore Cryotronics, Westerville, OH, USA). Due to the soft magnetic nature of the material, these measurements include a correction for the remanence of the pole pieces, using a paramagnetic Pd standard measured under the same conditions. For all magnetic measurements (SQUID and VSM), the wires were brought into the magnetic field with the long axis of the wire, a few millimetres in length, aligned with the field. Due to the large aspect ratio (specimen length/diameter), even for the thickest wire, the influence of shape anisotropy [12] can be considered small and almost identical for the measured magnetic properties.
Saturation Magnetization
Combining the measured volume and the magnetic moment from SQUID measurements, the volume saturation magnetization can be calculated and the results for all investigated wires are presented in Figure 2. The large increase in magnetization after wire drawing can be explained by the dissolution of cementite.
In Figure 2, it can be seen that the saturation magnetization of all thin wires is markedly higher in comparison to the pristine one (540 µm). The magnetization at the highest applied magnetic field of 7 T is about 1.75 × 10^6 A m^-1 for two of the thin wires, while it is slightly larger for one of the two 36-µm wires. The difference, however, can be regarded as a minute one, since an uncertainty of only ~2% in the wire's diameter could explain the deviation. In contrast, a much lower value of (1.57 ± 0.01) × 10^6 A m^-1 was found for the cementite-containing, non-deformed wire.
Figure 2. Volume saturation magnetization of all investigated wires, measured at applied fields ≥ 2 T. To visualize the repeatability of the measurements, the x-values are slightly shifted for repeated measurements. The error bars only consider the uncertainties of the SQUID measurement, not that of the calculated sample volume. The values of bulk bcc Fe and pearlite [15-17] are marked with dashed lines.
For comparison, the value of 1.71 × 10^6 A m^-1 (2.15 T [15]) for pure body-centered cubic (bcc) Fe lies somewhat below all of the thin wires' magnetization values. Following the idea of Gorkunov et al. [18], one explanation for the increased values of the deformed wires, even in comparison to pure Fe, is an increased magnetic moment of Fe atoms with C in supersaturated solid solution. Medvedeva et al. [19] found a magnetic moment of Fe atoms of 2.35 to 2.45 µB when including 1 C atom in a matrix of 16 Fe atoms. This calculated magnetic moment is considerably higher than the value for pure bcc Fe (2.17 µB [15]). Cadeville et al. [20] found an increase of 0.02 µB per at% C for splat-quenched Fe-C with C in supersaturated solid solution. Considering this increase of 0.02 µB per at% C and assuming all available C atoms of the wire to be involved, the expected saturation magnetization would be close to 1.74 × 10^6 A m^-1, which coincides very well with the measured values.
In contrast to the thin wires, the saturation magnetization of the thick wire agrees with the value of pure pearlite. Assuming that 0.77 wt.% C (=3.5 at%) is involved in Fe3C formation, 10.5 at% of the Fe is included in the cementite and the rest of the Fe forms the ferritic phase. Cementite is ferromagnetic at room temperature. Using the room temperature saturation magnetization of cementite, ~62 emu g^-1 [16,17] or 0.48 × 10^6 A m^-1, together with the above-mentioned value for pure Fe and the individual contents of Fe and Fe3C, a saturation magnetization of 1.55 × 10^6 A m^-1 for pearlitic material can be calculated using a simple rule of mixture. The difference between the calculated and measured values is below 2% (see Figure 2).
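As a quick arithmetic check of this rule-of-mixture estimate, here is a sketch using only the values quoted in the text; treating the atomic fractions as mixing weights is an approximation.

```python
# Rule-of-mixture estimate of the saturation magnetization of pearlite,
# using only values quoted in the text.
M_fe = 1.71e6        # A/m, pure bcc Fe
M_fe3c = 0.48e6      # A/m, cementite at room temperature

# Atomic balance from the text: 3.5 at% C binds 10.5 at% Fe in Fe3C,
# i.e. 14 at% of all atoms sit in cementite, the rest in ferrite.
f_cementite = 0.035 + 0.105
f_ferrite = 1.0 - f_cementite

# Approximation: atomic fractions used directly as mixing weights.
M_pearlite = f_ferrite * M_fe + f_cementite * M_fe3c
print(f"{M_pearlite:.2e} A/m")  # ~1.54e6 A/m, close to the quoted 1.55e6 A/m
```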
An advantage of pearlitic steels is their chemical composition. An important aspect in the formation of soft magnetic materials is maintaining a high saturation magnetization J_s, which is most easily achieved with high Fe contents. When using the bottom-up process, several other, very often non-magnetic, elements have to be used to accomplish the formation of nanocrystalline alloys from, e.g., amorphous melt-spun ribbons. Suzuki et al. [21] found a correlation between saturation magnetization and additive content, amounting to a change of saturation magnetization of -0.05 T per wt.% of additives, independent of the type of additive. Here it was demonstrated that pearlitic steel even increases its saturation magnetization upon severe deformation, with the very small amount of C (in wt.%) being of no relevance with regard to the above-mentioned decrease of J_s with additive content.
In short, while the thick, non-deformed wire shows the expected pearlitic behaviour in saturation magnetization, all thin wires show values near or even above pure Fe. The slight surplus in saturation magnetization is possibly due to C in supersaturated solid solution, which is typically present in the severely deformed wires [6,7].
Coercivity
For an ideal soft magnetic material, the coercivity should tend towards zero, easing the magnetization process and reducing the hysteretic heat losses in certain applications. The coercivity does not necessarily increase with decreasing grain size, as it does for "coarse"-grained (D > ~100 nm) materials [22]. For very small grains from the so-called random anisotropy regime, where L_ex covers many grains, coercivity collapses with D^6 [8,9]. The relevant length scale, L_ex, is derived from a competition between the magnetic exchange energy and the magneto-crystalline anisotropy energy K_1, and is given by

$$L_{ex} = \sqrt{\frac{A}{\beta K_1}} \qquad (1)$$

with A being the exchange stiffness constant (Fe: 20.7 pJ/m [15]) and K_1 = 45,000-46,800 J/m^3 [23-25]. β takes into account the symmetry of the crystal, considering the distribution of the random anisotropy axes, which results in β = 0.4 for cubic crystals [26]. The opposing effects of D on the coercivity for D < L_ex and D > L_ex are described by Equations (2) and (3) [8]:

$$H_c = p_c \, \frac{\langle K \rangle}{J_s} = p_c \, \frac{\beta K_1^4 D^6}{J_s A^3} \quad (D < L_{ex}) \qquad (2)$$

$$H_c = p_c \, \frac{\sqrt{A K_1}}{J_s D} \quad (D > L_{ex}) \qquad (3)$$

Here J_s is the magnetic saturation polarization in T (Fe: 2.15 T [15]), p_c is a dimensionless pre-factor close to unity [8], and the average anisotropy constant is $\langle K \rangle = \beta K_1^4 D^6 A^{-3}$. Using L_ex = D, the threshold grain size for Fe, giving the highest H_c based on the above equations, is found to be close to 34 nm. Applying the given values for pure Fe to the above equations, a grain size of 10 nm is needed to achieve a soft material with a coercivity of 100 A m^-1, and about 5 nm for a very soft material with 1 A m^-1.
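A small numerical sketch of Equations (1)-(3), using the material constants quoted above for pure bcc Fe; setting p_c = 1 is an assumption consistent with "close to unity".

```python
import numpy as np

# Material constants for pure bcc Fe, as quoted in the text.
A = 20.7e-12      # exchange stiffness, J/m
K1 = 45_000.0     # magneto-crystalline anisotropy, J/m^3
Js = 2.15         # saturation polarization, T
beta = 0.4        # symmetry factor for cubic crystals
pc = 1.0          # dimensionless pre-factor, assumed ~1

L_ex = np.sqrt(A / (beta * K1))        # Eq. (1): exchange length
print(f"L_ex = {L_ex * 1e9:.0f} nm")   # ~34 nm threshold grain size

def coercivity(D):
    """Herzer-model coercivity (A/m) for grain size D (m), Eqs. (2)/(3)."""
    if D < L_ex:
        return pc * beta * K1**4 * D**6 / (Js * A**3)   # random anisotropy regime
    return pc * np.sqrt(A * K1) / (Js * D)              # 1/D regime

for D_nm in (5, 10, 34, 100):
    print(f"D = {D_nm:3d} nm -> H_c = {coercivity(D_nm * 1e-9):.3g} A/m")
```

For D = 10 nm this gives H_c on the order of 100 A m^-1 and for D = 5 nm on the order of 1 A m^-1, reproducing the figures quoted in the text.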
Focusing on pearlitic steel, References [18,27-29] describe the interplay of coercivity and plastic deformation. In [18], the changes in the magnetic properties of patented steels containing 0.24 and 0.70 wt.% C are discussed. An increasing saturation magnetization with increasing true strain, due to cementite dissolution, was found. The coercivity increases at small drawing strains, followed by a constant regime for the material containing 0.24 wt.% C (H_c,const ≈ 1200 A m^-1). Contrarily, for the 0.70 wt.% material a peak at low strains (ε ≈ 0.4, H_c ≈ 1600 A m^-1) is followed by a drop in coercivity, reaching about 1200 A m^-1 at a true strain of ~3.5. The decreasing coercivity is explained by a combination of the volume reduction of cementite as well as rotation and fragmentation of the cementite lamellae. These arguments are complemented by Ul'yanov et al., who found a low-coercivity state (H_c ≈ 8000 A m^-1) of cementite after strong deformation by ball milling [27].
The measured coercivities H_c of the thin wires, as determined by VSM, are smaller (24 µm: 520 A m^-1; 36 µm: 610 A m^-1) than that of the thick wire (540 µm: 1510 A m^-1). The coercivity of the pristine state is in agreement with results for both a cold-drawn 0.8% C steel "music wire" (H_c ≈ 1590 A m^-1) [30] and a steel containing slightly less C (0.7%), for which Gorkunov et al. found H_c ≈ 1350 A m^-1 [18].
In a previous work by our group, a strong, beneficial influence of a low-temperature annealing treatment (150 °C, 1 h) on the coercivity of severely plastically deformed materials was found [31]. For different Cu-based materials with dilute Fe contents, the drop in coercivity upon annealing was up to 76%. Thus, there is a chance to further enhance the soft magnetic properties by annealing, but this treatment has to be performed carefully, since according to Equation (2) a slight increase in grain size already has a huge effect on coercivity. In [32], it was stated that after annealing at 150 °C for 30 min, the strongly deformed wires retain their strength and consequently their refined microstructure. This treatment was chosen for the 36-µm wire. The VSM measurement shows that the coercivity drops by 76 A m^-1, or 12%, due to this annealing treatment.
If one strictly connects the grain size with coercivity, the magnetic measurements would indicate that there is hardly any further refinement of the grain size when drawing the wire from 36 to 24 µm in diameter, albeit the reduction in diameter constitutes a substantial degree of further deformation. Along with the additional strain imparted, Li et al. observed a further strengthening of the material when increasing the strain from 5.1 (σ_max ≈ 6.2 GPa) to 6.52 (σ_max ≈ 6.8 GPa) [7]. If one were to consider that the strength of the material can only be influenced by the grain size, a contradiction would arise between the magnetic measurements and the mechanical results. However, it is frequently found in the literature that the strength is described as a combination of several contributing microstructural features, such as dislocation density and the extent of carbon dissolution, e.g., as shown by Zhang et al. for wires at drawing strains up to 5.4 [33]. The magnetic measurements support this approach to explaining the strength of supersaturated nanostructures.
Given the findings of microstructurally refined and magnetically soft materials, Equation (2) is applied to draw the graph in Figure 3. It demonstrates that the combination of the subgrain sizes found in the literature [7] and the measured coercivities of the thin wires is close to the result of Equation (2). As already mentioned, Li et al. found a size below 10 nm for equiaxed subgrains in the transverse cross-section of the wire subjected to ε = 6.52. It is important to note that the grain size is anisotropic, with the grains being elongated along the drawing direction [7]. The larger effective grain size might be the explanation for the deviation of the measured data from the trend line.

Figure 3. Coercivity as a function of grain size according to Equation (2) and the parameters for pure bcc Fe mentioned in the text, compared with the coercivity of the thin wires. For the thinnest wire, the subgrain size of 10 nm [7] is taken as the grain size; for the 36-µm wire, the grain size is enlarged by 2 × ln(36/24) to 12.3 nm. The 540-µm wire, with a grain size of 23 µm, does not follow this trend.
Regarding the thin wires, there is a small tetragonal distortion (c/a < 1.01) of the bcc crystal present in the severe plastically deformed material [34,35] and furthermore, it was shown that there is a clear difference in the hysteretic behaviour between bcc and tetragonally distorted Fe nanoparticles (c/a = 1.23) [36]. However, due to the much smaller tetragonal distortion found in pearlitic wires, the usage of the magnetic parameters for bcc Fe is justified.
Another proof of the dissolution of cementite to a large extent in the severely drawn wires is provided by temperature-dependent measurements of coercivity. Using the SQUID, the coercivity as a function of temperature was measured for a 36-µm wire, which provides, besides the saturation magnetization, another strong hint for the predominance of the bcc Fe phase rather than the cementite phase. This measurement involves recoil loops: going to the highest fields, saturating the sample, and then accurately measuring the coercivity by applying small fields. These loops are repeated in temperature steps of 20 K. No Pd standard was available at that time for SQUID measurements; thus, the measurements of the coercivity always include a systematic offset due to the residual magnetic fields of the superconducting magnet [37]. To approximate the real values, the coercivities of both measurements (VSM, SQUID) at the highest temperature are subtracted to calculate the offset, which is used to compensate for the systematic offset in the SQUID measurements. When applying the same sequence for all measurements, it can be safely assumed that this systematic error (i.e., residual fields) is constant and the measurement can be corrected using the coercivity determined with the VSM. Considering the grain size of the 36-µm wire, assumed to be 12.3 nm (see Figure 3), temperature-dependent coercivities are calculated based on temperature-dependent magneto-crystalline anisotropies K_1 taken from the literature [23-25]. A comparison of the calculated and measured values is given in Figure 4. There is an offset of the measured data compared to the literature-based data, which is easily explained by the unknown actual grain size and its strong influence (D^6) on coercivity. However, the trend in the measured coercivities perfectly follows the one from the literature. While the magneto-crystalline anisotropy only drops by a bit more than 10% for bcc Fe [23-25], the drop for Fe3C would be much larger, about 50% [38]. Taking these results and the values of the volume saturation magnetization, the deformed wires behave magnetically like bcc Fe.

Figure 4. Temperature dependence of the measured coercivity compared with values calculated from literature anisotropy data [23-25]. The slightly decreased coercivity of the stress-relieved 36-µm wire is also included.
The slightly smaller value of the coercivity after the annealing treatment at 150 °C for 30 min is also shown in Figure 4. The decrease in coercivity can be explained by a reduction of the residual stresses imparted by the drawing process. Residual stresses couple with the magnetostriction of the material and lead to magneto-elastic anisotropy, effectively increasing the magnetic anisotropy. While the pure bcc Fe phase was of advantage when considering the achievable values of saturation magnetization, it is a disadvantage from a magnetostrictive point of view. While, e.g., Si can be added (FeCuNbSiB [9]) to reach zero magnetostriction, pure Fe is linked to a non-zero magnetostriction value.
Another aspect of the discussion of the coercivity of severely deformed wires is the previously reported trend of decreasing coercivity with increasing applied strain (up to ~3.5) [18]. Therein, a pearlitic steel containing 0.7% C was subjected to wire drawing and its room-temperature coercivity was determined. Figure 5 shows a comparison of the results from [18], together with a linear fit, and the measurements described in this work. The trend of decreasing coercivity with increasing strain is clear; it is explained in [18] by the volume reduction of the cementite and the fragmentation and rotation of the remaining cementite lamellae, in combination with a low-coercivity state of cementite after severe deformation [27]. The reasons for the small deviation of the linear extrapolation from the measured coercivities of the 24- and 36-µm wires are manifold. The applicability of the linear extrapolation up to these large degrees of deformation is hypothetical, the materials slightly differ in C content, and they experienced different drawing steps towards the final accumulated strain. Furthermore, there is a difference in the distance of the cementite lamellae in the non-deformed state. While this distance is 67 nm for the patented wire with 540-µm diameter [7], it is slightly larger (70-90 nm) for the initial state described in [18]. Already starting with a slightly smaller lamellar spacing could lead to a diminution of coercivity at higher strains.

Figure 5. Coercivity as a function of true drawing strain, based on data from [18] for a steel containing slightly less C, compared with the results for the even thinner wires presented herein. The linear fit neglects the point at zero strain.
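A minimal sketch of the kind of linear extrapolation discussed above, using only the two approximate points quoted earlier from [18]; the actual fit in Figure 5 uses the full data set of [18], so this is illustrative only.

```python
import numpy as np

# Two approximate points quoted in the text from Gorkunov et al. [18].
strain = np.array([0.4, 3.5])
Hc = np.array([1600.0, 1200.0])   # A/m

slope, intercept = np.polyfit(strain, Hc, 1)

# Extrapolate to the drawing strains of the thin wires studied here
# and compare with the measured coercivities.
for eps, measured in [(5.42, 610.0), (6.23, 520.0)]:
    predicted = slope * eps + intercept
    print(f"eps = {eps}: extrapolated {predicted:.0f} A/m, measured {measured:.0f} A/m")
```

The extrapolated values overshoot the measured ones, consistent with the deviations discussed in the text.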
When comparing the two different approaches for describing the decreasing coercivity with increasing strain, it has to be mentioned that wire-drawn materials feature a very pronounced texture. In [39], the pronounced fiber texture of bcc W was presented; the texture prevails after annealing treatments, even in the recrystallized state. The development of the same texture was found for pearlitic steels experiencing strains smaller than 1.65 [40]. In conjunction with a pronounced texture, the crystallographic orientation of a grain correlates with that of adjacent grains or subgrains, counteracting the random anisotropy model, which relies by definition on a random orientation of crystallites. Summarizing, there are two ideas that describe the observed trend in coercivity: the first is based on the random anisotropy model, while the second is based on microstructural changes, especially those of the cementite. Given the presented arguments, the one relying on microstructural changes of the cementite phase is more likely to be applicable.
Conclusions
Pearlitic wires (ferritic wires in different wire-drawn states) were subjected to magnetic measurements, focusing on the volume saturation magnetization and coercivity.
According to the indirect phase analysis based on the evaluation of the saturation magnetization data, the drawn wires with diameters of 24 and 36 µm behave, from a magnetic point of view, like pure Fe, with the cementite being dissolved. This finding is supported by an independent measurement of the temperature-dependent coercivity, which also fits the behavior of pure Fe.
The measured values of the volume saturation magnetization of the thin wires, which experienced a drawing strain larger than 5.4, are slightly higher than can even be expected for pure Fe. This can be attributed to Fe being supersaturated with C, which enlarges the magnetic moment of the Fe atoms.
It was demonstrated that an annealing treatment has a positive, magnetically softening effect on the wire. Annealing at 150 °C for 30 min slightly reduces the coercivity, most likely due to a reduction of the residual stresses remaining from the severe deformation process.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ongoing research activities.
Introducing the “Bone-Screw-Fastener” for improved screw fixation in orthopedic surgery: a revolutionary paradigm shift?
Background Conventional screws used for fracture fixation in orthopedic surgery continue to rely on the historic buttress thread design. While buttress screws generally provide solid resistance against unidirectional axial loading forces, their design suffers from several limitations, as the buttress thread does not adequately resist multiaxial forces. Furthermore, the buttress screw is prone to stripping at the bone-screw interface and can cause microfracturing of the surrounding bone due to its thread design. Standard buttress screws are therefore at risk of adverse postoperative outcomes secondary to failure of bone fixation. A new patented Bone-Screw-Fastener was recently designed that is based on an interlocking thread technology. This new fastener provides distributive forces from the threads onto the bone and therefore resists loads in multiple directions. The underlying concept is represented by a "female thread" bone cutting technology designed to maximize bone volume, preserve bone architecture, and create a circumferential interlocking interface between the implant and bone that protects the thread from stripping and from failing under multiaxial forces. Presentation of the hypothesis We hypothesize that the new Bone-Screw-Fastener overcomes the classic shortcomings of conventional orthopedic screws with buttress threads by ease of insertion, improved bone preservation, and increased resistance to off-axis multidirectional loading forces and to stripping of the threads. These advanced biomechanical and biological properties can potentially mitigate the classic limitations of conventional buttress screws by providing better resistance to implant failure under physiological loads, preserving bone biology, and thus potentially improving patient outcomes in the future. Testing the hypothesis The presumed superiority of the new fastener will require testing and validation in well-designed prospective multicenter randomized controlled trials (RCTs), using the conventional buttress screw as control. Implications of the hypothesis Once validated in multicenter RCTs, the new Bone-Screw-Fastener may drive a change in paradigm with regard to its innovative biomechanical principles and biologic bone preservation for surgical applications requiring screw fixation.
Background: a brief history of bone screw design
Archimedes of Syracuse (287-212 BC) is considered the inventor of the first screw in ancient times [1]. His invention was initially designed to remove bilge water from large ships using a water pump based on a revolving screw-shaped blade inside a cylinder [1]. Archimedes' screw principle is still in use today for pumping water and transporting coal or grain. Interestingly, the introduction of industrial screws was delayed by a thousand years after Archimedes' invention due to technical challenges in screw manufacturing [2]. In 1850, screws were applied for the first time in orthopedic surgery by the French surgeons Cucel and Rigaud, who used two wood screws and a leather strap to fixate an olecranon fracture [3]. In the early 20th century, William O'Neill Sherman (1880-1979) was a pioneer of internal fracture fixation who modified conventional screw designs for orthopedic applications [4].
Of note, Sherman's screw design remained the "gold standard" in orthopedics until the introduction of the AO screw half a century later [5]. Stainless steel was introduced in the 1920s and allowed better biocompatibility of bone screws [5]. In the 1940s, the Belgian surgeon Robert Danis (the "father of modern osteosynthesis") further modified screw designs to applications specific to human bone by implementing the following three technical features [6]:
1. A change of the ratio of exterior screw diameter to core diameter from 4:3 in industrial metal screws to 3:2 in orthopedic screws;
2. A reduction of the thread surface area to 1/6, based on the notion that bone strength is about 1/6 of the strength of metal;
3. A change from the classic industrial V-shaped thread design to buttress threads (Fig. 1), based on the postulated increased pull-out resistance of buttress threads.
Robert Danis' pioneering work on internal fixation, including improved screw design and plate technology, preceded the foundation of the AO ("Arbeitsgemeinschaft für Osteosynthesefragen") in 1958 in Switzerland [6,7]. One of the fundamental subsequent achievements of the AO was the global standardization of surgical principles and techniques, and the introduction of a uniform design for orthopedic implants and instruments [5].
Principles of screw threads
Screw threads are designed to optimize initial contact and surface area, dissipate and distribute stress forces at the screw-bone interface, and increase pull-out resistance to load [8]. The basic parameters of screw thread geometry include thread shape, face angle, pitch, depth, and width. Thread pitch, depth and width are highly variable among the orthopedic screws available on the market. For example, cancellous screws have an increased thread depth compared to cortical screws, with the intent of increasing the surface area for improved screw purchase in lower-quality bone [9]. Thread pitch refers to the linear distance travelled by the screw after one full turn; in implants of equal length, a smaller pitch implies a higher number of threads (see the sketch after this paragraph). For example, locking head screws have a lower pitch than conventional cortical screws. Among the multiplicity of thread shapes, the buttress thread design remains the historic paradigm for the shape of current orthopedic screws. Regarding the face angle of buttress screws, the load-bearing face is typically perpendicular (or inclined up to 5-7°) to the axis of the screw, whereas the other face of the thread is angled at around 45° (Fig. 1c). The popularity of buttress threads in current orthopedic screw designs reflects their advantage in handling high axial thrust in one direction, which leads to increased shear strength and improved unidirectional pull-out resistance compared to other conventional thread shapes [10-12]. However, orthopedic screws are typically not loaded purely axially by physiological motion in vivo; thus, standard buttress screws remain at significant risk of failure when exposed to multidirectional loading forces [13]. In an attempt to address the physiological multiaxial loading environment, newer-generation locking plates have been able to reduce the risk of implant failure, particularly in osteoporotic bone [14]. Locked plating technology relies on the benefit of a fixed-angle construct that does not depend on friction and compression forces between implant and bone. However, locking head screws have been shown to have their own set of shortcomings [14], including the stiffness of plate-screw constructs and increased cost; hence research continues towards more effective and equitable, cost-conscientious solutions to failures at the bone-implant interface.
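To make the pitch relationship concrete, here is a toy calculation; the values are hypothetical illustrations, not the specifications of any particular implant.

```python
# Illustrative only: relationship between thread pitch, thread count,
# and axial advance per turn. Values are hypothetical, not implant specs.
def thread_count(threaded_length_mm, pitch_mm):
    """Number of threads along a threaded length for a given pitch."""
    return threaded_length_mm / pitch_mm

def advance(pitch_mm, turns):
    """Axial distance travelled after a number of full turns."""
    return pitch_mm * turns

# For a 20-mm threaded length, a 1.25-mm pitch gives 16 threads,
# while a smaller 1.0-mm pitch (locking-style) gives 20 threads.
print(thread_count(20, 1.25), thread_count(20, 1.0))
print(advance(1.25, 4))  # 5 mm of advance after four full turns
```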
In essence, to the present day, the historic buttress screw continues to represent the main pillar of orthopedic screw design, despite significant biomechanical shortcomings associated with high failure rates [13,15].

Fig. 1 The historically prevalent screw thread shapes include the V-shaped thread (a), the square thread (b), and the buttress thread (c). The buttress thread represents the current paradigm of screw design in orthopedics due to improved unidirectional pull-out resistance in bone
Limitations of conventional buttress screws
Most of the currently applied screws in orthopedic surgery utilize a form of the buttress thread [5,6]. However, the buttress thread suffers from several intrinsic limitations. The screws are hard to start within the bone interface. Buttress screws can miss the far cortex along the projected trajectory during insertion, which may lead to stripping of the near cortex. Even with adequate insertion, during the final screw tightening the applied axial load and torque may overcome the bone resistance, resulting in the buttress screw stripping out of both cortices. In addition, as the buttress thread induces a radial force perpendicular to the screw's long axis, it increases the probability of creating a stress riser or an incidental fracture in the adjacent bone bridge. Traditional buttress threads are designed to resist unidirectional axial loads only. However, the physiological in vivo loading on orthopedic implants is known to be multiaxial and can, therefore, result in loosening. One manifestation of buttress thread failure is screw loosening and "toggling", whereby the screw erodes through the bone and enlarges the hole within which it resides, which may lead to failure of fixation. In clinical application, orthopedic screws must resist the dynamic forces generated during patients' daily activities. Unfortunately, current buttress screws are not designed to resist multidirectional forces, which increases the risk of postoperative complications, including screw loosening and failure of fixation. Since the modifiable variables of buttress screw designs (thread pitch, depth, width and face angle) are interrelated, attempts to improve screw retention will increase the amount of friction and insertion torque, resulting in heat generation and potential heat necrosis of the adjacent bone. Excess heat during screw insertion will compromise the screw's purchase and the retention of the thread interface at the site of necrotic bone. The conventional buttress cutting mechanism generally does not provide precise thread forming and therefore represents more of a "rough" cutting tool that leads to microfracturing of the bone around the threads. This bone debris accumulates along the thread teeth and increases insertion torque and friction, which generates additional heat. The debris also makes the screw harder to insert and provides a poor interface between screw and bone. The "rough" or imprecise cutting mechanism of buttress threads is thought to represent one of the underlying root causes of bone-implant failures resulting from high compressive forces and increased insertion torque. In summary, the essential shortcomings of buttress threads include the risk of stripping, screw loosening, induction of stress risers, bone microfracturing and heat necrosis, with subsequent failure of fixation and the risk of creating fracture nonunions and malunions. These fundamental problems have remained unaddressed in past efforts aimed at improving the design of modern orthopedic screws, due to the ongoing reliance on the conventional historic buttress thread concept.
The hypothesis
We hypothesize that the new Bone-Screw-Fastener overcomes the classic shortcomings of conventional orthopedic screws with buttress threads by ease of insertion, improved bone preservation, increased pull-out resistance to multidirectional loading forces and resistance to stripping of the threads. These advanced biomechanical and biological properties may mitigate the classic limitations of conventional buttress screws by providing better resistance to implant failure under physiological loads, preserving bone biology, and thus, improving patient outcomes in the future.
Presumptive advantages of the new fastener design
A new Bone-Screw-Fastener was designed based on an interlocking bone-implant interface technology that distributes forces from the implant onto the bone and subsequently resists loads in all directions. The new fastener consists of a "female thread" bone cutting technology designed to maximize bone volume, preserve bone architecture, and create a circumferential interlocking interface between the implant and bone, similar to a "nut-and-bolt" technology. The following intuitive advantages of the Bone-Screw-Fastener support the hypothesis of superiority to conventional orthopedic screws with buttress threads:

1. The new interlocking thread pattern is designed to resist multidirectional forces and bending moments to limit toggling of the implant and minimize radial forces. These properties provide improved resistance to failure and decreased risk of creating stress risers and iatrogenic fractures in adjacent bone.

2. The interlocking thread pattern is designed to allow for higher finishing torque values compared to implants with buttress threads, and to resist screw stripping, even in lesser quality bone.

3. The new bone cutting mechanism is designed to curl the bone chips away from the cutting edges to create a solid bone-implant interface free of debris and to prevent iatrogenic bone destruction during screw insertion.
As demonstrated in the schematic drawing in Fig. 2, the various reference points operate in pairs such that their intersections define centering points for load distribution and force equalization (e.g., points 19/20 and 21/22, respectively). Furthermore, an axial loading force #1 applied to the fastener drives centering point 38 onto point 39, thus embedding the screw into the bone. Likewise, when force #2 is applied, centering points 36 and 37 are embedded onto each other, preventing movement upon application of a horizontal force #3. The new fastener is designed to provide greater retention of bone structure, and the bone tooth volume (#25 in Fig. 2) can be optimized to adapt to varying physiological conditions, specific anatomic locations and differing bone quality. The fastener threads allow for improved distribution of forces across the multiple thread faces under axial loading (Fig. 3b), compared to a standard screw with buttress threads (Fig. 3a). This biomechanical property particularly applies to off-axis loading scenarios, where force distribution is significantly improved in the fastener (Fig. 3d) compared to the conventional buttress screw (Fig. 3c).
Finally, an additional relevant benefit of the new Bone-Screw-Fastener is bone preservation. The fastener's new thread cutting mechanism provides a superior thread forming tool that prepares the bone for implant placement by cutting precise "female threads" into the bone. With the new interlocking thread technology, the cut bone chips curl away from the cutting edges of the fastener, and are fed forward into the predrilled pilot hole (Fig. 4a). The underlying mechanism relies on the flute being "left-handed" on a right-handed threaded fastener. In other words, as the fastener advances, the flute forces the bone chips forward into the pilot hole ahead of the screw. This cutting mechanism results in bone clearance and formation of "bone teeth" in the tissue that engages the threaded fastener (Fig. 4b). The interface between the bone and fastener is then free of the cuttings, provides "healthier" bone tissue adjacent the fastener, and prevents iatrogenic microfracturing of the bone, as seen with the use of standard buttress threads (Fig. 4b).
Clinical pilot series
The new Bone-Screw-Fastener (SMV Scientific, Austin, TX) received FDA clearance on June 23, 2015, for clinical use in 510(k) #K150981 for the following indications: "The SMV Scientific 3.5mm and 4.0mm Bone Screws are intended for fixation of fractures, osteotomies and non-unions of the clavicle, scapula, olecranon, humerus, radius, ulna, pelvis, tibia, calcaneus, femur and fibula in adults and in both children (2-12 years) and adolescents (12-21 years) in which growth plates have fused or in which growth plates will not be crossed by screw fixation".
The fastener was approved for clinical use at Denver Health, the regional academic level 1 trauma center and safety-net hospital in Colorado, in July 2015. The 3.5mm fastener was subsequently used in selected surgical applications per standard of care as a substitute for standard 3.5mm cortical buttress screws, 4.0mm cancellous bone screws, and 3.5mm locking plate constructs. The fastener was used either as an independent lag screw or positioning screw, or in conjunction with small-fragment stainless steel plates. During a time-window from August 24, 2015, until December 31, 2016, the first author (P.F.S.) performed 30 surgical procedures in 30 patients that included implantation of at least one Bone-Screw-Fastener. Surgical indications were determined by standard of care in all patients. During the initial pilot phase, until January 31, 2016, the fasteners required pre-tapping; a self-tapping version was used thereafter. Fig. 5 demonstrates a schematic drawing of the self-tapping fastener thread configuration. The retrospective analysis of this observational cohort study was approved by the Colorado Multiple Institutional Review Board (COMIRB) at the University of Colorado (Protocol # 16-0297). The study was determined to meet criteria for a full waiver of consent due to the retrospective study design. One 10-year-old patient with a displaced Salter-Harris III type ankle injury was excluded from analysis per study protocol as a minor of less than 18 years of age. The remaining 29 patients were included in the retrospective observational cohort analysis. These 29 patients had a total of 123 Bone-Screw-Fasteners implanted during the observational study time-window. Of these, 84 fasteners required pre-tapping, and 39 fasteners were self-tapping.
All fasteners used in this study were 3.5mm stainless steel implants. The first 3.5mm Bone-Screw-Fastener was implanted as part of a bimalleolar ankle fracture fixation in a 24-year-old male patient on August 24, 2015. The patient demographics, fracture classification, and respective procedures performed are shown in Table 1. All patients were followed up for a minimum of 3 months, with an average follow-up time of 10 months (±3.5 SD; range 3-15 months). There were no intraoperative or postoperative complications in the 29 patients included in this pilot series. No technical problems were noted with insertion of the fasteners, and no radiographic signs of implant loosening were noted in any of the 29 patients. All fractures healed clinically and radiographically within an average of 3 months of follow-up. Figure 6 demonstrates a representative example of a 65-year-old woman whose unstable SER-4 equivalent ankle fracture was shown to be healed in anatomic position at 6 months of follow-up after fixation by an antiglide plate construct with four Bone-Screw-Fasteners. Six patients (20.7%) required a return to the operating room for removal of symptomatic implants. Of these, 5 patients had healed ankle fractures with symptomatic plates around the distal fibula, and one patient required plate removal after temporary bridging of the acromioclavicular joint (Table 1). No technical problems were noted with removal of the 21 fasteners that were removed in these six patients at an average of 9.5 months (±3.5 SD; range 5-15 months).
Testing the hypothesis
The intuitive biomechanical and biological advantages of the new Bone-Screw-Fastener, in conjunction with the safety and feasibility for clinical application demonstrated in our early experience in 29 patients, provide the rationale for future testing of the fastener in well-designed prospective and controlled studies. Ideally, these studies should be designed as multicenter randomized controlled trials (RCTs), with the conventional buttress screw serving as the control group. The a priori null hypothesis may postulate equality between the new fastener and the conventional buttress screw in retaining surgical fixation in a specified subset of frequent fractures amenable to screw and plate fixation per standard of care. Rejection of the null hypothesis may prove superiority of the fastener based on defined primary and secondary outcome measures, including failure of fracture fixation, necessity of unplanned surgical revisions, incidence of malunions and nonunions, and patient-reported outcome metrics, using the patient-reported outcomes measurement information system (PROMIS). The methodology should include concealed allocation to treatment cohorts, intention-to-treat analyses, and compliance with the general requirements of the CONSORT statement [16].
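A rough sense of the scale such an RCT would require can be obtained from a standard two-proportion power calculation. The sketch below is illustrative only; the failure rates, alpha and power are assumed planning values, not figures from this pilot series.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Assumed planning values (not data from the pilot series):
# 10% fixation-failure rate with conventional buttress screws,
# 5% with the new fastener, two-sided alpha = 0.05, power = 0.80.
effect = proportion_effectsize(0.10, 0.05)  # Cohen's h for two proportions

n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Approximately {n_per_arm:.0f} patients per treatment arm")
```

Under these assumptions the trial would need on the order of a few hundred patients per arm, which is why a multicenter design is proposed.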
Implications of the hypothesis
Based on the intrinsic limitations and historic shortcomings of the conventional buttress screw in fracture care, proving superiority of the new patented Bone-Screw-Fastener in well-designed future RCTs may drive a paradigm change in screw technology. It is conceivable that the insights from orthopedic fracture care may be safely extrapolated to other indications, including maxillofacial surgery, dental implants, spine surgery, joint replacement surgery, and sports surgery, to name a few intuitive surgical disciplines. As our initial pilot study was purely observational and exclusively designed as a "proof of concept" safety and feasibility study, the superiority of the new fastener over standard buttress screws requires validation in future high-quality RCTs.
Prevalence and characteristics of tremor in the NARCOMS multiple sclerosis registry: a cross-sectional survey
Objectives (1) To describe the prevalence and severity of tremor in patients with multiple sclerosis (MS) registered within a large North American MS registry; (2) to provide detailed descriptions of the characteristics and severity of tremor in a subset of registrants and (3) to compare several measures of tremor severity for strength of agreement. Setting The North American Research Committee on MS (NARCOMS) registry. Participants Registrants of NARCOMS reporting mild or greater tremor severity. Outcome measures We determined the cross-sectional prevalence of tremor in the NARCOMS registry over three semiannual updates between fall 2010 and fall 2011. A subset of registrants (n=552) completed a supplemental survey providing detailed descriptions of their tremor. Outcomes included descriptive characteristics of their tremors and correlations between outcome measures to determine the strength of agreement in assessing tremor severity. Results The estimated prevalence of tremor in NARCOMS ranged from 45% to 46.8%, with severe tremor affecting 5.5–5.9% of respondents. In the subset completing the supplemental survey, respondents with mild tremor were younger at MS diagnosis and tremor onset than those with moderate or severe tremor. However, tremor severity did not differ by duration of disease or tremor. Respondents provided descriptions of tremor symptoms on the Clinical Ataxia Rating Scale, which had a moderate to good (ρ=0.595) correlation with the Tremor Related Activities of Daily Living (TRADL) scale. Objectively scored Archimedes’ spirals had a weaker (ρ=0.358) correlation with the TRADL. Rates of unemployment, disability and symptomatic medication use increased with tremor severity, but were high even among those with mild tremor. Conclusions Tremor is common among NARCOMS registrants and severely disabling for some. Both ADL-based and symptom-descriptive measures of tremor severity can be used to stratify patients.
BACKGROUND
Associated with multiple sclerosis (MS) since the earliest descriptions of the disease, tremor is noteworthy for its strong association with disability 1 and resistance to symptomatic treatment. 2 The exact prevalence of tremor in the MS population is unknown; however, estimates from two prior studies suggest a prevalence range of 25-60%, with severe tremor in 3-15% of patients. 3 4 Yet, the modest size of these studies (100 and 200 patients) and the populations they describe (a clinic-based cohort, and a community-based cohort with overall mild disease course 5) allow for the possibility that their prevalence estimates might not generalise to the wider MS population.
A major challenge to accurately estimating the prevalence and severity of MS tremor arises from the difficulty in obtaining a large, representative cross-section of the MS population. The North American Research Committee on MS (NARCOMS) registry is a unique resource, containing over 36 000 participants with MS, of whom more than 13 000 submit semiannual updates on various aspects of their disease, 6 including the NARCOMS Tremor and Coordination Scale (TACS), which has been validated against clinically assessed measures of tremor. 7 The size of the NARCOMS registry allows for the identification of large subsets of patients with less common manifestations, such as severe tremor. This work represents the largest descriptive study of tremor in MS to date, estimating its prevalence and severity in the NARCOMS registry and providing a detailed, patient-reported summary of the effects of tremor on daily function and quality of life (QOL). While there is no gold standard for the measurement of tremor severity in patients with MS, better normative data about how tremor severity relates to QOL in patients with MS could be applied in trial design for potential therapeutic interventions.

Strengths and limitations of this study
▪ This cohort represents the largest descriptive study of multiple sclerosis (MS)-associated tremor to date.
▪ The data presented are mostly derived from patient-reported scales, which have been used in previous descriptive studies of MS tremor.
▪ Results of the study could not be objectively validated; however, findings are similar to previous descriptive studies which did include objective validation.
▪ These data reinforce previous smaller studies describing the disproportionate effect of tremor on employment, function and overall quality of life for patients affected with MS.
METHODS
Tremor prevalence
NARCOMS distributes semiannual surveys to all of its registered members asking a wide range of MS-related and QOL-related questions. Tremor and ataxia are assessed in these surveys by the TACS, which has been shown to have construct and criterion validity. 7 On the scale, respondents score their tremor severity from 0 to 5, indicating absent, minimal, mild, moderate, severe or totally disabling tremor. Minimal indicates the presence of tremor without functional impairment, while the other levels indicate increasing impact of tremor on function.
In order to estimate tremor prevalence within NARCOMS, any TACS score of 1 or greater was counted towards the tremor prevalence. To approximate the methods used in prior prevalence studies of MS tremor, the prevalence of severe tremor (including totally disabling tremor) was also determined.
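In code, these prevalence estimates reduce to simple threshold counts over the TACS scores. The sketch below assumes a hypothetical table of survey responses with a `tacs` column; it is illustrative only, not the actual NARCOMS data pipeline.

```python
import pandas as pd

# Hypothetical responses from one semiannual update; 'tacs' holds the
# self-reported Tremor and Coordination Scale score (0-5).
updates = pd.DataFrame({"tacs": [0, 1, 2, 0, 3, 5, 4, 2, 0, 1]})

any_tremor = (updates["tacs"] >= 1).mean()   # TACS > 0 counts toward prevalence
functional = (updates["tacs"] >= 2).mean()   # mild or worse (some impact on function)
severe = (updates["tacs"] >= 4).mean()       # severe and totally disabling combined

print(f"any tremor: {any_tremor:.1%}, functional: {functional:.1%}, severe: {severe:.1%}")
```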
Surveyed cohort
With prior approval from the local IRB, we designed a supplemental printed survey that was mailed to a subset of NARCOMS registrants who were using MS disease modifying drugs (DMDs) and who indicated at least a mild effect of tremor on function on a NARCOMS update. While some demographic information and details of respondents' MS was extracted from the master NARCOMS data set, the survey requested additional information specific to the experience of tremor, including handedness, distribution of tremor in the body, duration and family history of tremor, use of symptomatic medications to suppress tremor, and several scales designed to assess tremor severity (table 1). As part of the tremor severity assessments, respondents also drew Archimedes' spirals with each hand using a pen provided with the survey. The pens were provided to minimise variability between respondents that might be introduced by differing writing instruments. Respondents were also asked to indicate current use of DMDs.
Each of the questionnaires listed in table 1 has been previously utilised in studies of MS-related tremor. 4 7-10 Only the Clinical Ataxia Rating Scale (CARS) was used out of its original context: the CARS is intended for use by clinicians to objectively score overall tremor severity through a ranked scoring system to describe dysmetria, dysdiadochokinesia and gait impairment. For the purposes of this study, the language for the CARS was altered to be made understandable to non-clinicians, so that respondents could report ataxic symptoms in a descriptive manner approximating a clinical evaluation.
Patients who received the survey were invited to participate as part of a separate study evaluating whether DMDs change tremor severity over time. 11 At a minimum, all patients must have indicated a mild or worse tremor between fall 2010 and 2011, and the current use of an approved MS DMD. Respondents taking natalizumab were deliberately oversampled, while patients taking the other approved MS DMDs were selected at random.
Surveys were mailed in March 2012 and returned over a 2-month period. Data from the supplemental survey and the 2011 semiannual surveys (demographics, MS-related disability and QOL) were compiled and de-identified. MS disability in NARCOMS is scored using the Patient Determined Disease Steps (PDDS), 12 and general health-related QOL is gauged by the Short Form (SF)-12 physical and mental composite scores. 13 Archimedes' spirals were scored by three raters (two primary, one secondary) with experience in the treatment of movement disorders. Each Archimedes' spiral was assigned a score between 0 and 10, with 0 representing no evidence of tremor and 10 indicating inability to complete the spiral. Scores were assigned according to a scoring system published by Bain and Alusi. 8 When the primary reviewer scores were identical or differed by 1 point, the mean of the two primary reviewer scores was used for that spiral. When scores differed by >1 point, the secondary reviewer's score was incorporated into the mean score. A subset of spirals was duplicated and randomly distributed within the scored spirals to test intra-rater reliability. Each respondent's left-handed and right-handed mean spiral scores were summed to generate a sum spiral score, which was used for analysis. In order to account for spiral differences related to handedness, respondents were asked in the survey to self-identify their dominant hand.
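The spiral-scoring rule lends itself to a short function. The sketch below implements the rater-combination logic as described in the text; the function names are ours and the example scores are invented.

```python
def mean_spiral_score(primary_a: float, primary_b: float, secondary: float) -> float:
    """Combine rater scores for one Archimedes' spiral (0-10 scale).

    If the two primary scores agree within 1 point, average them;
    otherwise fold the secondary rater's score into the mean.
    """
    if abs(primary_a - primary_b) <= 1:
        return (primary_a + primary_b) / 2
    return (primary_a + primary_b + secondary) / 3

def sum_spiral_score(left: float, right: float) -> float:
    """Per-respondent score: left-hand plus right-hand mean spiral scores."""
    return left + right

# Example: the primaries disagree by 3 points, so the secondary score is used.
s_right = mean_spiral_score(4, 7, 5)   # -> 16/3 ~ 5.33
s_left = mean_spiral_score(3, 3, 2)    # -> 3.0
print(sum_spiral_score(s_left, s_right))
```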
Statistical methods
Statistical analysis was conducted using JMP 10.0.0 (SAS Institute Inc., Cary, North Carolina, USA). To determine the prevalence of tremor in the NARCOMS registry, we calculated the proportion of patients indicating a TACS score >0 relative to the total number of completed surveys across three consecutive semiannual survey periods: fall 2010, spring 2011 and fall 2011 (the 'Tremor Prevalence Cohort'). Three periods were included in order to maximise the number of eligible respondents to whom we could send the paper tremor survey, which then defined the 'Surveyed Cohort'.
The cohort was stratified by tremor severity as measured by the TACS, as this scale alone is used to identify tremor on the semiannual NARCOMS updates distributed to active participants. Owing to the small number of respondents indicating totally disabling tremor on the TACS, these respondents were combined with those reporting severe tremor for the purpose of determining severe tremor prevalence.
Descriptive characteristics of the cohort were reported as frequencies, means with SDs, or medians with IQRs. Comparisons between groups were performed using analysis of variance for continuous outcomes (eg, age), and χ² tests were used to compare nominal data (eg, sex). Intra-rater and inter-rater reliability of the spiral ratings were determined by calculating κ scores and Spearman correlations for the three raters.
Spearman's correlations were calculated for the various tremor and disease severity scales, to determine strength of agreement. Missing data were handled through listwise deletion.
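For the agreement statistics, a minimal sketch of how κ and Spearman coefficients could be computed for a pair of raters is shown below. The paper used JMP; scipy and scikit-learn are stand-ins here, and the scores are invented.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

# Hypothetical integer scores assigned by two raters to the same eight spirals.
rater1 = np.array([2, 3, 5, 7, 4, 6, 1, 8])
rater2 = np.array([2, 4, 5, 6, 5, 6, 2, 9])

kappa = cohen_kappa_score(rater1, rater2)   # exact-score agreement
rho, p = spearmanr(rater1, rater2)          # consistency of severity ranking

print(f"kappa={kappa:.2f}  spearman rho={rho:.2f} (p={p:.3f})")
```

As in the study's results, κ can be modest even when the Spearman coefficient is high, since κ penalizes any exact-score mismatch while ρ only requires consistent ranking.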
RESULTS
Tremor prevalence
The mean prevalence of tremor among NARCOMS respondents was approximately 45% for each of the three update periods (the 'Tremor Prevalence Cohort', table 2). Approximately one quarter of respondents experienced tremor with at least some impact on function (mild or worse tremor). In table 3, the NARCOMS Prevalence Cohort is compared to the London, UK 4 and Olmsted County, USA 3 prevalence studies on MS tremor.
Among the approximately 3600 NARCOMS respondents with functionally significant tremor (TACS≥2), 777 were mailed supplemental surveys, and of those returned, 552 were included in the final database (the 'Surveyed Cohort').
Demographics and distribution of tremor
Demographic characteristics of the Surveyed Cohort are reported in table 4. Women comprised a greater proportion of those with mild tremor, while men were more likely to report severe or totally disabling tremor. Tremor severity did not differ between white and non-white respondents; however, NARCOMS is known to under-represent minority ethnic groups. 14 Current age, MS onset age and tremor onset age all differed between TACS severity groups; however, neither duration of tremor nor duration of MS differed significantly between tremor severity groups. Figure 2 summarises the patient-reported descriptions of body regions affected by tremor. Similar to previous reports, 3 4 the upper extremities were most commonly involved, and among those who reported unilateral tremor (n=208), respondents were twice as likely to report tremor in their dominant arm (n=140, 67.3%) as in their non-dominant arm (n=68, 32.7%).
Tremor severity measures
Both subjective and objective instruments have been developed to measure tremor severity. Subjective instruments generally relate tremor severity to QOL and interference with routine activities, while objective instruments rely on direct observation of tremor by a clinician expert or indirect rating of the effects of tremor, such as accelerometers, Archimedes' spirals or handwriting samples. 9 10 At present, there is no widely accepted 'gold standard' for assessing tremor severity in MS. As such, we chose to include a range of instruments including QOL-based measures (TACS, Tremor Related Activities of Daily Living (TRADL), SF-12 Health Survey), objectively scored Archimedes' spirals and the descriptive (but not objective) CARS.
For the scoring of Archimedes' spirals, κ and Spearman statistics were determined for inter-rater and intra-rater reliability. The κ scores reflect the consistency with which raters assigned identical scores to individual spirals, while the Spearman scores reflect how consistently spirals were ranked in order of severity. Intra-rater κ scores ranged from 0.48 to 0.56 (moderate agreement) and Spearman coefficients ranged from 0.79 to 0.89 (good agreement), while inter-rater κ scores ranged from 0.30 to 0.38 (fair agreement) and Spearman coefficients ranged from 0.81 to 0.85 (good agreement). While the κ scores indicate broad inconsistency in assigning the same score to the same spiral, the Spearman scores suggest good consistency in ranking spirals by order of severity. The correlation between spiral sum scores and the CARS was weak (ρ=0.288, p<0.001). A summary of the Spearman correlations between the objective/descriptive instruments (spiral scores and CARS) and the subjective/QOL instruments is depicted in table 5. For all of the subjective QOL measures the correlation with the CARS was stronger than with spirography scores. The strongest correlation was between the CARS and the TRADL. The strength of this correlation between ADL impairments and specific abnormalities in body movement suggests patient-provided descriptions of tremor (on the CARS) correspond reasonably well to tremor impacts on activities of daily living (the TRADL). Correlations were weakest with the physical SF-12, which is neither MS-specific nor tremor-specific.
In an effort to improve the correlation strength between spirography and the TRADL, we limited the correlation to those respondents indicating involvement of the dominant upper extremity. Even using this limited data set, the strength of the correlation only improved to 0.381.

Figure 3 depicts the association between tremor severity and unemployment, disability, and use of symptomatic medications (to suppress tremor severity). The intention was to 'anchor' the inherently subjective measures of severity to outcomes less prone to responder opinion. For comparison purposes, we included the TACS (since it is regularly included in NARCOMS updates) and the TRADL (owing to its detailed inventory of tremor-related QOL scenarios). As the level of tremor severity increased, an increasing proportion of those in each TACS level reported being unemployed and disabled. With respect to the use of symptomatic medications, the TRADL quartiles appeared to correspond better with use of tremor-suppressing medications, although the most severely affected patients on both scales appeared less likely to use symptomatic medications.
DISCUSSION
This study describes the largest cohort of patients with MS-associated tremor to date. In our study, the prevalence of both overall tremor and severe tremor fall between that described in the London clinic-based prevalence study 4 and the Olmsted County populationbased study. 3 By utilising patient-reported data, our study reflects the characteristics of a large, heterogeneous patient population across a wide geographic area. While registry-based studies by nature are limited by the inability to objectively confirm the data being provided, the size of our cohort should alleviate some concerns about whether these results are indicative of the general MS population. Indeed, an advantage to our approach is that we are less prone to the bias towards greater disease severity often encountered in studies based in specialised academic centres, and less prone to the population homogeneity of a single-region population-based study.
Another aim of this study was to introduce an objectively scored measurement of tremor (the Archimedes' spirals) into an otherwise subjective battery of outcomes. However, the correlation of spirography scores with subjective measures of tremor severity was at best only fair, suggesting the task of drawing spirals does not correspond well with daily function. Likewise, drawing spirals would not be expected to capture resting or postural tremors, or tremors affecting parts of the body other than the distal upper extremities. We were also unable to control for patients bracing their drawing hands to improve spiral quality, although the surveys did include instructions not to do so.
In contrast to the Archimedes' spirals, the modified CARS, which asked respondents to put themselves in the place of an observant clinician and rate the severity of their tremor based on manoeuvres commonly performed in a neurological examination, yielded the strongest correlations with the tremor-specific (TRADL, TACS, visual analog scale) and the MS-specific (PDDS) disability measures. These results suggest patient-report measures of tremor need not be limited to daily function and QOL-anchored scales, although formal validation would further strengthen this conclusion.
By including multiple measures of tremor severity (eg, TRADL, TACS) we were also able to compare measures of tremor severity to end points such as employment and disability. Despite the limited scoring range of the TACS, it performed similar to the TRADL in stratifying respondents by tremor severity.
Additional interesting observations also bear mentioning. In all three prevalence studies on MS tremor, upper extremity tremor was by far the most common (>90% in the London and Olmsted County studies, 77.1% in this study). 3 4 In contrast, NARCOMS respondents were much more likely to report tremor in the lower extremities (68.5%) than in either of the two earlier studies (17-24%). The rates of head and truncal tremor were similar across all three studies. Since respondents were more likely to report tremor in the dominant than in the non-dominant upper extremity, suggesting a reporting bias, they may likewise misidentify symptoms such as ataxia, clonus or spasticity in the lower extremities as tremor. This study also found that patients with MS with tremor report a high rate of tremor in their family history (15.5%), as also observed by Bain and Alusi, who reported a rate of 7%. 4 In comparison, the most common form of tremor among adults, essential tremor, 15 affects approximately 0.9% of the adult population, and between 4.6% and 6.3% of the population older than 65 years. 16 Whether this represents a bias towards increased recognition of tremor among affected families or a biological predisposition towards tremor is unclear.
While our results in large part confirm or expand on prior observations regarding tremor in MS, our study does have limitations. As a registry-based study, our reports of tremor severity and other aspects of patients' illness cannot be independently verified. However, past validation studies on the NARCOMS registry have largely supported the validity and generalisability of its findings. [17][18][19] Another potential criticism of this study is that participant selection for the Surveyed Cohort was not conducted completely at random. The selection criteria for the cohort were developed to answer a question related to the effects of DMDs on tremor over time, thus all patients who were not using DMDs were excluded, and only those with mild or worse tremor (indicating at least some interference with function) were invited to complete the detailed survey. By limiting the cohort to those patients with functionally relevant tremor, the correlations in this study are weakened by the absence of those patients with absent or very mild tremor. This excluded part of the tremor population also makes the correlations presented in this paper unsuitable for comparison to similar prior studies using the same instruments in a population containing the full range of tremor severity. In addition, since our population oversampled patients taking natalizumab as their DMD, the sample may be biased towards patients with more severe disease, since natalizumab is commonly reserved for those patients with very active or severe MS.
In summary, this study confirms in a large patient registry that tremor is common among patients with MS, and that tremor can markedly detract from patient QOL and daily function. Better understanding the prevalence and severity of tremor in MS and how it impacts daily function could inform trial design for potential therapeutic interventions.
In situ characterization of phase transformations in a magnetic field in Fe-Ni alloys
We have investigated the effect of magnetic field on the austenite(γ)-ferrite(α) equilibrium in Fe-xNi alloys with x = 0, 2, 4 wt%. The α→γ and γ→α transformations have been followed as a function of applied magnetic field by a laser dilatometer installed in a 16T superconducting magnet. In addition, magnetic measurements at high temperature have been used to follow the magnetic behavior of each alloy composition during a complete heat treatment. We observe a shift of the phase diagram to higher temperature as the magnetic field is increased. We also find that the α-phase is either in the paramagnetic or in the ferromagnetic state as the transformation proceeds, depending on the amount of Ni. This results in an increase of the transformation temperature which is respectively proportional to the magnetic field if ferrite is formed in the ferromagnetic state and proportional to the square of the magnetic field if ferrite is paramagnetic.
Introduction
Modification of thermal treatments by an applied magnetic field in the processing of iron-based alloys has grown substantially in the last decade [1-15]. The goal of such processing is to achieve major improvements in material properties that cannot be obtained through conventional thermomechanical treatments. Recent advances in magnet materials and design technologies can now provide magnetic fields of sufficient magnitude for new effects to be highlighted. Among these effects, modifications of phase stability [1-4], phase diagram boundaries [5-7], and transformation kinetics [8-10] have been observed when the involved phases exhibit different magnetization behaviors. Several papers report on theoretical approaches to study the impact of external magnetic fields on γ/α phase equilibria in Fe-C, Fe-X and Fe-C-X alloys [10-15]. These studies use the Weiss Molecular Field (WMF) assumptions [16] together with the Curie-Weiss law to evaluate the change in the Gibbs free energy of the individual phases and to calculate a field-modified phase diagram. All these studies result in a similar estimation of the magnetic field effect on the γ/α equilibrium temperature. According to them, the equilibrium temperature is raised by 1 to 3°C per tesla, depending on the alloy composition and magnetic field intensity.
Among various Fe-based alloys, Fe-Ni alloys with low Ni contents are of special interest because the Curie temperature of the α phase (T_C) can be tuned by the alloy composition. Concomitantly, the γ→α and α→γ transformation temperatures are also changed. As an example, for Ni contents above 3%, the Curie point of the α phase becomes higher than the structural transformation temperature [17]. Therefore we can expect that the magnetic field dependence of the transformation temperature (T_T) will differ depending on the value of T_C − T_T. In the present work, we examine the α→γ and γ→α transformations in Fe-Ni alloys (0, 2 and 4 wt% Ni) as a function of applied magnetic field intensity.
Experimental procedure
Ingots of Fe-Ni alloys with Ni contents of 0, 2 and 4 wt% were prepared by induction melting in a cold crucible under argon atmosphere and cast into a 10 mm diameter copper mold. The alloys were treated for 48 h at 700°C under vacuum for homogenization. Finally, the ingots were cold worked down to a diameter of 4 mm and cut into cylindrical specimens of 10 mm in length. Transformation temperatures in magnetic fields were measured by dilatation measurements. This home-made in-situ measurement setup, together with the furnace, is installed in the room-temperature bore of a 16T superconducting magnet [19]. In this system, a homogeneous temperature and field are applied to the 1 cm long sample placed in the magnet center region. The dilatation of the specimens is measured by a Michelson interferometer across an airtight Pyrex window. The beam is reflected by a spherical mirror fixed on top of an alumina sample holder.
Specimens were heated up to the maximum temperature at a rate of 3°C/min, and this temperature was kept constant for 30 min. The same rate was applied upon cooling. The maximum temperature was set at 30°C above the upper boundary of the two-phase α+γ region [17]. This corresponds to 940°C for pure Fe, 900°C for Fe-2Ni wt% and 850°C for the Fe-4Ni wt% alloy.
The magnetic behavior of the involved phases was determined by Faraday's method using a magnetic balance. The measuring device, together with the furnace, is inserted in the vertical room-temperature bore of a 7 T superconducting magnet [19]. The sample temperature is measured with Pt/Pt-10%Rh thermocouples in direct contact with the sample. The maximum magnetic field obtained at the specimen location is about 4 T, with a magnetic field gradient of about 1.5 T/cm.
The sample magnetization was measured up to 1000°C with heating and cooling rates of 2°C/min. The magnetic field at the specimen location was set to 2 T, with a magnetic field gradient of about 0.7 T/cm.
Results
The dilatation signal of the Fe-2Ni wt% sample in a magnetic field of 16T is shown in Figure 1. The α→γ phase transformation appears as a sharp contraction of the sample, and the γ→α phase transformation as an expansion upon cooling. The transformation temperatures T_α→γ and T_γ→α are calculated as the mean value of the respective transformation start and end temperatures, T_S and T_E. The relative changes of the transformation temperatures due to the magnetic field B, ∆T_α→γ and ∆T_γ→α, are defined as:

∆T_α→γ(B) = T_α→γ(B) − T_α→γ(0) and ∆T_γ→α(B) = T_γ→α(B) − T_γ→α(0).

As a reference, the transformation temperatures measured without any magnetic field, T_α→γ(0) and T_γ→α(0), are given in Table 1.
The field dependences of ∆T α→γ and ∆T γ→α for the three alloys are plotted with solid marks in Figure 2(a) and 2(b) respectively.
Obviously, T α→γ and T γ→α increase with increasing the magnetic field intensity for the three alloys.
The increase of T_α→γ amounts to 17°C in pure Fe, 24°C in the Fe-2Ni wt% alloy and 29°C in the Fe-4Ni wt% alloy in a magnetic field of 16T. This increase appears to be proportional to B² for all three alloy compositions. A similar increase is observed for the T_γ→α temperature; it also appears proportional to B² for pure iron and the 2 wt% Ni alloy, whereas it tends towards a linear behavior for the 4 wt% Ni alloy.

Figure 3 shows the magnetization curves as a function of temperature obtained for the three alloys upon heating (Figure 3(a)) and cooling (Figure 3(b)). Upon heating, a sharp decrease in the magnetic signal is associated with the respective Curie points of pure iron (770°C), Fe-2Ni wt% (764°C) and Fe-4Ni wt% (759°C) given by [13]. At higher temperature, the α→γ transformation is observed as a small decrease in the magnetic signal. This effect can hardly be seen in the Fe-4Ni wt% alloy because the magnetic transition occurs in the same temperature range as the α→γ transformation. Upon cooling, the γ→α transformation and the magnetic transition are well dissociated in pure Fe. In the Fe-2Ni wt% and Fe-4Ni wt% alloy compositions, ferrite appears in its ferromagnetic state, leading to a sharp increase in the magnetic signal.
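The stated proportionality to B² can be checked by fitting both candidate laws to the measured shifts. The sketch below is illustrative; the (B, ΔT) values are invented stand-ins for the digitized data of Figure 2.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical digitized (B, dT) points for one alloy composition.
B = np.array([0.0, 4.0, 8.0, 12.0, 16.0])    # applied field (T)
dT = np.array([0.0, 1.8, 7.0, 16.0, 29.0])   # shift of T_alpha->gamma (deg C)

lin, _ = curve_fit(lambda b, a: a * b, B, dT)       # ferromagnetic-like: dT ~ B
quad, _ = curve_fit(lambda b, a: a * b**2, B, dT)   # paramagnetic-like: dT ~ B^2

for name, f in (("linear", lambda b: lin[0] * b),
                ("quadratic", lambda b: quad[0] * b**2)):
    rss = np.sum((dT - f(B)) ** 2)  # residual sum of squares of each law
    print(f"{name}: RSS = {rss:.1f}")
```

Whichever law yields the smaller residual indicates the dominant regime for that composition.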
Discussion
The effect of a high magnetic field on the respective stability of the ferrite (α) and austenite (γ) phases can be qualitatively explained in terms of the magnetic contribution to the total Gibbs energy of each phase. When a magnetic induction B is applied, the respective Gibbs free energies of ferrite and austenite are lowered due to the contribution of a magnetic energy term, ∆E_m, to the total energy of each phase:

∆E_m = −∫₀^B M dB′,

where M is the field-dependent magnetization projected along B. These respective contributions are proportional to the field dependence of the magnetization of each phase. As ferrite is more magnetic than austenite, this phase is stabilized, so that a shift in the equilibrium temperature towards higher temperatures is observed. Moreover, the magnetic contribution of the austenite phase to the Gibbs free energy can be neglected due to the very low magnetization of this paramagnetic phase.
The shift of the γ→α transformation temperature was evidenced for all the alloy compositions studied in this work. However, depending on the Ni content, the variation of the equilibrium temperature shows two different behaviors with the magnetic field intensity: the transformation temperature increases either linearly or quadratically with the applied magnetic field. These different variations are linked to the Curie temperature of the considered alloy. When the γ→α transition occurs below the Curie temperature of ferrite, the magnetization of the ferrite phase, M, is rapidly saturated, so that the magnetic energy is roughly linear in the applied magnetic field. On the contrary, when the γ→α transition occurs above the Curie temperature, the ferrite is formed in the paramagnetic state, and hence M is almost linearly dependent on B. Since the magnetic energy is proportional both to the applied field and to the magnetization, the temperature shift is then roughly proportional to the square of the field value. In this particular case, a more accurate analysis should use the Weiss Molecular Field model, as the transformation occurs close to the Curie temperature of the ferrite phase. These two different behaviors have recently been reported in steels and cast irons [19] as well as in Fe-Co alloys [21].
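To make the two regimes explicit, a short derivation is sketched below under standard assumptions: the magnetic energy term is taken as ∆E_m = −∫₀^B M dB′ (as above), and the temperature shift is taken to scale with the magnetic energy gained by ferrite, ∆T ≈ −∆E_m/∆S with ∆S the transformation entropy; this last proportionality is an assumption of the sketch, not a statement from the text.

```latex
\[
\Delta E_m = -\int_0^{B} M(B')\,\mathrm{d}B'
\;\Longrightarrow\;
\begin{cases}
M \approx M_s \ \text{(ferromagnetic, saturated)}: & \Delta E_m \approx -M_s B \;\propto\; B,\\[4pt]
M \approx \chi B/\mu_0 \ \text{(paramagnetic)}: & \Delta E_m \approx -\dfrac{\chi B^{2}}{2\mu_0} \;\propto\; B^{2},
\end{cases}
\]
```

so ∆T grows linearly in B when ferrite forms below its Curie point and quadratically when it forms in the paramagnetic state, matching the two behaviors observed in Figure 2.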
Conclusion
In this work, a new experimental method based on in-situ measurements has been used to monitor the α→γ and γ→α transformations in Fe-Ni alloys as a function of applied magnetic field intensity. The transformation temperature is shifted towards higher temperatures in a magnetic field. This shift follows either a linear or quadratic behavior with the magnetic field intensity. Experimental determination of the magnetization of the ferrite phase at high temperature in various field values is in progress and will allow direct comparisons between WMF predictions and experiments.
Piptadenia moniliformis Benth. seeds subjected to accelerated aging
The accelerated aging test is efficient for evaluating the seed vigor of many species, but few studies address forest tree species. This study aimed to determine the adequate exposure time of Piptadenia moniliformis Benth. seeds to the accelerated aging test to evaluate the vigor of seeds classified by color and size. Ten sub-lots were obtained by separating the seeds into two tegument colors (light and dark) and five size classes (11, 12, 13, 14, and 15), using round wire sieves. Before the accelerated aging (AA) test, the seeds were treated with concentrated sulfuric acid for 20 minutes to overcome dormancy. Next, the sub-lot seeds underwent different aging times (0, 24, 48, 72, 96, and 120 hours) at 45°C. The AA test, conducted at 45°C for 72 hours, is adequate to evaluate the quality of P. moniliformis seeds in terms of color and size.
Introduction
The accelerated aging (AA) test is one of the most common vigor tests to assess the physiological quality of seeds. It consists of exposing seeds to high temperature and relative humidity, the main factors involved in the process, which generates a high respiration rate, consumes much of the seeds' reserves, and accelerates the metabolic processes that lead to their deterioration (Marcos Filho, 2015). Thus, the AA test accelerates the natural aging of seeds (Delouche & Baskin, 1973).
Seed vigor is defined by AOSA (1983) as one of the seed properties that determine its potential for rapid and uniform emergence under different environmental conditions, with the capacity to develop healthy seedlings.
The accelerated aging test is efficient in assessing quality differences between seed lots with similar germination, ranking them by quality level. It also creates stress conditions for seeds and is efficient in evaluating their resistance to these adverse conditions, which result in lower and slower germination and a rise in abnormal seedlings (Marcos Filho, 2015).
In recent decades, researchers have focused on determining the quality of a seed lot, given that one of the objectives of vigor tests is to provide information about the ability of one or more seed lots to excel in the field when exposed to adverse conditions. Thus, the AA test can identify the best-performing lots in the field.
Given that high-quality seeds are a prerequisite for obtaining vigorous and uniform seedlings, impacting the yield and quality of the harvested product, it is essential to have accurate information to evaluate them.
In the accelerated aging test, less vigorous seeds show a pronounced decline in viability, unlike their more vigorous counterparts, which maintain higher viability and the capacity to produce healthy seedlings (Marcos Filho, 2015). Due to the ease of application and interpretation of its results, several researchers involved with forest seeds have adopted this test, obtaining good results, as observed in studies carried out by several authors on different species (Padilha et al., 2018; Carvalho et al., 2016; Aquino et al., 2018; Araujo et al., 2017; Lima et al., 2014; Guareschi et al., 2015).

Piptadenia moniliformis is a species belonging to the family Fabaceae and native to the Caatinga biome. It is highly valued for beekeeping and is indicated for soil recovery, combating soil erosion, and the first phase of forest restoration. It also helps in the growth of other species, including F1-grade (high quality) trees. This species has a twisted trunk with thin, rough bark, bipinnately compound leaves, and flowers arranged on cylindrical spikes. Its 13 cm-long brown fruit is a flat pod that opens on only one side, exposing its oval white seeds (Maia, 2004).
The AA test is standardized to classify seed vigor for several agricultural species but is practically unknown for forest species such as P. moniliformis.
There are few tree seed studies involving vigor tests such as accelerated aging. Only one study on P. moniliformis was found, and it did not consider seed size and color. As such, the present study was carried out to evaluate the effect of exposing P. moniliformis seeds, classified by color and size, to different accelerated aging times at 45°C on seed vigor.
Material and Methods
This study was carried out in the Seed and Forest Improvement Laboratory of UNESP/Jaboticabal-SP, Brazil. The Piptadenia moniliformis Benth. seeds were collected from at least 15 mother trees located in the municipality of Campo Grande do Piauí, Piauí (PI) state and sent to UNESP/Jaboticabal, where they were stored in plastic packages in a cold chamber (8 ± 2°C, 60 ± 5% relative humidity), until the experiment was conducted.
Before the accelerated aging test, the seeds were scarified with concentrated sulfuric acid for 20 minutes (Azeredo et al., 2010), rinsed in running water for five minutes, and placed on paper sheets for three days.
Accelerated aging test (AA): The AA test was conducted at 45°C with four replicates of 25 seeds. The seeds were arranged in a single layer on an aluminum screen inside transparent plastic germination boxes with lids (11 × 11 × 3 cm), with 40 mL of distilled water beneath the screen, and kept in a biochemical oxygen demand (B.O.D.) chamber for 24, 48, 72, 96 and 120 hours, along with seeds that were not submitted to the AA (0 h).
Moisture content was determined before and after the AA by drying seeds at 105 ± 3°C for 24 h (Brasil, 2009), in two replicates of 20 seeds for each treatment.
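As an illustration of the oven-method calculation, a minimal sketch follows; the wet-basis (fresh-weight) convention is an assumption here, since the text does not state which basis Brasil (2009) prescribes.

```python
def moisture_content_wet_basis(mass_wet_g: float, mass_dry_g: float) -> float:
    """Moisture content (%) on a wet basis after oven drying
    (105 +/- 3 deg C for 24 h); wet-basis convention assumed."""
    return 100.0 * (mass_wet_g - mass_dry_g) / mass_wet_g

# Example: 1.000 g of seeds losing 0.125 g of water -> 12.5% moisture.
print(moisture_content_wet_basis(1.000, 0.875))
```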
Germination test: after each aging period, the seeds were lightly rinsed in running water for two minutes, due to the incidence of fungi, and then submitted to germination in a B.O.D. chamber at 25°C with a photoperiod of 8 h, inside transparent plastic boxes. The seeds were placed on two sheets of filter paper and covered with another sheet. The substrate was moistened with distilled water equivalent to 2.5 times the mass of the non-hydrated paper. Primary root emission with positive geotropic curvature was adopted as the germination criterion. Counts were made daily over 21 days.
Normal seedlings: at 21 days after planting, the normal seedlings of each replicate and each treatment were counted, according to Brazilian criteria (Brasil, 2009).
The germination speed index was determined based on daily counts of the number of germinated seeds, from the first to the 21st day after planting, using the methodology proposed by Maguire (1962).
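Maguire's (1962) index sums, over the counting days, the number of newly germinated seeds divided by the day number. A minimal sketch, with invented counts:

```python
def germination_speed_index(daily_new_germinations: list[int]) -> float:
    """Maguire (1962) index: sum over days of (seeds newly germinated
    on day i) / (day number i), with day numbering starting at 1."""
    return sum(g / day for day, g in enumerate(daily_new_germinations, start=1))

# Example: 21 daily counts for one replicate of 25 seeds.
counts = [0, 0, 5, 8, 4, 2, 1, 0, 0, 1] + [0] * 11
print(round(germination_speed_index(counts), 2))  # -> 5.04
```

Earlier germination contributes more to the index, so faster (more vigorous) lots score higher even at equal final germination.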
Moisture content data were not submitted to analysis of variance. The behavior of each sub-lot (treatment) during the aging period was analyzed by polynomial regression up to the 3rd degree, selecting the highest-degree equation with statistical significance at 5% by the F-test.
Because of the negative estimates that occurred in some sub-lots for the traits evaluated, the data of these treatments were analyzed in Origin 6.0 graphing software using a nonlinear logistic model, according to the methodology proposed by Pôrto et al. (2006), who adopted a Logistic 1 model:

y = a / (1 + e^(k(x − xc)))

where: y = characteristic value for a given value of x (aging time); a = maximum value of trait y; e = Napierian logarithm base; k = relative growth rate (of y reduction in the present case); and xc = x value (aging time) which reduces the maximum value of the trait by 50% and corresponds to the aging time at the inflection point of the curve. Statistical analyses were processed in ESTAT/Jaboticabal software and Microcal Origin 6.0.
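A minimal sketch of how such a logistic decay could be fitted is given below; scipy's curve_fit stands in for Origin 6.0, the exact Origin parameterization is assumed from the parameter definitions above, and the germination values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic1(x, a, k, xc):
    # y falls from ~a toward 0 as aging time x increases; xc is the time
    # at which y has dropped to a/2 (inflection point of the curve).
    return a / (1.0 + np.exp(k * (x - xc)))

# Hypothetical germination (%) of one sub-lot over the aging times tested.
t = np.array([0, 24, 48, 72, 96, 120], dtype=float)
g = np.array([92, 94, 70, 40, 15, 5], dtype=float)

(a, k, xc), _ = curve_fit(logistic1, t, g, p0=[90, 0.05, 60])
print(f"a={a:.1f}%  k={k:.3f}/h  xc={xc:.1f} h")
```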
Results and Discussion
The moisture content of the treatments, resulting from seed classification by color and size, before and after AA, is shown in Figures 1 and 2. The initial moisture content of light-colored seeds (Figure 1) ranged between 12 and 13% and increased after AA up to 96 h for seeds retained in sieves 11, 12, and 13 and up to 120 h for sieves 14 and 15. The initial content of dark-colored seeds (Figure 2) varied between 11 and 13%, increasing after accelerated aging up to 96 h.

Figure 1. Moisture content (%) of light-colored Piptadenia moniliformis Benth. seeds (L) classified in different-sized sieves (S11, S12, S13, S14, and S15), before and after accelerated aging at 45°C. Source: Azerêdo & Paula (2020).

Figure 2. Moisture content (%) of dark-colored Piptadenia moniliformis Benth. seeds (D) classified in different-sized sieves (S11, S12, S13, S14, and S15), before and after accelerated aging at 45°C. Source: Azerêdo & Paula (2020).
After AA for 24 h, the moisture content for all treatments was higher than 35%. Moisture content increased up to 96 h of aging, reaching values above 50%, and in some sub-lots, increased up to 120 h.
The increase in seed moisture content was greatest during the first 24 h of aging for all treatments. Although moisture content also rose after the other aging times, those increases were not significant.
In terms of moisture content, several species' seeds can behave differently depending on the aging time and temperature. The moisture contents of the different treatments were relatively high (greater than 50% after 96 h of AA), when compared to other studies. Thus, seed moisture content stabilization in the AA test depends on the species under study.
Tabernaemontana fuchsiaefolia A. DC. seeds (Moraes et al., 2016), for example, showed a rise in moisture content after the aging periods, using the traditional method. Similarly, Carvalho et al. (2016) showed a considerable increase in mahogany seeds' moisture content after the AA test. These increases were observed mainly in the first 24 h, when moisture content nearly doubled at the two highest temperatures tested (43 and 45°C). In Mabea fistulifera Mart. seeds, AA showed a marked increase in seed moisture after 24 h of aging using the traditional method. The initial seed moisture content of 12.6% rose to 21.5%, remaining nearly constant after this period, reaching 23.96% after 96 h of accelerated aging, at all temperatures tested (41, 43 and 45°C) (Gomes Júnior & Lopes, 2017).
A slight increase in germination was observed for some treatments after 24 h of AA at 45°C. This rise was absent only in light-colored seeds from sieves 11, 14, and 15 (Figure 3A) and dark seeds from sieve 11 (Figure 3B). At the end of 120 h of AA, the light-colored seeds from sieve 14 did not germinate, and the highest germination values were observed in sieves 11 and 13 (Figure 3A).
The dark seeds showed a decline in germination (Figure 3B) from 48 h of AA onwards, albeit not as pronounced as that observed in light-colored seeds (Figure 3A).
The slight increase in Piptadenia seed germination after 24 h of AA at 45°C can be attributed to the increase in moisture content after aging, due to the conditions promoted by AA (high RH and temperature), which activated seed metabolism. These seeds would likely have already started germinating upon water addition, because of the three-phase pattern of water absorption (Bewley & Black, 1994).
Germination declined for all treatments, regardless of seed size or color (Figures 3A, 3B). The dark-colored seeds (Figure 3B) exhibited a decline from 48 h of AA onwards, but not as marked as that observed in their light-colored counterparts (Figure 3A). The differences in vigor among dark-colored seeds are also less evident than among light-colored seeds: the regression curves lie closer together for the dark seeds (Figure 3B) than for the light ones (Figure 3A), suggesting that size classification may not be essential for dark seeds.
Mulungu (Erythrina velutina Will.) seeds aged for 24, 48, 72, and 96 h at 41 and 45°C exhibited reduced viability and vigor (Guedes et al., 2009). These results agree with those obtained in the present study with P. moniliformis, in which 24 h of aging did not significantly change germination compared to non-aged controls, whereas a progressive decrease began at 48 h of aging; even the 120 h period did not completely inhibit germination.
According to Marcos Filho (2015), enzymes such as catalase, dehydrogenase, and glutamic acid decarboxylase decrease during the deterioration process, thereby causing a decline in the energy supply to the germinating seed, given that changes in the enzyme structure contribute to reducing their efficiency.
The stress conditions imposed by the AA test increase seeds' metabolic activity and consume their reserves faster (Bewley and Black, 1994). Also, the disorganized membrane system makes them more susceptible to the harmful effects of O2, which cause compound oxidation and enzyme activation (Ferreira et al., 2004), resulting in a gradual drop in germination and vigor. These events may be related to the reduced germination capacity of Piptadenia seeds after 48 h of AA.

The lower percentage of normal seedlings (NS) was more pronounced for light-colored seeds (Figure 4A) than for their dark-colored counterparts (Figure 4B). There was no NS formation in light-colored seeds from sieves 13, 14, and 15 after 48, 24, and 72 h of aging, respectively. The dark seeds from sieves 14 and 15 (Figure 4B) showed higher NS values up to 48 h of aging, with a 50% decline after 72 h and no NS formation after 96 and 120 h. For dark seeds from sieves 11, 12, and 13, NS production decreased less markedly as AA time increased.

Figure 4. Normal seedlings (NS) obtained from light (A) and dark-colored (B) Piptadenia moniliformis Benth. seeds of different sizes (sieves 11, 12, 13, 14 and 15: S11, S12, S13, S14, and S15, respectively), after accelerated aging at 45°C for different time periods. Source: Azerêdo & Paula (2020).
Regarding the number of NS, dark seeds were more vigorous than light-colored seeds, especially sizes 13, 14, and 15. Vigorous seeds exhibit greater germination capacity (Figure 4A) and produce NS after being submitted to AA stress, while those with less vigor display significantly lower germination (Marcos Filho, 2015). Carvalho et al. (2016) found that longer exposure to higher temperatures may result in more marked deterioration of the integument and cell membranes of mahogany seeds submitted to accelerated aging, compared to seeds that were not aged or that were aged at milder temperatures (39 and 41°C).
GSI-assessed vigor decreased for all treatments, regardless of seed color and size. However, for light-colored seeds, this decrease was much more pronounced after 24 h of aging. Accelerated aging hastens seed deterioration, whose symptoms can be observed during germination and the initial phases of seedling development. Among the most evident manifestations are reduced germination speed and capacity under laboratory conditions and slower seedling emergence and growth (Marcos Filho, 2015). According to Lima et al. (2015), the accelerated aging test by the traditional method was useful in classifying Crambe seed batches by physiological quality. It was observed that the longer the seed exposure to AA, the faster the deterioration rate. The light-colored seeds were more sensitive to the stress caused by the AA, especially sizes 14 and 15.

Before aging, the initial quality of the sub-lot seeds (light and dark colors) was high and similar, making it impossible to differentiate between them with the germination test, because this test is conducted under favorable conditions, allowing the seed lots to reveal their full germination potential. When the sub-lots were submitted to the AA test, at high humidity and temperature, slight and marked differences were detected between them, indicating their likely performance in the field, where seeds are exposed to varying environmental conditions. The low vigor exhibited by light-colored seeds belonging to sieves 14 and 15 is likely related to the ripening process, which may not have been completed satisfactorily, thereby reducing vigor. After 48 h of aging, there were differences between the sub-lots and a significant decrease in the evaluated traits, indicating that the seeds could no longer repair the damage caused by exposure to high humidity and temperature, culminating in the total loss of germination capacity, which is the final effect of deterioration (Binotti et al., 2008).
Conclusion
The accelerated aging test carried out at 45 °C for 72 h is efficient in evaluating the seed quality of Piptadenia moniliformis Benth., regardless of seed color and size.
Age is the work of art? Impact of neutrophil and organism age on neutrophil extracellular trap formation
Neutrophil extracellular traps or NETs are released by highly activated neutrophils in response to infectious agents, sterile inflammation, autoimmune stimuli and cancer. In these cells, the nuclear envelope disintegrates and chromatin decondensation occurs, which depends on peptidylarginine deiminase 4 (PAD4) and neutrophil elastase (NE). Subsequently, proteins from neutrophil granules (e.g., NE, lactoferrin and myeloperoxidase) and the nucleus (histones) bind to the decondensed DNA and the whole structure is ejected from the cell. The DNA decorated with potent antimicrobials and proteases can act to contain dissemination of infection and, in sterile inflammation, NETs were shown to degrade cytokines and chemokines via serine proteases. On the other hand, overproduction of NETs, or their inadequate removal and prolonged presence in the vasculature or tissues, can lead to bystander damage or even the initiation of disease. Considering the pros and cons of NET formation, it is of relevance whether the stage of neutrophil maturation (immature, mature and senescent cells) affects the capacity to produce NETs, as cells of different age-related phenotypes dominate in given (pathological) conditions. Moreover, the immune system of neonates and elderly individuals is weaker than in adulthood. Is the same pattern followed when it comes to NETs? The overall importance of individual and neutrophil age for the capacity to release NETs is reviewed in detail and the significance of these facts is discussed.
Introduction
Neutrophils, polymorphonuclear cells (PMNs), are the first leukocytes to reach the site of inflammation, where they perform their effector functions, phagocytosing microbes and killing them intracellularly. Alternatively, neutrophils fight pathogens extracellularly in either of two ways: by discharging potent antimicrobials and proteases from their granules or by releasing neutrophil extracellular traps (NETs; Kolaczkowska and Kubes 2013).
The first report on NETs revealed that neutrophils stimulated by agents such as lipopolysaccharide (LPS), interleukin 8 (IL-8) or phorbol 12-myristate 13-acetate (PMA) form and release net-like structures, hence their name (Brinkmann et al. 2004). Detailed studies of NETs by scanning electron and confocal microscopy, as well as proteomic analyses, showed that NETs are composed of thin chromatin fibers decorated with some 30 neutrophil proteins, including neutrophil elastase (NE), bactericidal/permeability-increasing protein (BPI), defensins, cathelicidin (LL-37), proteinase 3 and cathepsin G of granular origin and nuclear histones (Brinkmann et al. 2004;Urban et al. 2009) (Fig. 1). NETs can take different forms, from a band form, through a cloud-like structure when the NET is fully hydrated, to a network-like shape exceeding 10-15 times the volume of the releasing cell (Brinkmann et al. 2004;Brinkmann and Zychlinsky 2012). More recent studies, applying atomic force microscopy to reveal their nanoscale properties, reported that NETs are branching filament networks with a substantially organized porous structure and with openings in the size range of small pathogens (Pires et al. 2016). Importantly, proteases attached to NETs secure the assembly of the whole structure and its mechanical properties. While such a structure increases the efficiency of catching pathogens, it can also favor collateral damage (Pires et al. 2016). The latter observation relates directly to the pros and cons of NET formation.
Yin and yang of NETs
There are multiple reports on NETs being able to capture, immobilize and neutralize pathogens. The microbes caught by NETs include both Gram-positive (e.g., Staphylococcus aureus) and Gram-negative bacteria (e.g., Salmonella typhimurium and Shigella flexneri; Brinkmann et al. 2004), fungi (e.g., Candida albicans; Urban et al. 2006) and viruses (Saitoh et al. 2012;Jenne et al. 2013). More controversial is their capacity to kill trapped pathogens. As NETs are decorated with antimicrobial proteins and proteases, their killing potential seemed inevitable and in fact it was repeatedly reported to occur (Brinkmann et al. 2004;Urban et al. 2006;Guimarães-Costa et al. 2009). However, some studies ruled it out (Gabriel et al. 2010). A recent paper by Menegazzi et al. (2012) challenged the technical approach applied in the majority of the studies, most of which were performed on isolated neutrophils, and revealed that the results depended on the chosen strategy, i.e., incubation with DNase prior to or after the addition of bacteria to NET-forming neutrophils. Overall, the study concluded that NETs entrap but do not kill microbes (Menegazzi et al. 2012). This is in line with some in vivo studies showing that, after intravascular application of DNase, colony-forming units (CFUs) of S. aureus do not increase despite strong deposition of NETs in the vasculature of mice with S. aureus sepsis (Kolaczkowska et al. 2015). But even if NETs indeed do not kill pathogens, their role in capturing and immobilizing microbes should not be underestimated, as NETs prevent microbial dissemination throughout the body. This was, for example, shown in the course of Escherichia coli sepsis (McDonald et al. 2012). Moreover, one can speculate that NETs contribute indirectly to pathogen killing, as immobilized microbes are exposed to microenvironmental immune factors present in serum or tissues as well as to cytotoxic leukocytes (macrophages and NK cells). In addition, by means of the proteases attached to NETs, virulence factors can be shed from the pathogen surface, limiting virulence; e.g., IpaB on S. flexneri is removed by NE decorating the traps (Brinkmann et al. 2004). Another important, anti-inflammatory function of NETs comes from studies on sterile inflammation: during gout, serine proteases attached to NETs were shown to degrade pro-inflammatory cytokines and chemokines, contributing to the resolution of the immune response (Schauer et al. 2014).
The importance of NETs is further strengthened by four facts: (1) their evolutionary conservation, (2) their release by multiple populations of leukocytes, (3) the release of the NET backbone (DNA) from either the nucleus or mitochondria and (4) the strategies pathogens have developed to escape from NETs. It turns out that DNA decorated with antimicrobials and proteases is preserved in evolution: not only do all vertebrates (only data on amphibians are missing) release extracellular traps (ETs; Brinkmann et al. 2004;Alghamdi and Foster 2005;Palić et al. 2007;Pijanowski et al. 2013;Reichel et al. 2015) but so do invertebrate species (Ng et al. 2013;Homa et al. 2016) and even plants (Wen et al. 2009, 2017) and social amoebae. Moreover, although not all cells releasing ETs are leukocytes or leukocyte-like, they all seem to perform a kind of defense function, including root border cells of plants (Hawes et al. 2000) and sentinel cells of the multicellular slug stage of the social amoeba, which function as a primitive innate immune system (Chen et al. 2007). Thus, it is not surprising that, in vertebrates, as depicted in detail in mammals, ET formation is universal among innate immune leukocytes and also characterizes monocytes (Granger et al. 2017), macrophages (Chow et al. 2010;Liu et al. 2014), eosinophils (Yousefi et al. 2008), basophils (Morshed et al. 2014) and mast cells (von Köckritz-Blickwede et al. 2008). Furthermore, the source of the DNA can vary, since neutrophils and eosinophils eject DNA not only of nuclear but also of mitochondrial origin (mNETs; Yousefi et al. 2008, 2009). Studies on neutrophils revealed that the DNA of mNETs indeed contains mitochondrial (e.g., Cyb) and not nuclear (e.g., Gapdh) genes (Yousefi et al. 2008). Interestingly, mNETs are released by vital neutrophils and they prolong the survival of the releasing cells (Yousefi et al. 2009). Finally, different strategies of pathogens to avoid trapping by NETs, or to escape from the released chromatin fibers, have been described. Streptococcus pneumoniae and S. aureus are good examples of bacteria armed against NETs, but fungi (Rocha et al. 2015;Johnson et al. 2016) and parasites (Guimarães-Costa et al. 2014) have also developed such mechanisms. S. pneumoniae possesses the ability to form polysaccharide capsules protecting it from binding to NETs (Wartha et al. 2007) and its endonucleases degrade the network (Beiter et al. 2006). Moreover, S. pneumoniae can change the electrical charge of its membrane to positive by incorporating D-alanine residues into LTAs (lipoteichoic acids). This strategy protects it against positively charged residues on NET antimicrobials and proteases, preventing trapping (Beiter et al. 2006). S. aureus also releases nucleases, not only to disintegrate NETs (Berends et al. 2010) but also to degrade NET-DNA to intermediate products that are converted to 2′-deoxyadenosine. The latter deoxyribonucleoside induces apoptosis of macrophages that could otherwise phagocytose pathogens immobilized in NETs (Thammavongsa et al. 2013).
Fig. 1 legend abbreviations: PAMP pathogen-associated molecular pattern, DAMP damage-associated molecular pattern, PMA phorbol 12-myristate 13-acetate, ROS reactive oxygen species, NO nitric oxide, PAD4 peptidylarginine deiminase 4, MPO myeloperoxidase, BPI bactericidal/permeability-increasing protein, LL-37 cathelicidin.
Having described the advantages of NET release, one must also acknowledge the side effects of their formation, leading to either bystander damage or even the initiation of disease. Numerous studies have reported that uncontrolled and/or excessive release of NETs, as well as their inadequate removal, leads, or at least contributes, to various pathological conditions, including rheumatoid arthritis (RA; Sur Chowdhury et al. 2014;Carmona-Rivera et al. 2017), systemic lupus erythematosus (SLE; Lande et al. 2011;Villanueva et al. 2011), atherosclerosis (Knight et al. 2014;Wang et al. 2017), vasculitis (Kessenbrock et al. 2009;Söderberg and Segelmark 2016), thrombosis (Gould et al. 2014;Martinod and Wagner 2014), sepsis (Kolaczkowska et al. 2015) and cancer (Berger-Achituv et al. 2013;Tohme et al. 2016). SLE and sepsis are representative examples of excessive/inappropriate NET release and of inadequate removal, respectively. SLE manifests with anything from benign skin lesions to life-threatening symptoms, resulting from overproduction of autoantibodies and loss of tolerance to self antigens (Crispín et al. 2010;Dörner et al. 2011). The autoantibodies comprise anti-neutrophil cytoplasmic antibodies (ANCAs) directed against PR3, MPO and NE and anti-nuclear antibodies (ANAs) directed against DNA and histones, all of which are components of NETs (Fauzi et al. 2004;Yu and Su 2013;Gajic-Veljic et al. 2015). Characteristic of SLE NETs is the presence of LL-37 and human neutrophil peptide (HNP). The DNA/LL-37/HNP complexes activate plasmacytoid dendritic cells (pDCs), resulting in increased production of IFN-α (Lande et al. 2011), which plays a central role in the pathogenesis of SLE by promoting immune system activation that contributes to tissue and organ inflammation and damage (Crow 2014). In addition, NETs of SLE patients are inadequately degraded, as they are protected by DNase inhibitors and by complement C1q bound to NETs (Leffler et al. 2012), while LL-37 can protect the DNA from degradation (Lande et al. 2011). Of importance, during SLE, the numbers of circulating immature neutrophils are elevated (Bennett et al. 2003).
Correspondingly, during sepsis, NETs contribute to bystander damage of the endothelium due to the activity of histones (Xu et al. 2009;Saffarzadeh et al. 2012;McDonald et al. 2017) and NE (Kolaczkowska et al. 2015) on NETs that are not removed in a timely manner. Also, sepsis is characterized by rapid recruitment of immature neutrophils into the blood (Mare et al. 2015), and not fully mature neutrophils are also present in tumors, where they display a pro-tumorigenic phenotype (Sagiv et al. 2015). These data suggest that the age of neutrophils might impact not only their phenotype but also their contribution to disease pathology.
On how NETs are created
Thirteen years into NET research, we still know little about the mechanisms of NET formation, although numerous studies have been published on this topic. Without underestimating any of the studies, one must keep in mind that, by our estimation, approximately 90% of studies on NETs are performed on isolated neutrophils or tissues collected post-mortem. This does not reflect the complex in vivo milieu and the behavior of neutrophils and other leukocytes in situ, in blood or tissues. However, the main concern is that most of what we know about the mechanisms of NETs comes from studies in which PMA was used as the sole stimulant. PMA, synthetic phorbol 12-myristate 13-acetate, is a robust activator of two of the three families of protein kinase C (PKC; Liu and Heckman 1998;Neeli and Radic 2013) and, as such, enforces particular signaling pathways. A recent paper re-examining the kinetics and signaling pathways of NETs induced by various agents concluded that "PMA stimulation should be regarded as mechanistically distinct from NET formation induced by natural triggers" (van der Linden et al. 2017).
Very early in NET research, dependence on reactive oxygen species (ROS) generated by the NADPH oxidase pathway was reported to be a prerequisite for their formation (Fuchs et al. 2007). These studies were subsequently strongly supported by the observation that patients with chronic granulomatous disease (CGD), who have impaired NADPH oxidase activity, did not release NETs but that this could be restored by targeted gene therapy (Bianchi et al. 2009). Subsequently, the Raf-MEK-ERK pathway was identified as being involved in NET formation through activation of NADPH oxidase (Hakkim et al. 2011). But numerous studies then reported ROS-independence of NET formation, based on both in vitro (Gabriel et al. 2010;Byrd et al. 2013;Pijanowski et al. 2013;Mejía et al. 2015) and in vivo studies (Chen et al. 2012;Kolaczkowska et al. 2015;Barth et al. 2016a) utilizing NADPH oxidase inhibitors and knockout mice. This discrepancy in the data on ROS involvement in NET release is difficult to explain at this stage. It might result from the experimental milieu or the nature of the NET-inducing factors, as not all agents activate NADPH oxidase (Farley et al. 2012). The latter study reports an interesting discrepancy: PMA, but not platelet-activating factor (PAF), generated ROS, yet the NADPH oxidase inhibitor (DPI) reduced NET release induced by both PMA and PAF. These data indicate that, once again, results from PMA studies should be carefully reviewed unless supported by data from concomitant studies applying pathogen- or immune response-related agents to induce NETs. Most importantly, however, the study suggests an interesting explanation of ROS involvement in NET formation, as DPI also inhibits a range of flavoenzymes including mitochondrial oxidase and nitric oxide synthase (Stuehr et al. 1991;Li and Trush 1998), which could "substitute" for phagosomal ROS. Thus, in some circumstances, NET formation might depend on phagosomal ROS (NADPH-dependent; e.g., Fuchs et al. 2007), on mitochondrial ROS (as shown in Lood et al. 2016), on NO (as reported in Patel et al. 2010) or on none of these. It is also of note that the only family of endogenous inhibitors of NETs known to date does not inhibit ROS formation and instead blocks PAD4-dependent citrullination (see "NET formation in neonates").
Another mechanism putatively involved in NET formation is autophagy. This process is critical for the turnover of damaged organelles and proteins during homeostasis but, during infection, plays a role in the killing of phagocytosed pathogens and the down-regulation of inflammasome activation (Birmingham et al. 2006;Jabir et al. 2014). The majority of studies showing involvement of autophagy in NET formation applied pharmacological inhibitors of key pathways or molecules involved in this process which, however, also inhibited ROS (Remijsen et al. 2011;McInturff et al. 2012;Kenno et al. 2016;Ullah et al. 2017). Recently, the involvement of autophagy in NET release was studied in transgenic mice with atg5 (whose product is critical for autophagosome formation) conditionally deleted in either neutrophils or eosinophils (Germic et al. 2017). The study ruled out a role of autophagy in NET formation. A similar controversy concerns the involvement of necroptosis (a programmed, necrosis-like cell death), which is well illustrated by two contradictory papers recently published head-to-head (Amini et al. 2016;Desai et al. 2016).
However, there are two enzyme-based mechanisms of NET formation that have been confirmed to operate independently of in vitro or in vivo settings and of the inducing agents. These involve NE and peptidylarginine deiminase 4 (PAD4) (Fig. 1). PAD4 belongs to the group of Ca2+-dependent enzymes and is located in the nucleus and granules of neutrophils (Asaga et al. 2001;Nakashima et al. 2002;Kearney et al. 2005). The enzyme catalyzes the citrullination of histones H2A/H3/H4, a post-translational modification converting methylarginine residues to citrulline to form a carbonyl group (Arita et al. 2006;György et al. 2006). The conversion of positively charged methylarginine to the neutral side chains of citrulline affects protein (histone)-DNA stabilization and leads to chromatin decondensation and NET release (Neeli et al. 2008;Wang et al. 2009). Studies on PAD4 knockout mice (PAD4−/−) showed an impaired ability to form NETs in comparison to WT animals, independently of the stimulus, be it LPS or ionomycin (Martinod et al. 2013). Similarly, a PAD4 inhibitor (Cl-amidine) also diminishes NET release both in vitro (Li et al. 2010;Kusunoki et al. 2016) and in vivo (Knight et al. 2013, 2014). However, PMA-induced NET formation was recently reported not to involve histone deimination (no citrullinated H3 histones were detected in PMA-induced NETs), which was explained by the fact that PMA activates the PKCα isoform that inhibits PAD4, while it is the PMA-unresponsive PKCζ that activates deimination (Neeli and Radic 2013). Nevertheless, there are also studies reporting deposition of citrullinated histones in PMA-stimulated NETs, although to a lower degree than upon other inducers (Martinod et al. 2016;van der Linden et al. 2017).
Another enzyme required to form NETs is a serine protease, neutrophil elastase. The proposed mechanism of its action is the specific degradation of histones, which destabilizes chromatin (Papayannopoulos et al. 2010). In addition, blockade of NET formation was demonstrated in vivo in NE KO mice infected with Gram-negative bacteria (Papayannopoulos et al. 2010;Farley et al. 2012) or Gram-positive bacteria (Kolaczkowska et al. 2015). Also, the use of an NE inhibitor resulted in the inhibition of C. albicans-induced NET formation (Papayannopoulos et al. 2010). However, Martinod et al. (2016) showed that numerous neutrophils derived from NE−/− mice ejected NETs upon in vitro ionomycin stimulation, while 40% of them did not (Martinod et al. 2016). Interestingly, during mouse sterile thrombosis, only 20% fewer NETs were produced by NE KO neutrophils (Martinod et al. 2016). This indicates that both PAD4 and NE are involved in NET formation but might be more or less redundant depending on the disease state and/or stimuli. For example, during S. aureus sepsis, NE−/− neutrophils did not produce NETs while some PAD4−/− PMNs (c. 20%) did (Kolaczkowska et al. 2015), whereas during deep vein thrombosis, 80% of NE−/− neutrophils released NETs (Martinod et al. 2016) but no such structures were cast by PAD4−/− cells (Martinod et al. 2013). These findings reflect the diversity of NET types. The traps seem to vary not only in their appearance, involved molecules and pathways but also in the consequences for the producing cells. The first report on the existence of NETs presented many arguments supporting that the trap-releasing cells remain viable (Brinkmann et al. 2004), but subsequent studies reported the process to be lethal (Fuchs et al. 2007) and eventually the term NETosis was coined (Steinberg and Grinstein 2007). However, Yipp et al. (2012) showed by means of intravital microscopy of S. aureus-inflamed skin that anuclear neutrophils that have released NETs remain alive and keep moving and phagocytosing. This seems more economical and efficient than beneficial suicide and was detected in the milieu of the live organism. Subsequently, viable NET-forming neutrophils were also reported in in vitro settings (Yousefi et al. 2009;Pilsczek et al. 2010). Most probably, the two modes represent another set of parallel mechanisms by which NETs are released, either upon cell rupture (Fuchs et al. 2007) or via vesicular transport to the cell surface (Pilsczek et al. 2010).
We still do not know how to interpret this variety of mechanisms, or whether reported NETs are always "NETs", as adequate, multicomponent detection is key, yet no gold standard exists. This issue is even becoming a topic of open discussion, with "healthy criticism" such as that of Nauseef and Kubes (2016).
NETs and age of neutrophils
Immature neutrophils versus mature neutrophils
Neutrophils arise and mature in the bone marrow. Maturation consists of a mitotic stage (myeloblasts, promyelocytes and myelocytes) and a postmitotic stage (metamyelocytes, band neutrophils and mature segmented neutrophils) (Borregaard 2010;Amulic et al. 2012;Lahoz-Beneytez et al. 2016). Neutrophil secretion from the bone marrow into the circulation is controlled by circadian oscillations (Casanova-Acebes et al. 2013) and depends on the interactions between the CXCL12 chemokine and its CXCR4 receptor (retention of neutrophils in the bone marrow) and between the CXCL1 ligand and the CXCR2 receptor (release of neutrophils into the blood) (Martin et al. 2003;Eash et al. 2010). In circulation, neutrophils age; the human neutrophil half-life is less than 1 day, about 19 h (Lahoz-Beneytez et al. 2016), and about 12 h in mice (Pillay et al. 2010a). Expression of CXCR4 increases on aging cells and causes neutrophils to return to the bone marrow, where they are removed by macrophages (Furze and Rankin 2008;Casanova-Acebes et al. 2013), but the cells can also be removed in the spleen and the liver (Shi et al. 2001;Suratt et al. 2001). In turn, this leads to the secretion of a correspondingly small number of mature, but not immature (Bruegel et al. 2004;Nierhaus et al. 2013), neutrophils from the bone marrow into the circulation (Semerad et al. 2002). As shown recently, the process is controlled by the gut microbiota (Zhang et al. 2015) and most probably also by exosomes, whose numbers and content change during aging (Prattichizzo et al. 2017). If neutrophils are recruited to a site of inflammation during their life, their life-span is prolonged and their death by apoptosis is delayed (Simon 2003;Milot and Filep 2011). During inflammation, especially systemic inflammation, both mature and immature neutrophils are recruited from the bone marrow (Drifte et al. 2013). Interestingly, a recent study showed that the first neutrophils to arrive at the site of inflammation are aged neutrophils, followed by non-aged cells (Uhl et al. 2016). The fact that aged cells disappear from circulation neatly explains why fresh cells are recruited into the blood from the bone marrow in the course of inflammation.
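As a quantitative aside, the half-lives quoted above translate directly into how fast a circulating cohort shrinks. The sketch below illustrates this under the simplifying assumption of first-order (exponential) clearance, which idealizes the real kinetics and is not a claim about the cited studies' models.
```python
# Illustration of what the quoted circulating half-lives imply, assuming
# idealized first-order (exponential) clearance: N(t) = N0 * 0.5**(t / t_half).

def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    """Fraction of a labeled neutrophil cohort still circulating after t hours."""
    return 0.5 ** (t_hours / half_life_hours)

# Half-lives quoted in the text: ~19 h (human), ~12 h (mouse).
for species, t_half in [("human", 19.0), ("mouse", 12.0)]:
    print(f"{species}: {fraction_remaining(24, t_half):.0%} of the cohort remains after 24 h")
# -> human: ~42% remains; mouse: ~25% remains
```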
Immature and mature neutrophils differ in their gene expression, the former having higher expression of genes controlling differentiation and granular protein synthesis, including NE, MPO and BPI, whereas genes controlling chemotaxis or apoptosis are down-regulated in immature neutrophils (Martinelli et al. 2004). A comparison of human immature (bone marrow) and mature (blood) neutrophils in their capacity to produce NETs upon IFN-α/γ priming and subsequent stimulation with complement factor C5 showed that only the mature neutrophils released the traps (Martinelli et al. 2004). Other studies revealed diminished yet detectable NET release by immature neutrophils. In the study by Taneja et al. (2008), circulating neutrophils during sepsis consisted of c. 35% immature cells (vs. 5% in healthy volunteers), and similar results were obtained by Pillay et al. (2010b). The immature neutrophils had lower rates of phagocytosis and Ca2+ signaling (Taneja et al. 2008) and of antimicrobial recognition and killing and ROS generation (Pillay et al. 2010b). Also, in patients with sterile burn injury, immature neutrophils were abundant in circulation and these patients had higher levels of circulating free DNA (cfDNA) and citH3, clinical markers of systemic NET formation (Hampson et al. 2017). This was especially apparent at times when immature neutrophils dominated in circulation. However, when neutrophils were isolated from blood and stimulated ex vivo with PMA, the cells (a mixture of mature and immature neutrophils) of patients with thermal injury released fewer NETs (Hampson et al. 2017).
There is also a report of normal production of NETs by human immature neutrophils present in circulation, which comes from studies on bone marrow transplantation (Glenn et al. 2016). Important, although indirect, information on NET production by immature neutrophils comes from studies of diseases during which the undeveloped cells are present in either blood or tissues. One such example is SLE, as lupus patients display varying degrees of neutrophil maturation (Denny et al. 2010;Villanueva et al. 2011). In particular, two neutrophil subpopulations, low-density granulocytes (LDGs) and high-density neutrophils, have been identified in the course of the disease. The LDGs do not carry any specific markers identified to date, but their nuclear morphology (c. 40% of the cells have lobular, band or myelocyte-like nuclei vs. c. 60% with segmented nuclei) suggests that many of them represent the immature phenotype of neutrophils (Denny et al. 2010). The cells have higher expression of azurophilic granule genes, including those encoding NE and MPO, exhibit increased spontaneous NET production and overall release more traps. A similar subset of neutrophils has also been described in the course of psoriasis, and psoriatic LDGs likewise tend to form NETs without any stimulation, in contrast to control or psoriatic mature neutrophils. Low-density neutrophils, consisting of both immature and mature neutrophils, have also been described in cancer (Sagiv et al. 2015). Unlike high-density neutrophils, the low-density cells have a pro-tumor phenotype (i.e., decreased chemotaxis, phagocytosis and ROS production). The two phenotypes of tumor-associated neutrophils (TANs), i.e., high-density and low-density neutrophils, are also termed N1 and N2, respectively (Fridlender et al. 2009). The N2 phenotype dominates in the presence of TGF-β but is diminished by IFN-β (Fridlender et al. 2009;Andzinski et al. 2016). It was shown that blood neutrophils collected from tumor-bearing mice in which the N2 phenotype was suggested to dominate (IFN-β KOs) produced fewer NETs, either spontaneously or upon ex vivo PMA stimulation (Andzinski et al. 2016). These, however, were not TANs, and the exact phenotype of the circulating neutrophils was not examined; nevertheless, immature neutrophils present in the course of a disease might not always spontaneously release higher amounts of NETs. In addition, the tumor environment is unique, and thus one can speculate that NET release increases the anti-tumoral response, as NET components might damage tumor cells. NETs could also function as scaffolds for tumor antigens, facilitating their uptake by DCs and macrophages. On the other hand, NETs can trigger metastasis; e.g., high-mobility group box 1 (HMGB1) released from NETs activates the TLR9-dependent pathway in cancer cells, promoting their adhesion, proliferation, migration and invasion (Berger-Achituv et al. 2013;Tohme et al. 2016). Similar results came from a study of immature and mature granulocytes in leukemic patients (Lukášová et al. 2013). In this study, only data on PMA-induced NETs were reported, and acute myeloid leukemia (AML) granulocytes were shown not to produce the traps, as opposed to granulocytes isolated from the peripheral blood of healthy donors (Lukášová et al. 2013). The immature cells expressed heterochromatin protein 1 γ (HP1γ) and histone H3 dimethylated at lysine 9 (H3K9me2). The two proteins interact to preserve the spreading of heterochromatin, and HP1γ is absent in mature granulocytes.
Terminally differentiated mature neutrophils are characterized by tightly condensed chromatin and gene repression, whereas immature cells are not (Lukášová et al. 2013). Lukášová et al. (2013) hypothesized that chromatin may need to be condensed to facilitate PAD4 action, which would explain why NET formation is weaker in immature cells.
One has to bear in mind that the majority of data on NET formation by immature neutrophils come from ill patients (with sepsis, SLE, psoriasis or cancer). Nevertheless, many of them, although not all, report spontaneous release of the traps by immature neutrophils (if this aspect was studied/reported) and diminished, or at least not further increased, production of NETs upon stimulation (mostly with PMA) (Fig. 2). In addition, at least one study reported concomitantly elevated markers of NETs in circulation. Considering all the above data, one might hypothesize that immature neutrophils present in blood tend to spontaneously release NETs, hence the presence of their markers in circulation, and that, when isolated and stimulated ex vivo to produce the traps, they fail to form them. This is due either to an exhausted phenotype of the cells or to the fact that all neutrophils with a potential to release NETs have already done so in the vasculature, especially as only about 25% of neutrophils release NETs (Nauseef and Kubes 2016).
Aged or senescent neutrophils
Not much is known about NET production by senescent neutrophils. Aging neutrophils up-regulate CXCR4 and progressively lose CD62L (L-selectin) expression, which facilitates their re-direction to the bone marrow (Zhang et al. 2015). However, they exhibit enhanced expression of adhesion molecules (e.g., Mac-1, ICAM-1) and TLR4 (Zhang et al. 2015), which is in line with their rapid recruitment to the site of inflammation, ahead of mature, non-aged neutrophils (Uhl et al. 2016). This aging phenotype is regulated by the microbiota: it is lost in mice treated with broad-range antibiotics or in germ-free animals but restored by application of LPS or fecal transplantation (Zhang et al. 2015). The CD62L(lo) CXCR4(hi) aged neutrophils are significantly more numerous in Selp−/− mice (P-selectin KOs) or anti-P-selectin-treated animals (Zhang et al. 2015;Uhl et al. 2016). When NET production was studied in the latter mice, neutrophils stimulated ex vivo with LPS showed dramatically increased trap formation. This was further confirmed in an endotoxemic model by intravital imaging of NETs in the liver vasculature (Zhang et al. 2015). Therefore, in the case of senescent neutrophils, the ex vivo and in vivo data clearly correlate, indicating an enhanced capacity to release NETs, in line with the pro-inflammatory phenotype of these cells (Fig. 2). However, no data on human senescent neutrophils are available.
NETs and age of individuals
The immune system matures during fetal development and then declines as we age. These facts have important impacts on susceptibility to infection and the chances of surviving it; as such, it is also important how NET release changes with age, especially as the world is undergoing a demographic shift with low birth rates and aging populations. Independently of the age of mothers, not only are fewer babies being born but many of them are born preterm and are therefore more likely to become ill or die, as preterm infants are more vulnerable to infection (Urquhart et al. 2017). In line with this, the risk of severe sepsis in neonates increases dramatically with decreasing gestational age. On the other hand, the global population is aging and the number of individuals older than 65 years will double by 2050. Elderly people are more susceptible to infection due to inflamm-aging or immunosenescence, i.e., the age-related dysfunction of the immune system, but they also develop chronic inflammatory states (Boe et al. 2017).
NET formation in neonates
The immune system plays a very important role during pregnancy, with the purpose of protecting the mother and the developing fetus (Mor et al. 2011). Pregnancy is a period characterized by modulation of the immune system associated with both the course and stage of pregnancy, as well as exposure to pathogens. Moreover, pregnancy is characterized by a pro-inflammatory phase (first trimester), an anti-inflammatory phase (second trimester) and, by the end of pregnancy, a return to the pro-inflammatory phase (Mor and Cardenas 2010). Pregnant women have an increase in the total number of leukocytes, which correlates with the course of pregnancy (the highest level is in the third trimester), of which the most abundant cells are circulating neutrophils (Crocker et al. 2000). These neutrophils display a decreased respiratory burst during the second and third trimesters; however, this activity returns to normal within 7 weeks post-partum (Crocker et al. 2000). With respect to NETs, increased levels of cfDNA (nucleosome/MPO complexes) are observed in pregnant women's serum compared to non-pregnant women (Sur Chowdhury et al. 2016). Interestingly, the tendency to form such complexes increases with the duration of pregnancy. Nevertheless, the highest serum cfDNA level is observed in preeclamptic women, as opposed to women with normal pregnancy and non-pregnant women (Lo et al. 1999;Sur Chowdhury et al. 2016). Moreover, the level of both fetal and maternal circulating plasma DNA in preeclamptic women correlates with the degree of disease severity (Zhong et al. 2001).
Fig. 2. Neutrophil maturation- and age-dependent changes in neutrophil extracellular trap (NET) formation. Potential to form NETs is marked with − and +, where + < ++ < +++; −/+ indicates that, for immature neutrophils stimulated ex vivo, some studies reported a lack of NET formation (−) whereas others reported some, although weak, NET release (+). The phenotype of mature versus aged neutrophils is defined by high or low expression of CXCR4 and CD62L. Immature neutrophils were mostly defined by their nuclear morphology. Reference data are included and discussed in the main text.
The fetus, which is located in the uterus, develops its own immune system (Dauby et al. 2012). After birth, both preterm (<37 weeks) and term (37-42 weeks) neonates are characterized by a tolerogenic immune response due to the reduced number of immune cells, including neutrophils and lymphocytes, which increase in number over the first weeks of life (Walker et al. 2011;Nguyen et al. 2016). In the developing human fetus, a small number of neutrophils begin to appear in the clavicular marrow 11-12 weeks post-conception, with the majority observed after 13-15 weeks (Slayton et al. 1998a, b). However, neutropoiesis starts earlier in the fetal liver (around week 5; Slayton et al. 1998a) and the yolk sac (around week 3; Sperandio et al. 2013). Neutrophils of a mature individual display the capacity to migrate to the site of inflammation and to effectively fight pathogens through phagocytosis or degranulation (Kolaczkowska and Kubes 2013). In term neonates, phagocytosis and degranulation are as efficient as in adults, but this is not the case in preterm neonates (Bektas et al. 1990;Nupponen et al. 2002). Moreover, both preterm and term neonates show impaired migration of neutrophils to the inflammatory focus (McEvoy et al. 1996;Nussbaum et al. 2013). Hence, the young organism cannot defend itself as efficiently as an adult, and neonates are therefore highly susceptible to infections, including sepsis, which directly increases morbidity and mortality (Gardner 2009;Lawn et al. 2010). Makoni et al. (2016) suggested that the impairment of neonatal neutrophils may be due to the increased number of developmentally immature neutrophils at birth rather than to other abnormalities such as the expression of surface adhesion molecules, which is low at birth but increases over time (Carr et al. 1992;Makoni et al. 2016). Another reason could be a dampening of immunity to prevent the side effects that might result from its overactivity.
Furthermore, NET formation in preterm or term infants/neonates has been reported to be weaker (Fig. 3). Neutrophils isolated from infants/neonates displayed impaired NET production after stimulation with LPS, PAF and fMLP, in contrast to neutrophils collected from adults (Yost et al. 2009;Lipp et al. 2017), despite the presence of functional receptors recognizing these molecules and despite uncompromised phagocytosis. Moreover, when bacteria (E. coli, S. aureus) or PMA were used to induce NETs, neonatal neutrophils did not form them (Yost et al. 2009). On the other hand, Lipp et al. (2017) reported that cells of term infants release some NETs in response to PMA and that those of preterm babies release significantly fewer of these structures. Importantly, the defect of NET formation by neutrophils of preterm and term neonates was not rescued by a ROS donor (glucose oxidase) (Yost et al. 2009). Also, the study by Byrd et al. (2016) showed that NETs induced in neonates in response to a combination of fibronectin (Fn) with purified fungal β-glucan, or of Fn with C. albicans hyphae, are ROS-independent, although in this case NETs were formed normally. Thus, neonatal neutrophils seem to be sensitive to fungal stimulation but not necessarily to bacterial components (Byrd et al. 2016). However, in contrast to Lipp et al. (2017), Marcos et al. (2009) showed that neonatal neutrophils can cast NETs upon LPS (as well as numerous other TLR agonists), although at first the signal is weaker (Marcos et al. 2009). Direct comparison of the two studies indicates that neonatal neutrophils release NETs but require a longer time for maximal formation. In fact, further studies revealed that even the most prematurely born infants gain the capacity to release NETs by day 3 post-birth, and the maximal capacity to cast NETs is achieved between days 3 and 14 of life. This characteristic seems also to be present in other mammals, as the same phenomenon was observed in pigs (Nguyen et al. 2016). Also, neutrophils of 21-day-old mice produced fewer NETs than cells of 60-day-old animals (Barth et al. 2016b).
In the search for mechanisms of impaired/delayed NET formation in neonates, a family of endogenous inhibitors of NETs was discovered. The family, called nNIF-related peptides (NRPs) after the first identified peptide (NET-inhibitory factor, nNIF), also consists of the cancer-associated SCM recognition, immune defense suppression and serine protease protection peptide (CRISPP) and a 44-amino-acid carboxy-terminal cleavage fragment of A1AT (α1-antitrypsin), A1ATm358. The levels of the inhibitors rapidly decrease in the infant's circulation after delivery, which might explain why, in some studies, differences in NET formation were reported between preterm and term infants. The inhibitors were detected in different tissues/body fluids, namely umbilical blood (nNIF), placenta (A1ATm358) and plasma (CRISPP-related peptides), underlining their importance. They also inhibited NET formation induced by bacteria (S. aureus), a damage-associated molecular pattern (DAMP; heme) and PMA, but did not destroy already-formed traps. The mechanism of their action is also very intriguing, as NRPs affect neither ROS production nor NE activity (although, after entering the cell, they localize in its close proximity) but inhibit PAD4 and histone citrullination. Importantly, from a therapeutic point of view, the injection of either nNIF or CRISPP into adult mice infected with E. coli or challenged with LPS prevented the formation of NETs and decreased mortality.
Why would such inhibitors function only in fetuses/neonates? nNIF levels were negligible in healthy adults and undetectable in the plasma of adult individuals with chronic inflammatory disorders. Possibly this is because, during pregnancy, NET-inducing stimuli are present/generated at the maternal-fetal interface (Gupta et al. 2005;Marder et al. 2016;Mizugishi and Yamashita 2017) and thus excessive formation of the traps could cause inflammatory pathology in the fetomaternal environment. But then, shortly after birth, the inhibitors are degraded or neutralized by unknown means. Intriguingly, the latter correlates with the time when resident microbiota inhabit the human infant, and the microbiota was indeed shown to regulate granulocytosis and host resistance to sepsis in the neonate (Deshmukh et al. 2014). The impact of the microbiota on the life-span and functioning of neutrophils in adulthood has just been established (see "NETs and age of neutrophils").
NET release by elderly individuals
An aging organism, like the newborn, is susceptible to a variety of inflammatory pathologies, leading to increased morbidity due to impaired immune function (Collerton et al. 2012;Tseng et al. 2012;Boe et al. 2017). Therefore, the term immunosenescence has been introduced. Immunosenescence, or inflamm-aging, is associated with a low-grade, chronic, pro-inflammatory status resulting from an imbalance between pro-inflammatory agents and anti-inflammatory factors (Franceschi et al. 2007;Collerton et al. 2012). It is characterized by elevated levels of pro-inflammatory cytokines, including IL-6 and TNF-α, under physiological conditions (Bruunsgaard et al. 2000;Krabbe et al. 2004;Ferrucci et al. 2005). One hypothesis holds that this is because constant immune challenges over the lifetime lead to a higher basal activation state of cells of the innate immune system (Fulop et al. 2017). In addition, a recent study reports that these age-associated changes depend on the microbiota (Thevaranjan et al. 2017). On the other hand, the elderly have a weaker response to vaccination (Goodwin et al. 2006;Sasaki et al. 2011), which might result from an impaired ability to present antigens to T cells, the latter leading to a dysfunctional immune response (De Martinis et al. 2004;Plowden et al. 2004;Wong and Goldstein 2013).
Neutrophils of elderly individuals are characterized by impaired bactericidal activity (Wenisch et al. 2000), chemotaxis (Fulop et al. 2004), phagocytosis (Butcher et al. 2001;Simell et al. 2011) and a decreased ability to perform a respiratory burst (Wenisch et al. 2000). However, some parameters are either preserved (chemokinesis) or up-regulated (degranulation). These changes are believed to be reflected in the behavior of the cells in aged individuals: they migrate aberrantly (altered chemotaxis/chemokinesis ratio), so they can spread more widely than cells from younger individuals, and, because they release more protease (as shown for NE; Sapey et al. 2014), possibly to facilitate migration through the ECM, more collateral damage can occur.
Fig. 3. Impact of individual (human) age on neutrophil extracellular trap (NET) release. Graphical representation of neutrophil capacity to produce NETs upon stimulation. In the case of neonates, neutrophil potential to release the traps changes over time. Potential to form NETs is marked with − and +, where + < ++; −/+ indicates that, in the case of newborn infants, some studies reported a lack of NET formation (−) whereas others reported some NET release (+). The presence of endogenous NET inhibitors shortly after birth is indicated by circles with a diagonal line. Reference data are included and discussed in the main text.
Interestingly, high levels of NE, along with elevated pro-inflammatory cytokine levels, are also characteristic of the low-grade inflammatory state accompanying obesity (Talukdar et al. 2012). In fact, it is now recognized that such an inflammatory state connects aging, metabolic syndrome and cardiovascular disease (Guarner and Rubio-Ruiz 2015).
In physiological conditions, the numbers of neutrophils in the bone marrow are similar between old and young mice. However, during inflammatory conditions, such as sepsis induced by cecal ligation and puncture (CLP), fewer neutrophils are observed in the peritoneal lavage of old versus young mice (Xu et al. 2017). Thus, while the pro-inflammatory state is apparent in healthy aged organisms, in the course of inflammation the immune response seems to be dimmed. Although not many studies on NET formation by elderly individuals have been undertaken, they all consistently reported weaker production of the traps, in line with the data on other neutrophil activities (Fig. 3). This was observed when the cells were first primed with TNF-α and then activated to form NETs with LPS or IL-8 (Hazeldine et al. 2014), or stimulated with Pam3CSK4, a TLR2 ligand (Xu et al. 2017), S. aureus (Tseng et al. 2012) or mitochondrial DNA, a DAMP (Itagaki et al. 2015). Notably, expression of nucleases by S. aureus (vs. nuclease-null strains) led to increased bacterial dissemination in young but not old mice, suggesting that defective NET formation in elderly mice permitted both nuclease- and non-nuclease-expressing S. aureus to disseminate, altogether leading to more invasive S. aureus infection (Tseng et al. 2012). Interestingly, neutrophils isolated from elderly periodontitis patients also released fewer NETs than those from young patients, but this was not observed in healthy age-matched controls (Hazeldine et al. 2014). In the studies applying TLR2 and TLR4 ligands, neutrophils collected from elderly people had normal expression of the respective receptors required for cell activation but diminished ROS production (Hazeldine et al. 2014;Xu et al. 2017), and the latter was thus proposed as a mechanism of the lower NET release. However, Hazeldine et al. (2014), as well as Tortorella et al. (2004), showed that there is no impairment of p38 mitogen-activated protein kinase (p38 MAPK) activity, the signaling cascade activated by ROS, in TNF-α-primed neutrophils of either elderly or younger individuals. Furthermore, PMA, a strong chemical inducer of ROS, induced similar quantities of NETs in both age groups (Tortorella et al. 2004;Hazeldine et al. 2014).
Another possible mechanism leading to diminished NET formation in aged individuals is impaired autophagy. Although the involvement of autophagy in NET formation is controversial (see "On how NETs are created"), its impairment was suggested to be co-responsible, along with ROS, for the weak NET release by neutrophils of elderly individuals (Xu et al. 2017).
In particular, a defect of Atg5, involved in autophagosome formation, was singled out as contributing to the dimmed NET release; instead of forming NETs, neutrophils underwent apoptosis (Xu et al. 2017).
There is one report of an increased capacity of neutrophils from aged individuals to produce NETs. This observation comes from studies on aortic lesions in atherosclerotic mice and is strengthened by data from isolated neutrophils activated to produce NETs with 7-ketocholesterol, an athero-relevant stimulus and the most abundant oxysterol in humans (Wang et al. 2017). Such an effect resulted from increased mitochondrial oxidative stress (mitoOS), that is, mitochondrial and not cytosolic ROS generation; the former is indeed associated with atherosclerosis during aging (Vendrov et al. 2015). Considering that the numbers of inflammatory neutrophils were the same in aged and young mice, that the young animals had smaller lesions and that NET formation was mitoOS-dependent, the data indicate intrinsic changes in the neutrophils of aged subjects. This experimental setting differs from the other studies on NET formation by neutrophils of elderly individuals in its clear requirement for mitochondrial, rather than NADPH oxidase-dependent, ROS.
From what we know to date, neutrophils of elderly subjects generally cast fewer NETs (Fig. 3). No data so far indicate that this is due to active inhibition of their formation, as in neonates; rather, it results from dysregulated neutrophil activity. It is tempting to speculate that one of the mechanisms involved might be connected to the increased release of NE via degranulation, as this enzyme is critical for NET formation. For those NETs that require NADPH oxidase-dependent ROS, the diminished respiratory burst of neutrophils from elderly subjects can provide an additional explanation. However, the observation that increased mitochondrial ROS can in fact increase NET formation by neutrophils of aged individuals suggests that the cells do not lose the capacity to release NETs per se and that the deficit is rather due to upstream dysfunctional pathways.
Conclusions
The phenotype of any given cell reflects either its maturation state or the impact of extrinsic factors and manifests itself in changes in cell morphology and in the expression pattern of intracellular and extracellular molecules but, foremost, in its (altered) functioning. This is also true for neutrophils and their capacity to form NETs. As NETs are nowadays a focus of biomedical research, mostly due to the side effects of their formation, the search for their inhibitors or removing agents dominates the field. The discovery of endogenous NET inhibitors owes to studies on neonate neutrophils. This is especially promising in light of the finding that immature neutrophils, which are more abundant in numerous diseases in which NETs play a pivotal role, release the traps spontaneously. Obviously, the cells do not behave uniformly in all conditions, and studies on NETs are also technically challenging, as they mostly rely on either the detection of singular NET components in body fluids or the ex vivo stimulation of isolated neutrophils. Although, in mouse studies, these limitations can be overcome with intravital microscopy, which detects the traps directly in blood vessels or tissues of live animals, this technique cannot be applied to human studies. NET inhibition can also be detrimental: at early stages of sepsis, the structures help to contain dissemination of infection, and it is at later time points that their persistent presence causes collateral damage. Thus, NET inhibition or removal should be timed appropriately, which, however, is difficult to control. Now, a new factor has to be taken into consideration when it comes to the control of NET formation and its consequences, namely the presence of neutrophils of certain ages (immature-mature-senescent) as well as the age of the individuals.
Enhancement of the Mechanical Properties of Basalt Fiber-Wood-Plastic Composites via Maleic Anhydride Grafted High-Density Polyethylene (MAPE) Addition
This study investigated the mechanisms, using microscopy and strength testing approaches, by which the addition of maleic anhydride grafted high-density polyethylene (MAPE) enhances the mechanical properties of basalt fiber-wood-plastic composites (BF-WPCs). The maximum values of the specific tensile and flexural strengths are achieved at a MAPE content of 5%–8%. The elongation increases rapidly at first and then continues to increase slowly. The nearly complete integration of the wood fiber with the high-density polyethylene upon MAPE addition to the WPC is examined, and two models of interfacial behavior are proposed. We examined the physical significance of both interfacial models and their ability to accurately describe the effects of MAPE addition. The mechanism of formation of the Model I interface and of the integrated matrix is outlined based on the chemical reactions that may occur between the various components as a result of hydrogen bond formation, or based on the principle of compatibility resulting from similar polarity. The Model I fracture occurred on the outer surface of the interfacial layer, visually demonstrating the compatibilization effect of MAPE addition.
fiber, are two distinctly different materials with unique physical and chemical properties [27,28]. The interface between these two materials can be significantly improved by the addition of a compatibilizer [16,29-33]. This study investigated the effects of maleic anhydride grafted high-density polyethylene (MAPE) on the mechanical properties of basalt fiber-wood-plastic composites (BF-WPCs) and studied the mechanisms of those effects by examining the interfacial fracture morphology and the physical and chemical interactions between the constituents. An approach is suggested for the development of high-strength, lightweight WPC structural materials through the application of bionic theory [34,35], and the results suggest a new strategy for the protection of precious ecological resources such as wood.
Experimental Design
Based on the results of preliminary experiments [26], the BF length was chosen to be 6 mm, and the BF content was set at 20 wt % of the total amount, comprising the BF and WPC fractions. The samples were divided into six groups according to the amount of MAPE added, which ranged from a weight ratio of 0% to 12% in increments of 3% (denoted 20%-0% to 20%-12%, abbreviated as 0% to 12%). The pure WPC sample with no BF or MAPE is denoted 0%-0%.
Sample Preparation
The BF and WPC were mixed for 10 min using a two-roll mill, with the front roll set to 160 °C and the back roll to 170 °C. The mixture was then immediately hot-molded at 180 °C with a plate vulcanizer (type XLB-25D, Shanghai First Rubber Machinery Factory), preheated for 5 min and finally held at a pressure of 8 MPa for 10 min. Plates of the BF-WPC were then prepared for tensile and flexural testing. This approach differs from the previously published method [26], in which the sample was cooled for 12 h and then crushed.
Mechanical Property Testing
The samples were prepared for tensile and flexural testing in accordance with GB/T 1040.2-2006 and GB/T 1449-2005, and the tests were performed using an electronic universal testing instrument (REGER-200A, produced by Shenzhen REGER Instrument Co., Ltd. (Shenzhen, China)). The gauge length was 80 mm for the tensile test and 60 mm for the flexural test. All test rates were 10 mm/min. Five sample types were tested, and each sample type was analyzed three times, for a total sample size of 15. The tests were executed three times, in April and December of 2011 and in February of 2012.
Fractography
The fracture surfaces (fractography) were investigated with scanning electron microscopy (SEM: JSM-5610LV) after the sample sections had been vacuum-coated with gold.
Figure 1 shows the average values measured in each of three independent experiments (marked separately with square, triangular and circular symbols). The overall average values and the dispersion of the experimental results are also shown in Figure 1 (marked with star symbols). The solid line represents the regression curve of the overall average of the experimental results, while the overall average of the pure WPC sample without BF is shown as a dotted line for comparison. Comparison with the dotted line in Figure 1 demonstrates that MAPE addition improved the tensile and flexural strengths of the BF-WPC and also increased the elongation compared with the WPCs containing short BF but no MAPE. Several differences can be observed among the three independent experiments, with the solid lines in Figure 1 representing the average values of 15 data points (these results are discussed in further detail below). The mechanical properties of the BF-WPC with MAPE increased drastically with increasing MAPE content and approached their maximum values at MAPE contents below 6%. Given the scatter in the experimental data, Student's t-test was performed to determine whether MAPE addition led to a significant change in the strength indexes, comparing the sample with no MAPE (0%) to the sample with near-maximal mechanical properties (6%). Table 1 presents the results of the t-test, interpreted according to the criterion P(t0) ≤ 0.05; significant results are labeled with a star (*). Compared with the sample with no MAPE (Table 1), the indexes of the other samples increased significantly, except in the case of the sample with 3% MAPE content. Compared with the maximum observed at 6% MAPE content, the tensile strength decreases significantly and the remaining two indicators do not change as the MAPE content increases above 6%. Thus, the maximum enhancement of the mechanical properties is achieved at a MAPE content of approximately 6%, with no obvious further effect at higher MAPE contents. The maximum values of the tensile and flexural strengths were more than one-third higher than those of the WPC without MAPE and more than one-half higher than those of the pure commercial WPC. When the MAPE content exceeded 6%, the tensile and flexural strengths decreased somewhat with increasing MAPE content, although the elongation improved slightly. These results are consistent with the previous report by Chen et al. [36], in which the tensile and flexural strengths of the WPC reached maximum values at a MAPE content of 6%-8%.
Because the density of the BF (approximately 2600-3000 kg/m³) is significantly higher than that of the WPC (approximately 1 t/m³), the densities of the samples were measured to determine whether the mechanical properties of the experimental WPC had genuinely improved. The density of the samples without BF is 1.01 (g/cm³), and BF and MAPE addition increases the density to 1.21, as determined by averaging measurements from five samples with 6% MAPE content. Therefore, the specific strengths were calculated for both the pure WPC and the regression-curve values at a MAPE content of 6%.
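For readers who wish to reproduce the significance screening in outline, the sketch below runs a two-sample Student's t-test with the same decision criterion, P(t0) ≤ 0.05. The strength values are invented placeholders (the study's raw measurements are not reproduced here), and the use of scipy.stats is our choice of tooling, not the authors'.
```python
# Sketch of the significance screening described above: a two-sample t-test
# comparing a strength index at 0% vs. 6% MAPE content. The values below are
# hypothetical placeholders, not the measurements reported in the paper.
from scipy import stats

tensile_0pct = [28.1, 27.4, 29.0, 26.8, 28.5]  # hypothetical MPa values, 0% MAPE
tensile_6pct = [37.9, 38.6, 36.8, 39.2, 37.1]  # hypothetical MPa values, 6% MAPE

t_stat, p_value = stats.ttest_ind(tensile_0pct, tensile_6pct)
significant = p_value <= 0.05  # the paper's criterion: P(t0) <= 0.05
print(f"t = {t_stat:.2f}, P = {p_value:.4f}, significant: {significant}")
```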
The increasing rates of specific strength (IRSS) between the two groups are 43% for the tensile strength, 7% for the elongation, and 33% for the flexural strength. The tensile and flexural strengths both increased by approximately one-third or more, while the elongation increased by less than 10%; the specific strength was therefore improved as well.

Figure 2 presents the tensile fracture micrographs of each sample, revealing that the presence of MAPE changes the fracture morphology. The surface of the BF is smooth (Figure 2b), and the wood fiber and HDPE can be distinctly observed in the two samples that do not contain MAPE (Figure 2a,b: marked with a triangle and a star, respectively). The fracture surfaces exhibit tooth- or burr-like patterns. Overhanging basalt fibers (indicated with an arrow) protrude from the cross-sections, and their roots are entombed within the wood-plastic matrix in each sample containing BF (Figure 2b-f). The micrographs of the samples containing MAPE lead to the following observations: a small amount of the BF appears to be smooth (Figure 2e, marked with an arrow), but most of the BF has formed a boundary layer, and the shape of this layer is uneven (Figure 2d,e, marked with a wide arrow). Some of the BFs are relatively smooth, though they actually possess a thin membranous interfacial layer (Figure 2c,f, marked with a wide arrow). The wood fiber, HDPE and MAPE have nearly become integrated into a single material, and the cross-sections of the wood fiber and the HDPE have become soft and dense (Figure 2c-f, marked with a hexagon). Here, the burr-like patterns can no longer be observed.
Fracture Micrographs, a Model of the BF Interfacial Microstructure and its Mechanism of Formation
Many previous studies have reported the modification of WPCs with a compatibilizer [37-41]. Although the type of compatibilizer (e.g., MAPP and MAPE) and wood fibers used in modified WPCs may differ, the strengthening mechanisms share some common characteristics. (1) A chemical reaction occurs between the active functional groups of the compatibilizer and the functional groups of the plant fiber [42], and this reaction forms either a chemical bond or a hydrogen bond, both of which commonly occur when a polyethylene-based compatibilizer reacts with the hydroxyl groups [23] of the fiber or when the anhydride of MAPE [43-45] reacts with the glycosidic bond [46] (esterification) of the wood fiber; (2) This bonding reduces the polarity and hydrophilic nature of the plant fiber and improves the bonding strength between the wood fibers and the non-polar matrix, in accordance with the principle of compatibility between similar polarities; (3) The long chain within the compatibilizer (MAPE) can be inserted into the matrix, thereby enhancing the combination of the compatibilizer with the matrix as the molecular chain intertwines within the region of insertion [16,29,45]. Thus, the compatibilizer can effectively improve the interfacial bonding character of the composite by combining these three functions [30,47]. The findings of similar, previously reported experiments do not need to be repeated here, and the mechanisms by which the composite microstructure becomes integrated into one material and becomes soft and dense upon the addition of MAPE are self-evident. Therefore, this paper continues to focus on the mechanism of enhancement by the MAPE compatibilizer. The interfacial microstructures observed in the micrographs can be grouped into several types: Type 0 denotes direct contact between the BF and the matrix without an interfacial layer (Figure 3a); Type E represents the interfacial layer, IL, with its thickness marked as t, shown above a portion of the BF; and Types D and I are the same as Type E, except the interfacial layer is spread over the whole BF. The fracture behavior of Type D (Figure 3c) shows a wave-like mode that overlaps with that of Type E; additionally, Type I shows a flat interfacial layer (Figure 3d). Table 2 summarizes the features of these types.
Based on the bonding mechanism and the effect of MAPE modification, the samples can be further divided into two models, Model 0 and Model I, as shown in Figure 3e,f. Model 0 corresponds to Type 0, which illustrates how the BF and WPC align closely even without an interfacial layer (Figure 3e). Type 0 can be found in all samples lacking MAPE and is sometimes found in samples containing MAPE. Model I corresponds to Types E, D and I. Model I is the primary interfacial model and can only be found in samples containing MAPE. The mark IL displayed in this model map represents the interfacial layer between the BF and the wood-plastic composite (Figure 3b-d,f). L_i and L_o represent the inner and outer faces of the interfacial layer, respectively. The inclined, color-filled plane indicates the MAPE content and its modification function. The plane forms the top layer and covers the BF and WPC contents, which indicates compatibilization by MAPE; it plays a blocking role, obscuring the boundary between the HDPE and the wood fiber. Both the BF boundary and the boundary between the HDPE and the wood fiber, as presented in Figure 3f, are even more obscured than the boundaries presented in Figure 3e. The approach used to construct Model I will now be further explained. As indicated earlier, the wood fibers have a substantial hydrophilic quality, and the HDPE resin is a hydrophobic polymer. The BF is an inorganic material formed from molten rock that also possesses inert properties. Therefore, without additives, the interface commonly associated with Model 0 (Figure 3a,e), i.e., the interface between these three materials, is relatively weak due to the lack of chemical bonding among the active groups, which leads to poor compatibility. The MAPE added to the WPC [45] reduces the polarity and hydrophilicity of the wood fibers according to the three WPC strengthening functions mentioned above. According to the principle of compatibility based on similar polarity, MAPE not only enhances the compatibility and decreases the distance between the BF surface and the wood, but it also integrates itself onto the BF surface, in addition to the nearly complete integration of the WPC. All of these factors improve the binding strength between the WPC and the BF [27,45,48]. Additionally, Zeta potential measurements by Hu et al. [49] indicate that the BF surface bonds with hydrophilic polar groups and contains a high Si content, which has the potential to chemically react with the surrounding active functional groups under certain conditions [28,32]. All of these factors contribute to the strengthening of the interfacial bond [16,29]. Furthermore, in addition to the increased compatibility between the BF and the WPC, closer proximity between the WPC and the groups that participate in these hydrogen bonds or chemical reactions will facilitate hydrogen bond formation or chemical reactions [50,51]. This process of integrating all of the phases into one phase results in the large number of Model I interfaces (Figures 2 and 3). A more in-depth mechanism remains to be developed in future research.
The Relationships between the Mechanical Properties and MAPE Content
As previously mentioned, the BF-WPC composites were mainly composed of the four materials wood, plastic, BF and MAPE, which have significantly different physical and chemical properties. Initially, it can be difficult to quantitatively analyze such a complex composite material. Therefore, this section comprehensively explores the fracture morphology, the interfacial model and the relationship between these two factors, as well as the mechanical properties of the sample.
The fracture morphologies of the samples with MAPE indicate that the wood fiber, HDPE and MAPE fractions have become almost entirely integrated, forming a single material (Figure 2c,f). Additionally, the interfacial model developed for the BF surface shows that the vast majority of fibers exhibit the Model I interface. If the wood plastic is taken as the substrate (matrix) and the BF as the reinforcing fiber, then a matrix that is integrated into a single material should exhibit enhanced mechanical properties compared to the relatively loose matrix without MAPE. Moreover, as shown in Figure 3b-d, all fractures of Model I occur at L_o, and no fractures occur at L_i. This result qualitatively demonstrates that the binding force at L_i is larger than that at the outer surface L_o, and that MAPE addition improves the binding condition. Thus, a model is provided for the first time that shows the relationship between the MAPE, BF and WPC components, incorporates the concept of the interfacial layer, and offers a schematic explanation of the modification [30,47]. This model provides evidence for the effect of compatibilization (Figure 1).
Future research questions include whether the MAPE additive has the same effect on the mechanical properties of the WPC samples and what MAPE content yields optimal results. A significant difference exists among the various constituents of the BF-WPC system with respect to its physical and chemical properties. Furthermore, the experimental results are also affected by the experimental conditions (e.g., the mixing uniformity and atmospheric conditions), which is why the experiment was repeated three times. These factors are more consistent within each experiment than among the three experiments, with little variation among experimental results from the same batch (with the exception shown in Figure 1b, marked as *), while the differences between the average values of the different batches are more significant (Figure 1a-c, marked as ○). Additionally, the differences between the overall averages for all three experiments are significant (Figure 1, dotted lines). Nevertheless, the average values obtained from the three separate experiments are taken to be more representative. As a further illustration of this point, it should be noted that the regression curve of the total average is relatively stable, as shown in Figure 1.
As seen in Figure 2d, in which the MAPE content is 6%, the matrix is not fully integrated into a single material, as larger lumps can still be identified (hexagonal mark). However, at a MAPE content of 12%, a significant number of networked (wiry) shapes are visible, which appear to be the polymeric substance (Figure 2f, hexagonal mark). This phenomenon is also reflected in the Model I interface (Figure 2d, broad arrow marks). Overall, the interfacial layer is thinner at a MAPE content of 3%, thicker at a MAPE content of 12%, and intermediate at a MAPE content of 6%. Because the tensile and flexural strengths of MAPE are largely similar to those of HDPE, the addition of large amounts of MAPE (9% or more) allows it to act as part of the matrix alongside the HDPE, which decreases the quality of the interfacial layer [42,52], resulting in an excessively thick interface and reduced strength. The tensile and flexural strengths accordingly exhibit small decreases (solid regression lines of Figure 1a-c). The MAPE itself exhibits good plasticity, and the elongation shows a slight improvement (Figure 1b, solid regression line). The experimental results presented in Figure 1 show that the specific tensile and flexural strengths reach their maximums when the MAPE content is 5%-8%, while the elongation increases rapidly at first and then more slowly. These results are essentially the same as those obtained for the components integrated into a single material, possessing a BF with the Model I interface. A schematic diagram was thus presented to illustrate the relationship among the MAPE, BF and WPC contents, as well as the concept of the interfacial layer, and to intuitively demonstrate the effect of MAPE compatibilization.
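As an illustration of the regression analysis referred to throughout Figure 1, the following sketch fits a quadratic curve to batch-average strengths; all data points are assumed placeholders, and the fitted optimum lands in the 5%-8% range only because the placeholders were chosen that way.

import numpy as np

# Overall batch averages of tensile strength versus MAPE content (assumed).
mape_content = np.array([0.0, 3.0, 6.0, 9.0, 12.0])     # wt.% MAPE
tensile_avg = np.array([21.0, 25.5, 29.8, 29.2, 28.1])  # MPa

a, b, c = np.polyfit(mape_content, tensile_avg, deg=2)  # quadratic regression
x_opt = -b / (2.0 * a)  # vertex of the fitted parabola (dy/dx = 0)
print(f"fitted optimum near {x_opt:.1f}% MAPE, "
      f"strength ~ {a * x_opt**2 + b * x_opt + c:.1f} MPa")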
Finally, it is worth noting that the dispersion within each independent experiment is smaller than that of the pooled experiments, and the origin of this discrepancy must be identified. This is a significant issue in terms of the possible future industrialization of BF-WPC composites. Although this issue was not fully resolved in this work, we demonstrated that the MAPE additive was able to improve the mechanical properties of the BF-WPC. The variability could be due to several preparation factors, such as the composition of the BF-WPC complex, the preparation technology and the processing conditions. In the future, these issues can be addressed by designing comparative experiments. The results of this work therefore provide inspiration for future research efforts.
Conclusions
This investigation revealed that the MAPE content affects the mechanical properties of the BF-WPC, an effect that was further examined through fractographic analysis.
Explanations were proposed regarding the nearly complete integration of the wood fiber and the HDPE into each other when MAPE was added to the WPC and when the interfacial layer formed above the BF surface, as observed in the fracture micrographs. Two novel interfacial models were proposed and then used to explore the physical significance of MAPE addition. Intuitively, these explorations demonstrate the effect of compatibilization observed in the fracture micrographs, which further demonstrate that fracture occurs within the outer region of the interfacial layer when the sample contains MAPE. The interfacial mechanism of Model I was determined based on the formation of hydrogen bonds or chemical reactions among the components of the BF-WPC and on the principle of compatibility among similar polarities.
The fracture of the Model I interface occurred on the outer surface of the interfacial layer, visually demonstrating the compatibilization effect of MAPE. Investigating the matrix characteristics of the integrated material with different MAPE contents, in addition to the interfacial features of Model I, elucidated the qualitative relationship between the MAPE fraction and mechanical properties. The maximum specific tensile and flexural strengths can be achieved when the MAPE content is 5%-8%. The elongation increases rapidly at first and then slowly.
Dense 3D Point Cloud Reconstruction Using a Deep Pyramid Network
Reconstructing a high-resolution 3D model of an object is a challenging task in computer vision. Designing scalable and light-weight architectures is crucial while addressing this problem. Existing point-cloud based reconstruction approaches directly predict the entire point cloud in a single stage. Although this technique can handle low-resolution point clouds, it is not a viable solution for generating dense, high-resolution outputs. In this work, we introduce DensePCR, a deep pyramidal network for point cloud reconstruction that hierarchically predicts point clouds of increasing resolution. Towards this end, we propose an architecture that first predicts a low-resolution point cloud, and then hierarchically increases the resolution by aggregating local and global point features to deform a grid. Our method generates point clouds that are accurate, uniform and dense. Through extensive quantitative and qualitative evaluation on synthetic and real datasets, we demonstrate that DensePCR outperforms the existing state-of-the-art point cloud reconstruction works, while also providing a light-weight and scalable architecture for predicting high-resolution outputs.
We inhabit a world of illuminated physical objects having diverse shapes, sizes and textures. The human visual system is capable of processing the retinal image of an object to extract the underlying 3D structure. Our 3D perception capabilities go beyond mere reconstruction of the overall shape. We are highly adept at capturing finer details present on the object surface. This ability to effectively capture accurate 3D characteristics is vital to our understanding of the physical world and manipulation of objects within it.
Machines would greatly benefit from learning to perceive the 3D world as humans do. A number of tasks such as robot grasping, object manipulation, and navigation are inherently dependent upon an agent's ability to understand physical objects and scenes. Further, the ability to not only have an understanding of the overall shape, but to also model the two-dimensional surface manifold is a critical function in 3D perception. An ideal machine would have the capacity to hierarchically grow its understanding of a surface representation (Fig. 1).
With the recent advances in deep learning, the problem of 3D reconstruction has largely been tackled with the help of 3D CNNs that generate a voxelized 3D occupancy grid. Unlike 2D images, where all the pixels add rich spatial and structural information, volumetric representations suffer from sparsity of information. The information needed to perceive the 3D structure is provided by surface voxels, while the voxels within the volume increase the representational and computational complexity with minimal addition in information. This representation is particularly inefficient for encoding high-resolution outputs, since the surface-to-volume ratio diminishes further with increasing resolution. 3D CNNs are also compute-heavy and add considerable overhead during training and inference.
The drawbacks of volumetric representations are alleviated by 3D point clouds. Point clouds are a more efficient alternative, since the points are sampled on the surface of the object and can effectively capture detailed surface information. An inherent challenge in processing a point cloud, however, is that it is unordered, i.e., any permutation of the points does not alter the 3D structure that it represents. CNN architectures have traditionally been used for analyzing ordered data such as images, voxelized grids, etc. Extending the above to unordered representations such as point clouds has very recently been studied using architectures and loss formulations introduced in [18,20,5,23,14].
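To make the order-invariance requirement concrete, the toy snippet below (our illustration, not code from any of the cited works) shows how a symmetric aggregation such as a max over the point dimension yields the same feature for any permutation of the input set, which is the core idea behind PointNet-style architectures:

import numpy as np

points = np.random.rand(1024, 3)           # an unordered point set
perm = np.random.permutation(len(points))  # arbitrary reordering

feat = points.max(axis=0)                  # symmetric (order-invariant) pooling
feat_perm = points[perm].max(axis=0)
assert np.allclose(feat, feat_perm)        # identical despite the permutation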
However, existing point cloud based reconstruction works directly predict the full resolution point cloud in a single stage. This has a number of issues when it comes to predicting dense outputs. (1) First, there is a substantial increase in the number of network parameters, which makes it difficult to scale up such architectures. (2) Second, loss formulations such as the Earth Mover's Distance, which is frequently applied to point sets, are computationally heavy, and their application to dense outputs would add considerable memory and time overhead. As a result, dense predictions would fail to benefit from some of the properties enforced by such loss formulations, such as uniformity in points.
In this work, we seek to find answers to two important questions in the task of single-view reconstruction (1) Given a two-dimensional image of an object, what is an efficient and scalable solution to infer a dense 3D point cloud reconstruction of it? (2) How do we upscale light-weight sparse point cloud representations so as to approximate surfaces in higher resolutions? To address the former issue, we use a deep pyramidal architecture that first predicts a low-density sparse point cloud using minimal parameters, and then hierarchically increases the resolution using a point processing framework. To achieve the latter, we propose a mechanism to deform small local grids around each point using neighbourhood terrain information as well as global shape properties. These learnt local grids now approximate a surface in the next stage. The benefits of this technique are two-fold. First, predicting a low resolution coarse point cloud enables us to utilize loss formulations that otherwise may be very intensive for dense point clouds. This leads to better quality predictions. Second, using a pyramidal approach on point sets drastically reduces the number of network parameters, a crucial operation for dense predictions.
In summary, our contributions in this work are as follows: • We propose a deep pyramidal network for point cloud reconstruction called DensePCR, that hierarchically predicts point clouds of increasing resolution. Intermediate point clouds are super-resolved by extracting global and local point features and conditioning them on a grid around each point, so as to approximate a surface in the higher resolution.
• We perform a network analysis of the proposed architecture to compare against existing approaches and demonstrate that DensePCR has a substantially smaller number of learnable parameters, an essential requirement for dense prediction networks. Specifically, our network gives rise to a 3X reduction in parameters compared to the baseline point cloud reconstruction networks.
• We highlight the efficacy of this network in generating high quality predictions by evaluating on a large scale synthetic dataset, where we outperform the state-of-the-art approaches by a significant margin, despite having fewer parameters.
• We evaluate DensePCR on real data and demonstrate the generalization ability of our approach, which significantly outperforms the state-of-the-art reconstruction methods.
Related Work
3D Reconstruction For a long time, the task of 3D reconstruction from single-view images had largely been tackled with the help of 3D CNNs. A number of works have revolved around generating voxelized output representations [6,30,3,19,11,31,27]. Girdhar et al. [6] learnt a joint embedding of 3D voxel shapes and their corresponding 2D images. Choy et al. [3] trained a recurrent neural network to encode information from more than one input view. But voxel formats are computationally heavy and information-sparse, which led to research on the octree data structure for representing 3D data [25,22,21,9,28].
Recently, Fan et al. [5], introduced techniques for generating unordered point clouds to obtain single-view 3D reconstruction results outperforming volumetric approaches [3]. While [5] directly predict the 3D point cloud from 2D images, our approach stresses the importance of first predicting a low-resolution point cloud and super-resolving it to obtain a dense prediction. Groueix et al. [7] represented a 3D shape as a collection of parametric surface elements and constructed a mesh from the predicted point cloud. Mandikal et al. [16] proposed a latent matching setup in a probabilistic framework to obtain diverse reconstructions. Other works utilize 2D supervision in the form of silhouettes and depth maps for 3D reconstruction [32,26,29,35,8,15,10]. Concurrently with us, Yuan et al. [34] propose to deform grids for completing partial depth maps. Apart from reconstruction, other 3D perception tasks have also been performed using point clouds [1,33,17].
Hierarchical Prediction
The concept of Laplacian pyramid networks has been previously used in 2D vision tasks for hierarchical prediction. Denton et al. [4] proposed a generative adversarial network to generate realistic images based on a Laplacian pyramid framework (LAPGAN). Lai et al. [13] extended the above by introducing a robust loss formulation and making architectural modifications for improving speed and accuracy. In the 3D vision domain, Hane et al. [9] proposed an octree-based method for hierarchical surface prediction. While the focus of [9] is on extending a volumetric representation into an octree-based one to enable surface prediction, we directly operate on points sampled on the object surface.
Dense Prediction

Yu et al. [33] proposed a network to upsample point clouds (PU-Net), which is more closely related to our work. However, the proposed DensePCR differs from PU-Net in two aspects. First, the objectives of the two models are different. PU-Net is a super-resolution model designed to upsample input point clouds that already display high uniformity, with some robustness to small noise. On the contrary, the proposed DensePCR is a reconstruction model that predicts a particular 3D point cloud based on the given RGB image and then increases the resolution. It is important to note that the point clouds to be upsampled are far from ideal, unlike in the case of PU-Net. Second, PU-Net uses a repulsion loss to avoid point clustering and ensure that output points are well-separated. In contrast, we ensure that points don't cluster in higher resolutions by conditioning aggregated features on a local coordinate grid. This is akin to deforming small local grids around each point while upscaling. Furthermore, PU-Net uses the computationally expensive Earth Mover's Distance (EMD) loss at higher resolutions, while we use the more light-weight Chamfer distance metric.
Approach
Our training pipeline consists of a multi-stage training paradigm as outlined in Fig. 2. The input RGB image is passed through an encoder-decoder network that outputs the reconstructed point cloud at a low resolution. This intermediate output is subsequently passed through a dense reconstruction network to obtain a high-resolution point cloud. This is done via aggregating global and local point features and conditioning them around a coordinate grid to approximate a surface around each point in the higher resolution. Point set distance based losses are enforced at every stage to ensure a coherency of intermediate outputs. Each of the components of our approach is described in detail below.
Multi-Stage Training
Our goal is to train a network to generate a dense 3D point cloud given a single-view image. For this purpose, we train a deep pyramidal neural network in a hierarchical fashion (Fig. 2a). An encoder-decoder network takes in an input RGB image I and outputs a sparse 3D point cloud X_P. Since a point cloud is an unordered form of representation, we need a loss formulation that is invariant to the relative ordering of input points. To enforce this, we can use one of two commonly-used loss functions on point sets - Chamfer Distance (CD) and Earth Mover's Distance (EMD). The Chamfer distance between the prediction \hat{X}_P and the ground truth X_P is defined as:

d_{CD}(X_P, \hat{X}_P) = \sum_{x \in X_P} \min_{\hat{x} \in \hat{X}_P} \lVert x - \hat{x} \rVert_2^2 + \sum_{\hat{x} \in \hat{X}_P} \min_{x \in X_P} \lVert x - \hat{x} \rVert_2^2    (1)

Intuitively, this loss function is a nearest neighbour distance metric that computes the error in two directions. Every point in X_P is mapped to the nearest point in \hat{X}_P, and vice versa. Although this is a computationally light formalism, it has a critical drawback in that there is no mechanism to ensure uniformity of predicted points [1]. In other words, the optimization often leads to a minima where only a subset of points account for the overall shape, thereby resulting in the clustering of the other points. The EMD loss serves to alleviate this concern. The EMD between two point sets X_P and \hat{X}_P is given by:

d_{EMD}(X_P, \hat{X}_P) = \min_{\phi: X_P \to \hat{X}_P} \sum_{x \in X_P} \lVert x - \phi(x) \rVert_2    (2)

where \phi: X_P \to \hat{X}_P is a bijection. Since it enforces a point-to-point mapping between the two sets, it ensures uniformity in point predictions. However, a major drawback of EMD is that it is a computationally intensive formulation, requiring a considerable amount of time and memory for high-resolution point sets. Keeping this in mind, we design a training regime that can take advantage of both losses, while avoiding the common pitfalls. The sparse point cloud X_P predicted at the first stage is optimized via the EMD loss. This ensures that the base point cloud to be upsampled is a uniform prediction. All subsequent point clouds are optimized via the Chamfer Distance. This design choice allows us to train the network so as to enjoy the benefits arising from both EMD as well as CD. The architecture of the Dense Reconstruction Network ensures that the upsampled points remain uniform and do not collapse onto one another.
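For intuition, a minimal NumPy reference of both losses is sketched below; this is our illustration, not the authors' training code (real pipelines use batched GPU implementations and, as noted later, an approximation of EMD):

import numpy as np
from scipy.optimize import linear_sum_assignment

def chamfer_distance(x, y):
    # x: (n, 3), y: (m, 3); sum of squared nearest-neighbour distances, both ways
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)  # (n, m) pairwise
    return d2.min(axis=1).sum() + d2.min(axis=0).sum()

def earth_mover_distance(x, y):
    # x, y: (n, 3) with equal n; the optimal bijection phi via Hungarian matching
    d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(d)  # exact assignment, O(n^3) - hence costly
    return d[rows, cols].sum()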
Dense Reconstruction Network
We aim to effectively predict a dense point cloud given its sparse counterpart. Towards this end, we propose an architecture that processes the input point cloud and learns various features that aid in the upsampling process (Fig. 2b).
Global Feature Learning An understanding of global shape properties is essential knowledge in order to reconstruct the input shape at a higher resolution. This is especially important when dealing with a diverse set of data, since point density and spacing are generally model-specific. To incorporate this knowledge into the learning framework, we make use of an MLP architecture similar to PointNet [18]. It consists of a shared set of MLPs operating on every individual point in the input point cloud X_P of resolution n × 3 (Fig. 2b-i). A point-wise max pooling operation is applied after the final layer output to obtain the global feature vector X_g having dimension 1 × n_g, where n_g is the number of filters in the final MLP. X_g now contains salient properties defining the global shape characteristics.
Local Feature Learning The importance of local features in per-point prediction tasks such as segmentation and normal estimation has been established by [18,20]. Local point features provide neighbourhood information that enables better capture of the local terrain. Such information is necessary in order to build models that can effectively upsample points to fill in gaps. Hence, we take the approach of [20] to extract local features around every input point. Specifically, we construct neighbourhood balls around every incoming point, and locally apply a sequence of MLPs on every neighbourhood. Neighbourhood features are then pooled to obtain the local feature for a point. Performing this operation on every point in the input point cloud X_P gives a set of local features X_l, having dimension n × n_l, where n_l is the number of filters in the final MLP.
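A rough sketch of this local feature step is given below; the single random linear layer stands in for the learned MLP stack, so the code illustrates only the ball query and neighbourhood max-pooling, not the trained network:

import numpy as np

def local_features(points, radius=0.1, n_l=64, seed=0):
    # points: (n, 3) -> (n, n_l) pooled neighbourhood features
    rng = np.random.default_rng(seed)
    w = 0.1 * rng.standard_normal((3, n_l))    # placeholder for a learned layer
    point_feats = np.maximum(points @ w, 0.0)  # shared "MLP" + ReLU, (n, n_l)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    feats = np.empty((points.shape[0], n_l))
    for i in range(points.shape[0]):
        ball = np.flatnonzero(d[i] <= radius)     # ball query (always contains i)
        feats[i] = point_feats[ball].max(axis=0)  # max-pool within the ball
    return feats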
Feature Aggregation and Coordinate Grid Conditioning Once we compute the global and local features, we require a mechanism to propagate these features to a higher resolution. For this purpose, we form a feature vector by concatenating the input points X_P, the tiled global feature X_g, and the local feature X_l to obtain [X_P, X_g, X_l] of dimension n × (3 + n_g + n_l). We then tile this feature vector by the upsampling factor to obtain a feature of dimension 4n × (3 + n_g + n_l). However, we require a mechanism to induce separability between features of the same point. We achieve this with the help of a 2D coordinate grid X_gc of dimension 2 × 2 reshaped to 4 × 1, centered around each point (here gc stands for 'grid conditioning'). The grid serves to condition the point features so that MLPs in the next layer can learn to propagate the point features. Intuitively, the network learns a set of parameters to deform the grid so as to approximate a surface around that point in the higher resolution. This grid is tiled for every input point to obtain a feature of dimension 4n × 1. The final aggregated feature vector is given by [X_P, X_g, X_l, X_gc], having dimension 4n × (3 + n_g + n_l + 1), which is operated upon by a set of shared MLPs to finally reconstruct the dense point cloud.
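The tensor bookkeeping of this aggregation step for a 4x upsampling can be sketched as follows; the exact grid values are our assumption, and the trailing shared MLPs that map the aggregated features to 3D points are omitted:

import numpy as np

n, n_g, n_l, up = 1024, 64, 64, 4
X_P = np.random.rand(n, 3)    # sparse input points
X_g = np.random.rand(1, n_g)  # global feature (max-pooled over points)
X_l = np.random.rand(n, n_l)  # per-point local features

feat = np.concatenate([X_P, np.tile(X_g, (n, 1)), X_l], axis=1)  # (n, 3+n_g+n_l)
feat = np.repeat(feat, up, axis=0)                               # (4n, 3+n_g+n_l)

# 2x2 grid of size 0.2 flattened to 4x1; the specific values are assumed.
X_gc = np.tile(np.linspace(-0.1, 0.1, up).reshape(up, 1), (n, 1))  # (4n, 1)
agg = np.concatenate([feat, X_gc], axis=1)  # (4n, 3 + n_g + n_l + 1)
assert agg.shape == (up * n, 3 + n_g + n_l + 1)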
Implementation Details
The image encoder is a 2D CNN consisting of 16 convolutional layers and outputs a latent vector of dimension 512.
Our network predicts point clouds at three stages of hierarchy - 1024, 4096 and 16384 points. The decoder that outputs the initial sparse point cloud consists of fully connected layers of dimension [256, 256, 1024 * 3]. Thereafter, the dense reconstruction network consists purely of multilayer perceptrons (MLPs). Specifically, we use MLPs of dimensions [32, 64, 64] for the global feature learning module, [32, 32, 64] for the local feature learning module, and [32, 64, 64] in the feature aggregation module. The ball radius for neighbourhood feature pooling in the local module is set to 0.1 and 0.05 for 1024 and 4096 points, respectively. The grid used for feature conditioning is a 2 × 2 grid of size 0.2. We first pre-train the different stages of the network independently and later fine-tune the model end-to-end to obtain the final predictions. We notice that this training setup converges faster and is more stable than training the entire pyramid in an end-to-end manner from scratch. We use the Adam [12] optimizer with a learning rate of 0.00005 and a minibatch size of 32. Network architectures for all components in our proposed framework are provided in the supplementary material.
Dataset
We train all our networks on synthetic models from the ShapeNet [2] dataset. We use the 80%-20% train/test split provided by [3] consisting of models from 13 different categories. We use the input images provided by [3], where each model is pre-rendered from 24 different azimuth angles. We crop the images to 128 × 128 resolution before passing it through our network. For generating the ground truth point clouds, we uniformly sample N , 4N and 16N points on the mesh surface using farthest point sampling, where N = 1024.
Evaluation Methodology
We report both the Chamfer Distance (Eqn. 1) as well as the Earth Mover's Distance (EMD, Eqn. 2) computed on 16,384 points in all our evaluations. We use an approximate version of the EMD metric due to the high computational cost involved. For computing the metrics, we renormalize both the ground truth and predicted point clouds within a bounding box of length 1 unit.
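A simple way to perform the bounding-box renormalization mentioned above is sketched below (our reading of the described procedure, not the authors' exact code):

import numpy as np

def renormalize(pc):
    # pc: (n, 3). Shift to the origin and scale so the longest
    # bounding-box side has length 1 unit.
    mins = pc.min(axis=0)
    extent = (pc.max(axis=0) - mins).max()
    return (pc - mins) / extent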
Baselines
We consider the PSGN network proposed by Fan et al. [5] as the baseline for the point cloud reconstruction task. Specifically, we consider two variants of PSGN - (1) PSGN-FC, and (2) PSGN-ConvFC - in our evaluation. The two variants differ in their decoder architectures. PSGN-FC consists of a set of fully connected layers to directly predict the full resolution point cloud. PSGN-ConvFC utilizes both a deconvolution as well as a fully connected network for point prediction. We use the same architecture as provided by the authors for training the networks, making necessary changes to modify the output to 16,384 points. Note that both variants of PSGN as well as our DensePCR network share the same architecture for the encoder, so as to be comparable. Our network augments the decoder with a dense reconstruction network, whereas PSGN predicts the output in a single stage. Detailed network architectures for the baselines and our network are provided in the supplementary.
Evaluation on ShapeNet
We compare our DensePCR model with the two variants of PSGN [5] on the synthetic ShapeNet dataset [2]. Since [5] establishes that the point cloud based approach significantly outperforms the state-of-the-art voxel based approaches [3], we do not show any comparison against them. Table 1 shows the comparison between PSGN [5] and our method on the validation split provided by [3]. We outperform both variants of PSGN in 5 out of 13 categories in Chamfer and all 13 categories in the EMD metric, while also having lower overall mean scores. The gain in the EMD metric can be attributed to our multi-stage training setup, which enables us to use the EMD loss at the coarse stage and the light-weight Chamfer loss along with grid-conditioning at the fine stage, thus enforcing uniformity as well as maintaining it at latter stages. It is worth noting that we achieve state-of-the-art performance in both metrics despite the fact that our network has one third the number of trainable parameters in comparison to PSGN. A lower EMD score also correlates with better visual quality and encourages points to lie closer to the surface [1,33]. Note that the numbers for EMD are much higher than those for Chamfer, since EMD places a high penalty on clusters, as it is a point-to-point distance matching metric. Qualitative comparison is shown in Fig. 3. Compared to PSGN, we are better able to capture the overall shape and finer details present in the input image (legs of chairs, wings and tail of airplanes, trigger of rifles). Note that both the variants of PSGN have heavily clustered regions in many of the predictions while our reconstructions are more uniformly distributed.
The uniformity in our predictions can be attributed to the loss formulations we adopt at different stages of the network. Specifically, enforcing the EMD loss at the coarse stage forces the predictions at the first stage to be uniform. Since subsequent stages are trained to upsample the coarse output, the predictions tend to be uniform despite using the Chamfer distance as the loss. Hence, a careful selection of losses at various stages enables us to train a model with lower computational complexity that achieves better performance.
Table 2. Single view reconstruction results on the real world Pix3D dataset [24]. Chamfer metrics are scaled by 100, EMD metrics are scaled by 10.
Evaluation on Real-World Pix3D
We evaluate the performance of our method on the Pix3D dataset [24] to test the generalizability of our approach on real-world datasets. The dataset consists of a large collection of real images and their corresponding metadata such as masks, ground truth CAD models and pose. We evaluate our trained model on categories that co-occur in the ShapeNet training set and exclude images having occlusion and truncation from the test set, as is done in the original paper [24]. We crop the images to center-position the object of interest and mask the background using the provided mask information. Table 2 contains the results of this evaluation. Evidently, we outperform both variants of PSGN [5] by a large margin in both the Chamfer as well as EMD metrics, demonstrating the effectiveness of our approach on real data. Fig. 4 shows sample reconstructions on this dataset. Surprisingly, both variants of PSGN have very poor generalizability, predicting highly incoherent shapes that often do not correspond to the input image (especially in chairs and sofas). On the other hand, DensePCR has very good generalization capability and predicts shapes that display high correspondence with the input image, in spite of the input images being from a different distribution. Note that all three networks are trained on the same ShapeNet training set and tested on Pix3D.
Network Parameters
All the networks used in performing the experiments consist of the same architecture for the image encoder (8.1M parameters) and vary with respect to the decoder. The decoder in PSGN-FC consists of only fully connected layers to directly predict 16,384 points, thus having a sizeable number of parameters at 17.1M. The PSGN-ConvFC decoder consists of both deconvolution layers as well as fully connected layers and has 13M parameters. Our network consists of a fully connected decoder to predict the initial sparse prediction, followed by extremely light-weight MLPs thereafter. This enables us to train our network with as little as 5.2M parameters, almost a 3X reduction compared to both the PSGN variants. This makes our network highly apt for scaling up to very dense reconstructions with minimal addition in parameters. It is worth noting that the majority of parameters in our setup are from the initial base point cloud prediction network. The dense reconstruction network has very few parameters (0.043M parameters per stage of hierarchy). Hence, as we scale up, there will be negligible addition of parameters, making it highly efficient for dense prediction. Detailed network architecture for each of the models is provided in the supplementary.
Intermediate Outputs
We analyze the network predictions at the intermediate stages of the pyramidal network in Fig. 5. We observe that the initial sparse prediction has good correspondence with the input image, while also having uniformity in point prediction due to the constraints enforced by the EMD loss (Eqn. 2). The dense reconstruction network effectively upsamples this prediction to obtain dense point clouds that maintain the uniformity despite being trained with the Chamfer loss (Eqn. 1). This can be attributed to the grid conditioning mechanism that prevents points from collapsing to a single region in space.
Failure Cases
We present some failure cases of our method in Fig. 6. We notice that certain predictions have artifacts consisting of small clusters of points around some regions. This is due to outlier points in the sparse point cloud, which get aggregated in the dense reconstruction. Utilizing a cascade of multiple neighbourhood features centered around each point might help in alleviating some of these issues, since it would be able to handle a higher variance in point cloud density. We also observe that certain predictions distort finer details present in the input image. Propagating the image feature at the dense reconstruction stage might aid in better capturing such details.
Conclusion
In this paper, we proposed a deep pyramidal network for hierarchical prediction of dense point clouds from a single RGB image. We highlighted the memory efficiency and scalability of this network in predicting high-resolution outputs. We presented a technique to approximate a surface around points by deforming local grids via aggregating local and global point features. Quantitative and qualitative evaluation on the single-image 3D reconstruction task on a synthetic dataset showed that our method generates high quality dense point clouds, while also providing a light-weight and scalable framework for point cloud prediction. Furthermore, evaluation on a real-world dataset displayed the superior generalization capability of our approach to new and unseen datasets.
Substrate Temperature Influence on Optical Properties of C 60 Thin Films Within the Visible Range
Fullerene thin films of about 200 nm thickness were deposited by thermal evaporation on soda lime glass at substrate temperatures of 303 and 403 K under a pressure of about 10^-5 mbar. This study concentrated on the influence of substrate temperature on the optical properties of C60 thin films within the visible range. Optical characterization was carried out at room temperature using the absorption spectra, at normal incidence, in the range 200-900 nm. The absorption and extinction coefficients of the samples were evaluated from the variation in the UV-Visible spectrum. Increasing the substrate temperature decreases the optical band gap energy for direct allowed transitions and slightly changes the refractive index. This was attributed to the reduction of interatomic spacing, which may correlate with a decrease in the amplitude of atomic vibrations around their equilibrium sites.
Introduction
There is flexibility in synthesizing new molecules of organic semiconductors by inducing a change in their electronic properties through changing either the functional groups or the atomic arrangement, which makes these organic semiconductors good candidates for optoelectronic applications. Fullerene molecules arrange at highly symmetric sites. They can assemble into a solid structure of spherical cages, similar to a football, forming a new class of molecular semiconductor. They have wide applications in the fields of superconductors, electronics, magnetics, medicine, biochemistry and the physics of optoelectronics [2,3]. C60 thin films, which show significant optical performance, are mostly used in electronic devices such as light emitting diodes [4] and solar cells [5]. That is why there has been increasing interest in these materials over the last decades. Fullerene is an n-type, air-stable organic semiconductor with applications in field effect devices and photovoltaics. The optical absorption properties in thin film form are important parameters for the suitability of a material for optoelectronic device applications. Many studies consider solids based on fullerenes as semiconductor materials and collect their physical properties [6,7]. The morphology and structural properties were studied in another paper by this author [8]. In this study, the influence of substrate temperature on the optical properties of fullerene has been studied. In the visible region, the absorption band of the thin film is known as the Q-band, while that in the near-UV region is the B-band.
Experiment
C60 powder of 99% purity (Sigma Aldrich), with a melting point of 800 K, was placed in a molybdenum boat heated by a DC current governed by a variac transformer. The transformer is capable of supplying the current necessary for heating the molybdenum boat used for the evaporation. Before starting the evaporation, and to get rid of unwanted gases, the evaporant was carefully degassed for 0.5 h while the shutter was closed. Thin films of C60 were deposited at temperatures of 303 and 403 K on cleaned soda lime glass substrates at a pressure of about 10^-5 mbar using a 15 F6 Hind Hivac evaporation system. The films were deposited at a rate of 3-5 Å/s while the material sublimed from the molybdenum boat. The evaporation rate was kept constant throughout each evaporation process. The film thickness was 200 nm, as measured by a quartz crystal monitor. The films showed good adhesion to the substrate. The absorption spectra of the C60 films were measured at room temperature using a double-beam spectrometer (RAY LIGH, UV-2100).

Results and Discussion

The optical behavior of the material was used to determine its optical constants. The absorption coefficient α was calculated from the formula [9]:

α = 2.303A / τ

where τ is the film thickness and A is the absorbance. The absorption coefficient spectra of the thin films deposited at both temperatures are shown in Figure-1. Annealing often increases the crystallinity of the film, so the absorbance increases as well, as seen in the figures. Since C60 is an organic material, it cannot withstand high temperatures, and if it reaches the decomposition temperature it loses its characteristics. The band gap was calculated using the Tauc relation [10]:

αhν = β(hν - Eg)^n

where β is the band edge parameter and the value of n defines the nature of the optical transition (n = 0.5 refers to a direct transition and n = 2 refers to an indirect transition). The variation of (αhν)^2 with hν was plotted for both films. The results show that the onset energy gap (Q-band) decreased from 2.40 eV for the film deposited at 303 K to 2.35 eV for the film deposited at 403 K, while the fundamental energy gap (B-band) decreased from 3.30 eV for the film deposited at 303 K to 3.20 eV for the film deposited at 403 K.
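A hedged sketch of this band-gap extraction is given below: it computes α from the absorbance, builds the Tauc quantity (αhν)^2 for a direct allowed transition, and extrapolates a linear fit to zero. The spectrum used is synthetic placeholder data, and the fit window is an assumption.

import numpy as np

wavelength_nm = np.linspace(350, 550, 200)            # assumed spectral window
absorbance = np.exp(-(wavelength_nm - 350.0) / 60.0)  # synthetic stand-in data
thickness_cm = 200e-7                                 # 200 nm film

alpha = 2.303 * absorbance / thickness_cm             # absorption coefficient (1/cm)
h_nu = 1239.84 / wavelength_nm                        # photon energy (eV)
tauc = (alpha * h_nu) ** 2                            # direct allowed case, n = 0.5

mask = (h_nu > 2.9) & (h_nu < 3.3)                    # assumed linear region
slope, intercept = np.polyfit(h_nu[mask], tauc[mask], 1)
print(f"estimated direct band gap: {-intercept / slope:.2f} eV")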
This can be attributed to the reduction of the interatomic spacing, which may correlate with a decrease in the amplitude of atomic vibrations around their equilibrium sites [11]. According to this observation, we conclude that the C60 film becomes more conducting when it is deposited on a higher-temperature substrate. It can also be attributed to the upward shift of the Fermi level toward the conduction band, whose position is determined by the distribution of electrons through the localized states [12].
The optical properties of a material depend upon the interaction of the material with the electric field of the electromagnetic wave. The extinction coefficient is a measure of the damping of the incident wave in the material [13], and it has been calculated using the relation:

k = αλ / 4π

Both the absorption and extinction coefficients are found to vary with incident photon energy; the extinction coefficient depends directly on the absorption coefficient. The variation of the extinction coefficient with wavelength was plotted for the samples at both substrate temperatures. The refractive index in general increases with increasing substrate temperature, as shown in Figure-6. This was attributed to the increase in absorbance of the C60 thin films, i.e., the increasing interaction between incident photons and C60 molecules.
Using the relations ε1 = n^2 - k^2 and ε2 = 2nk, the real and imaginary parts of the dielectric constant were evaluated [14]. The dielectric constant depends on the value of the refractive index, while the dielectric loss depends mainly on the value of the extinction coefficient. The variation of the real and imaginary parts of the dielectric constant with wavelength is shown in Figures-7 and 8.
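These relations transcribe directly into code; the snippet below uses assumed placeholder values for α, λ and n simply to show the chain of calculations:

import numpy as np

wavelength_cm = 500e-7  # 500 nm, assumed
alpha = 1.0e5           # absorption coefficient (1/cm), assumed
n = 2.0                 # refractive index, assumed

k = alpha * wavelength_cm / (4.0 * np.pi)  # extinction coefficient
eps1 = n**2 - k**2                         # real part of the dielectric constant
eps2 = 2.0 * n * k                         # imaginary part (dielectric loss)
print(f"k = {k:.3f}, eps1 = {eps1:.3f}, eps2 = {eps2:.3f}")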
Conclusion
The absorption and extinction coefficients of C60 thin films vary with wavelength and increase with increasing substrate temperature. The refractive index in the range 400-550 nm increased with increasing substrate temperature. The optical energy gap decreases with increasing substrate temperature; this was attributed to the reduction of the interatomic spacing of the C60 structure, which may correlate with a decrease in the amplitude of atomic vibrations around their equilibrium sites. C60 can therefore be considered a good candidate for optoelectronic applications.
Interactive comment on “Source apportionment of PM2.5 in Cork Harbour, Ireland using a combination of single particle mass spectrometry and quantitative semi-continuous measurements”
Local and regional aerosol sources impacting Cork Harbour, Ireland over 3 weeks in August 2008 were identified using positive matrix factorization applied to several aerosol measurements, including aerosol time-of-flight mass spectrometry (ATOFMS). The majority of the manuscript discusses in detail the mass spectral signatures of the observed particles. As written, the manuscript does not contribute significantly to the overall understanding of aerosol sources, and although the manuscript focuses significantly on the analysis approach, it is not novel. A more thorough literature search should be conducted and integrated, considering other studies of long-range aerosol transport to Ireland, as well as previous single-particle mass spectrometry studies. The conclusions are not well supported or discussed in terms of the overall applicability of the results or method to future work. This manuscript is not currently recommended for publication in ACP; rather, following significant revisions, it is recommended that the manuscript be resubmitted to a different journal, such as Atmospheric Environment, in line with previous publication of similar studies.
Specific comments: Abstract: The first half of the abstract reads similarly to a methods section with the second half of the section stating some results. However, it is unclear what has been added to the scientific understanding of aerosol source apportionment. Further, it is not appropriate to include such information as the number of ATOFMS mass spectra generated here. The abstract should be reorganized to primarily be a discussion of the main results of the work.
Introduction, Paragraph 2: This summary of the use of single-particle mass spectrometry for source characterization is highly incomplete; several studies using single-particle mass spectrometry have focused on source characterization of ambient aerosol [e.g., Bein et al., 2007, Pekney et al., 2006, Reinard et al., 2007]. In addition, there is no reference here to previous studies of ship emissions, which seems particularly pertinent for this work [Ault et al., 2010, Ault et al., 2009, Healy et al., 2009]. A more thorough literature search should be completed and integrated into the introduction.
Introduction, Last Paragraph: While it is useful that ATOFMS mass spectral signatures were obtained for coal, peat, and wood combustion, ATOFMS source signatures have been measured previously for coal and wood combustion [Gard et al., 1997, Silva et al., 1999].
Section 3.2.1: Previous measurements of the ATOFMS source signature of coal combustion should be compared to these results. Due to the large temporal variability of the "coal-amm-nit" and "peatnit" particle classes ( Figure 5), it seems deceiving to report average diurnal trends in Figure 4. Please address this. Also, why are the other particle types shown in Figure 4 not shown in Figure 5?
Page 1048, lines 7-10: Why are adsorption, condensation, or hydrolysis expected to be the dominant processes? Please support this further. Previous studies have shown conversion from KCl to K2SO4 and KNO3 during biomass burning plume aging, for example [Gaudichet et al., 1995, Li et al., 2003, Yokelson et al., 2009].
Page 1049, lines 3-4: It seems important to note that Figure 4 shows the "non-nitrated", fresh subclasses to peak at night, as well. This point should be considered in the discussion of the nitrate formation, particularly since aged sea salt had a similar temporal pattern compared to fresh sea salt (page 1050, lines 13-15).
Section 3.2.3: Comparison with Ault et al. [2010, 2009] should also be made for the source signature of the ship emissions.
Sections 3.2.4 and 3.2.5: It is suggested to discuss more clearly the chemical and size differences between the EC particle types. These sections should be reorganized to make these differences clearer and to incorporate discussion of additional comparisons with previous ATOFMS source studies of vehicle emissions [Shields et al., 2007, Silva and Prather, 1997, Sodeman et al., 2005, Suess and Prather, 2002, Toner et al., 2008, Toner et al., 2006. Comparison with Vogt et al. [2003] may also be appropriate.
Page 1052, last paragraph: While it is noted in Figure 8 that m/z -95 may be attributed to methanesulfonate, this is not discussed thoroughly and should be, particularly given the marine origin of the air mass. Without the presence of m/z -79 or m/z -63 (confirming the presence of phosphate), it is speculative to call these particles "EC-phos-aged". Based on the temporal trend in Figure 9, it would seem as though there was a local background of these particles as well.
Page 1056, lines 20-21: What is the significance of an error of "100"? What does this mean and how specifically was it chosen?
Page 1057, lines 1-10: How does this solution compare with previous PMF studies?
In particular, comparison with other single-particle mass spectrometry studies utilizing PMF, including Owega et al. [2004] and Pekney et al. [2006], should be incorporated in this manuscript with more detailed discussion of the source results.
Page 1058, lines 1-2: What chemical species is proposed that would not be detected? Biological material has been previously detected by single-particle mass spectrometry using 266 nm radiation for laser desorption/ionization [Pratt et al., 2009, Russell, 2009].
Page 1058, lines 2-7: This comparison of previous results by Hellebust et al. should be expanded in this manuscript given the similarity of the studies.
Page 1060, lines 22-24: The reasoning for biogenic SOA and crustal dust as missing ATOFMS particle types is not well supported and should be discussed further. The authors are directed to previous cases of missing particles (likely pure ammonium sulfate), where light absorption and scattering were utilized to identify these particles that were not chemically analyzed [Spencer et al., 2008]. This conclusion should be examined further and reconsidered.
Technical corrections:
Page 1036, lines 2-5: This sentence is misleading as it implies that the ATOFMS was used for quantitative measurements. Please rephrase to make the product(s) of the ATOFMS measurements clearer.
Page 1036, lines 10-11: Canada is considered to be part of North America. Please fix this phrasing. Also, note this mistake on page 1044 (line 4) and page 1053 (lines 13-14).
Glucose-Insulin Therapy, Plasma Substrate Levels and Cardiac Recovery After Cardiac Ischemic Events
Introduction The potential usefulness of glucose-insulin therapy relies to a large extent on the premise that it prevents hyperglycemia and hyperlipidemia following cardiac ischemic events. Methods In this review we evaluate the literature concerning plasma glucose and free fatty acid levels during and following cardiac ischemic events. Results The data indicate that hyperlipidemia and hyperglycemia most likely occur during acute coronary ischemic syndromes in the conscious state (e.g. acute myocardial infarction) and less so during reperfusion following CABG. This is in accordance with observations that glucose-insulin therapy during early reperfusion post CABG may actually cause hypolipidemia, because substantial hyperlipidemia does not appear to occur during that stage of cardiac surgery. Discussion Considering recent data indicating that hypolipidemia may be detrimental for cardiac function, we propose that free fatty acid levels during reperfusion post CABG with adjunct glucose-insulin therapy need to be closely monitored. Conclusion From a clinical point of view, a strategy directed at monitoring and thereafter maintaining plasma substrate levels in the normal range for both glucose (4–6 mM) and FFA (0.2–0.6 mM), as well as stimulation of glucose oxidation, promises to be the most optimal metabolic reperfusion treatment following cardiac ischemic episodes. Future (preclinical and subsequently clinical) investigations are required to investigate whether the combination of glucose-insulin therapy with concomitant lipid administration may be beneficial in the setting of reperfusion post CABG.
Introduction
Ever since the introduction of glucose-insulin-potassium therapy in the setting of cardiac ischemic diseases [1], this therapy has gone through alternating periods of waxing and waning attention. Although originally devised to prevent intramyocardial hypokalemia during ischemia and reperfusion, the therapy is nowadays mainly advocated to reduce the hyperglycemia and hyperlipidemia that have been deemed to be present during the reperfusion period.
The first part of this review entails a critical appraisal of the literature concerning plasma substrate concentrations following reperfusion of ischemic cardiac events. We hope to convince the reader that, although plasma levels of free fatty acids (i.e., non-esterified fatty acids) and glucose may be elevated during reperfusion, this certainly is not always observed. Substrate availability is highly influenced by the specific standard operating protocols used by the hospital (protocols for fasting, anesthesia techniques, heparin use, inotropic therapy), the stress level of the subject (conscious versus anaesthetized) and the medical characteristics of the patient. The availability of substrates in the blood stream is a major determinant of myocardial metabolic activity. Metabolic treatment of the ischemic heart during reperfusion, therefore, can only be successful when it is based on accurate knowledge concerning the pre-treatment (i.e., baseline) levels of circulating substrates of the patient.
The second part of this review is directed at how glucose-insulin therapy may affect substrate availability during reperfusion and how these changes may affect cardiac recovery following an ischemic episode through specific alterations in cardiac metabolism. From a clinical point of view, a strategy directed at keeping plasma substrate levels within the normal range (thus preventing both hyper- and hypoglycemia and hyper- and hypolipidemia) and at stimulation of glucose oxidation seems to hold the most promise for optimal treatment of the heart following cardiac ischemic episodes.
Clinical factors affecting substrates
In the healthy, fed individual, blood glucose levels range between 4-6 mM (70-110 mg/dl) and free fatty acid (FFA) levels between 0.2-0.6 mM. However, in the perioperative condition considerable changes in these substrate levels may occur as a consequence of the specific operation protocol. Patients scheduled for elective cardiac operations are usually requested to refrain from food intake 12-24 h before the operation. This fasting procedure results in low insulin levels (removing insulin inhibition of lipoprotein lipase (LPL)), increased FFA concentration and decreased glucose concentration [2,3]. The next confounding factor is the type of anesthetic regimen chosen. Many anesthetics affect plasma glucose and insulin levels differently, with minor effects on plasma FFA. We recently showed that, while pentobarbital and sufentanil-propofol were without effect on plasma glucose concentration, volatile anesthetics and α2-agonists (e.g. xylazine, medetomidine) resulted in hyperglycemia and thus deregulation of glucose homeostasis [4]. Anesthesia techniques based on the use of either opioids or volatile anesthetics are both frequently used for the anesthetic management of patients undergoing cardiac operations. Furthermore, although cardiac operations necessitate the use of the anti-coagulation agent heparin, it has long been known that heparin can result in a several-fold increase in plasma FFA levels by inducing the release of hepatic and endothelial-bound LPL [5,6]. This heparin effect is often only present during the early reperfusion phase following bypass operations, because heparin is usually quickly antagonized upon termination of the bypass. Less well known is the ongoing lipolysis in the blood sample due to the presence of this detached LPL, especially after the in vivo administration of heparin [7,8]. Without special precautions, e.g. inhibition of ex vivo lipolysis with a potent lipoprotein lipase inhibitor [9], plasma FFA may be substantially overestimated by the laboratory. We believe that this aspect has often been neglected, resulting in an overestimation of FFA under conditions of heparin administration (usually early reperfusion). In addition, acute myocardial infarction in the conscious state and post-ischemic reperfusion are characterized by high levels of catecholamines from endogenous sources and/or from the inotropic support given by the clinician [10,11]. These high catecholamine levels have also been hypothesized to inhibit insulin secretion and to activate lipolysis of adipose tissue, resulting in increased plasma FFA [12,13].
Therefore, it appears that, in the peri-operative patient, the standard clinical treatment and the applied biochemical determination of especially FFA are critical issues when reviewing substrate levels during reperfusion. Each surgical/anesthesia team should determine the metabolic profile associated with its perioperative protocols to determine if improvement in cardiac function may be anticipated with the use of metabolic support. To illustrate this important point, Table 1 shows that certainly not all studies report increased FFA and glucose levels following ischemia. The table indicates that FFA levels are most likely to be elevated during conditions of acute coronary syndromes in the conscious state (e.g. acute MI, [14][15][16]). During conditions of actual reperfusion, such as following PTCA after acute MI [16][17][18][19][20] or after less severe ischemia, such as following CABG in the anaesthetized condition [21][22][23][24], FFA levels are less likely to be elevated.
It should be noted that, although elevated FFA levels have long been viewed as a risk factor, there is evidence that elevated glucose levels are also associated with increased risk [25,26]. However, the table indicates that hyperglycemia is certainly not always present upon reperfusion and when it is present, the rise in plasma glucose is usually modest. An exception to this may be the diabetic patient [24] who is more likely to develop hyperglycemia. It therefore seems that, before embarking on standardizing the use of insulin therapy immediately following cardiac ischemic events on the premise of preventing hyperglycemia and hyperlipidemia, the metabolic profile of the patients and the clinical care treatment should be documented. Ideally, insulin therapy should only be given on indication of acutely monitored elevated glucose and/or FFA plasma levels. Alternatively, in a subset of each specific group of patients, together with the clinical care treatment given, the metabolic profile is characterized in terms of plasma glucose and FFA levels before a decision is made whether the whole group of patients is treated with insulin therapy. Only when hyperglycemia and hyperlipidemia are present should one use insulin therapy to combat these elevated circulating glucose and FFA levels.
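To illustrate the indication-based strategy advocated here, a minimal sketch of the screening logic is shown below. The thresholds are simply the normal ranges quoted in this review (glucose 4-6 mM, FFA 0.2-0.6 mM); the function and variable names are hypothetical, and the snippet is an illustration of the decision rule, not a validated clinical decision tool.

```python
# Illustrative sketch only: classifies measured plasma substrate levels against
# the normal ranges cited in this review and summarizes the substrate profile.
# Not a validated clinical tool; thresholds and names are illustrative.

GLUCOSE_RANGE_MM = (4.0, 6.0)   # normal fed plasma glucose, mM
FFA_RANGE_MM = (0.2, 0.6)       # normal plasma free fatty acids, mM

def classify(value, low, high):
    """Return 'low', 'normal', or 'high' relative to a reference range."""
    if value < low:
        return "low"
    if value > high:
        return "high"
    return "normal"

def substrate_status(glucose_mm, ffa_mm):
    """Summarize a patient's substrate profile at reperfusion."""
    return {
        "glucose": classify(glucose_mm, *GLUCOSE_RANGE_MM),
        "ffa": classify(ffa_mm, *FFA_RANGE_MM),
    }

# Example: a post-CABG sample with normal glucose but low FFA, the profile
# this review warns may be aggravated by routine glucose-insulin therapy.
print(substrate_status(glucose_mm=5.2, ffa_mm=0.1))
# -> {'glucose': 'normal', 'ffa': 'low'}
```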
Glucose-Insulin therapy and substrate availability
Because plasma substrate availability may limit metabolism to a large extent [27,28], any change in substrate availability instigated by insulin therapy will result in shifts in the major cellular metabolic pathways involved in the generation of ATP, i.e., glycolysis, glucose oxidation and fat oxidation. The administration of large amounts of insulin will decrease circulating glucose through activation of GLUT transporters that facilitate cellular glucose uptake and through inhibition of hepatic gluconeogenesis. The increased risk of developing temporary periods of hypoglycemia is one of the well-known adverse effects of insulin therapy [29]. Insulin therapy is therefore almost always combined with glucose administration to prevent hypoglycemia. As a result of this increased cellular glucose uptake, glucose-insulin therapy will increase glycolysis and glucose oxidation. Insulin will also reduce plasma FFA concentrations, resulting in decreased fatty acid oxidation, promotion of TAG storage and possibly hypolipidemia. This lowering of lipids is mainly due to insulin-mediated inhibition of the hormone-sensitive lipase in adipose tissue, the determining enzyme for whole-body lipid fuel availability [30], and to an insulin-induced, CD36-mediated increase in FFA uptake [31]. Interestingly, the development of hypolipidemia with glucose-insulin therapy has received far less attention in current clinical practice than hypoglycemia [32]. This probably stems from the premise that fatty acids are commonly believed to be elevated at reperfusion and that any decrease in fatty acid metabolism is always associated with improved cardiac recovery from ischemia. However, we now know that hyperlipidemia is not always present at reperfusion (see above), and that too little fatty acid oxidation can be detrimental for cardiac recovery (see below). In the following paragraphs, the insulin-mediated changes in the three major ATP-generating pathways (increased glycolysis, increased glucose oxidation, decreased fatty acid oxidation) are discussed in relation to cardiac recovery from ischemia. It is well documented that the potential usefulness of GIK also involves non-metabolic mechanisms such as the activation of cell survival pathways leading to cardioprotection through reduction in apoptosis [33][34][35][36] and reduced inflammatory [22] and neurohumoral [11] responses. However, in this review we focus primarily on the GIK effects on plasma substrate levels and metabolic pathways.
Increased glycolysis
The non-oxidative breakdown of glucose to pyruvate or lactate, i.e. glycolysis, is commonly increased in the postischemic heart [37][38][39]. In addition, important cardiac pathologies such as hibernation, hypertrophy and heart failure are associated with increased glycolytic activity, probably as a result of recurrent episodes of limited oxygen supply [40][41][42]. The use of glucose-insulin therapy will further enhance post-ischemic glycolysis, provided that hypoglycemia does not occur. The functional benefit of increased glycolysis is in keeping with the observation that the heart requires glycolysis during early reperfusion [43,44]. Although glycolytically produced ATP only amounts to ~2% of total ATP production in the presence of oxygen, this ATP seems to be preferentially used for membrane functions allowing recovery of ionic homeostasis (preventing Ca2+ overload) during early reperfusion. In addition, acidification at the moment of reperfusion may be beneficial, because it inhibits mitochondrial permeability transition pore opening [45]. Thus, increased glycolysis during reperfusion is protective in conditions of recurrent ischemia and reperfusion. However, increased glycolytic flux during zero-flow ischemia is detrimental [46] and may result in increased acidification during ischemia. Finally, the postischemic increase in glycolysis will also slow down cellular energy transfer [47][48][49] and may decrease the rate of maximal, sustainable oxidative phosphorylation [50,51]. It therefore seems that increasing glycolysis during reperfusion with glucose-insulin therapy will protect against reperfusion damage at the expense of slower energy transfer. Because it is unlikely that delayed energy transfer will be detrimental for the heart during acute reperfusion, increasing glycolysis during reperfusion seems to entail an overall protective effect against reperfusion damage.
Increased glucose oxidation
Although increased glucose oxidation is not always observed following ischemia [27,37,52], most studies have shown that the activation of glucose oxidation in the reperfusion phase improves the recovery of cardiac performance [53,54]. Thus glucose-insulin therapy will be beneficial through activation of this pathway, provided that no hypoglycemia occurs that may actually decrease glucose oxidation. In addition, the therapy will reduce hyperglycemia, a risk factor that has been shown to contribute independently to poor recovery [55].
Currently, the two most prevailing hypotheses explaining the protection offered by increased glucose oxidation involve increased mechanical efficiency and decreased proton load, respectively. With respect to the former, it has recently been shown that oxygen consumption for the same amount of work increased considerably (from 10% at high workload to 48% under unloaded conditions) when switching from a predominantly glucose-metabolizing condition to a predominantly fatty acid-metabolizing condition in in vivo pigs [56]. This decreased efficiency with increased fatty acid oxidation may be explained by a fatty acid-induced decrease in the P:O ratio, in addition to uncoupling of oxidative consumption and futile, energy-consuming cycling of fatty acids in and out of the intracellular triglyceride pool [56]. The second hypothesis is associated with the so-called coupling of glycolysis with glucose oxidation, i.e. when pyruvate produced by glycolysis is subsequently oxidized in the TCA cycle, the proton production from hydrolysis of glycolytically produced ATP is zero [27]. However, if glycolysis is uncoupled from glucose oxidation, and pyruvate is not oxidized further, there is a net production of two protons from each glucose molecule metabolized. This may occur with increased fatty acid metabolism. Fatty acids inhibit glucose oxidation more than glycolysis, and glycolysis more than glucose uptake, indicating that with increasing fatty acid oxidation the rate of glycolysis increasingly deviates from that of glucose oxidation, resulting in the uncoupling of glycolysis from glucose oxidation [57]. The resulting acidosis may then cause decreased contractility and increased calcium overload during the reperfusion period. Increased glucose oxidation will then prevent ongoing acidosis during the early reperfusion period, thereby improving recovery of mechanical performance. However, this theory is not without controversy. It has been shown that insulin-glucose administration in an isolated heart model improved functional recovery compared with the administration of dichloroacetate (DCA), a direct stimulant of carbohydrate oxidation, whereas DCA had a greater impact on increasing glucose oxidation [58]. In that study it was found that the protection was not associated with increased coupling of glucose oxidation to glycolysis. A recent study examining mechanical recovery following low-flow ischemia in GLUT1-overexpressing mouse hearts reached similar conclusions [59]. A uniform mechanistic, cellular explanation for the advantageous effects of increasing glucose oxidation is therefore currently lacking.
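To make the proton bookkeeping behind the coupling hypothesis explicit, a simplified sketch of the stoichiometry is given below; it follows the standard formulation in the cited literature [27,57], with cofactors (NAD+/NADH, water) balanced implicitly.

```latex
% Glycolysis uncoupled from glucose oxidation (pyruvate reduced to lactate):
\mathrm{glucose} + 2\,\mathrm{ADP} + 2\,\mathrm{P_i} \rightarrow 2\,\mathrm{lactate}^- + 2\,\mathrm{ATP}
% Hydrolysis of the glycolytically produced ATP:
2\,\mathrm{ATP} \rightarrow 2\,\mathrm{ADP} + 2\,\mathrm{P_i} + 2\,\mathrm{H}^+
% Net result per glucose molecule when glycolysis is uncoupled:
\mathrm{glucose} \rightarrow 2\,\mathrm{lactate}^- + 2\,\mathrm{H}^+
% When pyruvate is instead oxidized in the TCA cycle, these protons are
% consumed by oxidative metabolism and the net proton production is zero.
```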
In general, the literature suggests that treatment aimed at stimulation of glucose oxidation while normoglycemia is maintained (i.e. without the concomitant development of hyperglycemia or hypoglycemia) seems to be promising and safe with respect to improved cardiac function in the post-ischemic period.
Decreased fatty acid oxidation
At the moment, the prevailing opinion is that increased fatty acid oxidation during the reperfusion period is detrimental to the functional recovery of the heart. However, although some studies demonstrate that increasing FA oxidation is associated with decreased functional recovery in the reperfusion phase [57,60,61], other studies showed unaltered [62] or even improved recovery with increased FA oxidation [37,63]. It has been suggested that the effects of lipids are primarily detrimental when present at high levels during the ischemic period, but not during reperfusion [64]. That FA oxidation during reperfusion can be protective is also supported by studies showing decreased functional recovery of hearts from fatty acid transporter-knockout animals that lack the sarcolemmal fatty acid transporter CD36 [65], an observation similar to that found for GLUT4-knockout animals [66]. Recently, Tuunanen et al. [67] demonstrated that acute depression of plasma FFA by acipimox, an inhibitor of lipolysis, depressed cardiac work in healthy and diseased humans, with additional decreases in cardiac efficiency in heart failure patients. Acipimox decreased FFA concentrations from 0.6 mM to 0.1 mM. Interestingly, the very few studies examining the effect of insulin therapy on FFA levels also indicate that this therapy may reduce FFA to similar levels. Addo et al. [16] reported FFA levels of 0.2 mM with GIK therapy in patients undergoing primary percutaneous coronary intervention (without taking into account sample lipolysis). We recently demonstrated FFA levels around 0.1 mM with insulin therapy following CABG procedures [23]. These low FFA levels are commensurate with the observation that FFA concentrations are often not increased during reperfusion (see "Clinical factors affecting substrates" above). From these observations it may be hypothesized that there exists a requirement for a minimal concentration of plasma FFA in order for the heart to perform optimally. As for glucose, the data suggest that the prevention of hypolipidemia as well as hyperlipidemia during reperfusion may be an important consideration for any metabolic therapy, including glucose-insulin therapy.
Finally, one complicating factor concerning FA effects that may explain some of the controversy is that there are "good" and "bad" fatty acids. Saturated fatty acids such as palmitate, in particular, appear to be detrimental, even inducing apoptosis [68], whereas other fatty acids (mainly (poly)unsaturated fatty acids) appear to be beneficial in the reperfusion period [69]. Future preclinical studies have to demonstrate the potential beneficial effect of combining glucose-insulin therapy with additional lipids on myocardial energy balance, infarct size and mechanical function following ischemia and reperfusion. Due to the ambiguity of the cardiac functional effects of changes in FA metabolism, clinically directed manipulation of FA metabolism must be executed with great care.
Summary
In the peri-operative patient, the standard clinical treatment and applied biochemical determination of especially plasma FFA are critical issues when reviewing substrate levels during reperfusion. Each surgical team should determine the metabolic consequences of its perioperative protocol to determine if improvement in cardiac function may be anticipated with the use of metabolic support. FFA levels are most likely to be elevated during conditions of acute coronary syndromes in the conscious state (e.g. acute MI) and less so during reperfusion post CABG. Consequently, GIK therapy in conditions of CABG may result in hypolipidemia that can potentially limit the beneficial effects of this therapy. Whether hypolipidemia is disadvantageous during post CABG reperfusion awaits further pre-clinical and subsequently clinical studies. At the moment, monitoring of FFA levels during GIK therapy, as is commonly performed for glucose levels, is strongly recommended.
From a clinical point of view, a strategy directed at monitoring and thereafter maintaining plasma substrate levels in the normal range for both glucose (4-6 mM) and FFA (0.2-0.6 mM), and at stimulation of glucose oxidation, seems to hold the most promise for optimal metabolic reperfusion treatment following cardiac ischemic episodes.
The Use of Cavitron Ultrasonic Surgical Aspirator for High-Risk Neuroblastoma with Image-Defined Risk Factors in Children
Aim of the study: The cavitron ultrasonic surgical aspirator (CUSA) has gained popularity in adult surgical oncology, but its application in children is limited to liver surgery and neurosurgical procedures. The complete resection of neuroblastoma with image-defined risk factors (IDRFs) is still considered one of the most difficult procedures to achieve in pediatric surgical oncology, with a high morbidity rate and potential risk of intraoperative mortality. The aim of our study is to describe the application of ultrasonic dissection in neuroblastoma with IDRFs. Methods: A retrospective study was performed, analyzing patients operated on from 2000 to 2018. Patient characteristics, resection completeness, and postoperative surgical and oncology outcomes were analyzed. Main results: Twenty-six patients with high-risk neuroblastoma and IDRFs were operated on in the study period with a CUSA. A complete macroscopic resection was performed in 50% of patients, while the other half was operated on with minimal residual (<5 mL). Six post-operative complications occurred without the need for surgery (Clavien–Dindo < 3). The overall survival was 50%, with a median follow-up of 69.6 months (5.6–140.4). Conclusions: The application of the CUSA in neuroblastoma with IDRFs can be considered an effective and safe alternative technique to achieve a radical resection.
Introduction
Neuroblastoma (NB) represents the most frequent extracranial solid tumor in children, with a heterogeneous presentation in different anatomic sites and a typical behavior towards vascular structures, rarely infiltrating the tunica adventitia but displacing and surrounding the vessels [1] (Figure 1). Vascular involvement currently represents the most challenging scenario in children undergoing surgical resection for NB. In the last several years, the International Neuroblastoma Risk Group Staging System (INRGSS), based on NB vascular characteristics, was developed to guide the timing of surgery by identifying image-defined risk factors (IDRFs) [2,3].
In 2007, Kiely described the most frequently applied technique for the surgical resection of NB with vascular involvement, describing tumor incision with piecemeal removal and a vascular dissection to the tunica media with a cold scalpel [4].
Despite the introduction of several technological advances, including robotic surgery and other perioperative tools to guide surgical resection such as three-dimensional modeling, the vascular dissection of neuroblastic tumors with IDRFs remains a challenging procedure with potential life-threatening events and organ injury. A radical resection represents a controversial topic in those tumors with favorable biology, while complete resection has been shown to be related to better oncological outcomes [5,6].
The use of the Cavitron Ultrasonic Surgical Aspirator (CUSA) was first described by Hodgson et al. in 1979 [7] and extensively applied over the years in adult and pediatric neurosurgery due to its ability to respect vascular tunica during tumor dissection.
In pediatric literature, the application of the CUSA has only been reported by Loo in 1988 as a feasible technique for neuroblastoma resection in a case series of five patients; nevertheless, this approach was never further reported [8].
This article describes the technical detail of the CUSA for pediatric neuroblastoma resection with IDRFs to achieve a complete macroscopic resection.
Technical Procedure
Under general anesthesia, children were positioned in dorsal decubitus. A transverse supraumbilical incision was performed instead of a median laparotomy, due to the transversal abdominal development during childhood. In the case of preoperative IDRFs involving the porta hepatis and supra-mesenteric aorta, a reverse Mercedes incision was performed to obtain sufficient vascular control.
Retroperitoneal space access was obtained after right/left colon mobilization, starting the dissection from the avascular line of Toldt with peritoneal mobilization of the bowel. The aorta and inferior vena cava were isolated at their cranial and caudal extratumoral locations with colored silicone vascular loops in order to control any eventual intraoperative hemorrhage.
Tumor dissection was started by combining monopolar and bipolar electrosurgery until the identification of the vascular encasement locations. En-bloc resection was avoided to prevent any organ failure.
The neuroblastoma capsule was initially opened using monopolar electrosurgery, and then the CUSA ® Excel (Integra LifeSciences, Princeton, NJ, USA) was applied for the parenchyma dissection.
Tumor fragmentation is achieved by the vibrational energy of the surgical tip, driven by an alternating voltage at 36 kHz; the suction system of the instrument, with a vacuum pump that provides a maximum suction pressure of 640 mmHg, allows for continuous tissue contact with the tip, enhancing tumor fragmentation. An irrigation system is supplied by an adjustable pump connected to the handpiece with an adjustable flush of saline or lactated Ringer's solution; the continuous flux from the irrigation hole of the tip clears the adjacent suction hole and ensures adequate aspiration.
The energy created by the ultrasonic waves fragments parenchymal cells, owing to their high water content, and selectively spares blood vessels and ducts (such as bile ducts), owing to their low water and high collagen content.
Tissue selectivity is related to the quantity of collagen and/or elastin, which increases tissue strength and the fragmentation power required. In addition, the different tissue strengths (parenchymal versus vascular structures) perceived by the CUSA tip provide tactile feedback to the surgeon, who can feel the difference in tissues encountered during dissection.
Different techniques of CUSA were described by Honda in 2020 and defined as boring, shoveling, and back-scoring approaches [9].
The boring technique consists of the insertion of the tip of the CUSA straight into the parenchyma, aspirating and removing tumoral tissue in a single movement.
Shoveling is the preferred technique by neurosurgeons, using about one-third of the metal tip as a spoon with a single motion.
The back-scoring technique consists of tumor scratching with the end of the metal tip, pursuing the scratching along the dissection line, and pulling out the CUSA handpiece.
Due to post-chemotherapy tumor modifications, calcifications can make the shoveling technique (the most common approach for liver and neurosurgical resections) dangerous, owing to the greater discrepancy in solidity between the vascular adventitia and media tunicae and the calcified parenchyma. The boring technique, consisting of the blind insertion of the CUSA tip straight into the parenchyma down to the blood vessel wall, was never used due to the high risk of vascular perforation.
The back-scoring technique was preferred for the controlled dissection along the cutting line, applying balanced CUSA use with a consistent amount of pressure (Figures 2 and 3). Despite the CUSA being designed to stop the parenchyma dissection when the vascular adventitia tunica is encountered, vessel perforation is still possible if the instrument's tip pressure is maintained on the same point or if a previous vascular dissection was performed.
To reduce the risk of perforation, a continuous dissection is required to perform the back-scoring technique in safe conditions.
In the case of vascular lesions, the CUSA allows vessel sealing by the stanching principle [9]. Stanching allows bleeding point control using thermal denaturation with compression by the flank of the CUSA electrocautery tip and soft coagulation. The lateral parts of the CUSA cylindrical metallic tip are atraumatic, as longitudinal ultrasonic vibration provides tissue fragmentation and aspiration only at the level of the terminal part of the tip.
This study was approved by the local Institutional Review Board at Bicêtre Hospital and conducted according to the principles of the Declaration of Helsinki. Informed consent was obtained from all patients' parents.
Results
In the analyzed period (2000 to 2018), 43 patients were admitted to Bicêtre Hospital for stage L2 and M neuroblastoma according to the INRGSS classification.
Among these, 26 children (60.5%) were operated on with the technical support of the CUSA and included in the study. A total of 15 (57.7%) patients were male and the median age at diagnosis was 37 months (range: 0-174 months).
Seven nephrectomies (26.9%) were required for tumoral infiltration at the level of the renal hilum to obtain macroscopic radicality in these patients with unfavorable tumor biology. One intestinal resection was performed for tumoral involvement of the duodenum. No massive bleeding occurred during the surgical procedures, and 11 patients required transfusion, with a median estimated blood loss of 160 mL (range: 80-200).
A total of six patients (23.1%) presented surgical complications (three cases of chylous ascites and three acute renal insufficiencies), but none of them was a grade III Clavien-Dindo requiring re-intervention.
Macroscopic gross resection was achieved in 13 patients (50%), and minimal residual disease (<5 mL, >90% resection) in the other half of the patients.
Discussion
The surgical treatment of high-risk neuroblastoma remains a controversial and challenging topic in the field of pediatric surgical oncology. The IDRF assessment represents a cornerstone of neuroblastoma management, guiding oncologists and surgeons on the timing and surgical approach of these patients [10][11][12][13][14][15][16].
In addition to the higher risk of intra- and post-operative surgical morbidity and mortality, the presence of IDRFs is related to a poor oncologic outcome [17][18][19][20][21][22].
Current literature shows that pre-operative chemotherapy has a variable effect on IDRFs, with reported effectiveness of 38.5-81.8% in reducing IDRFs prior to surgery [23][24][25][26]; however, an increase in IDRFs during chemotherapy has also been reported [27].
Recent literature has shown that surgical completeness is related to an increased survival rate in patients with high-risk neuroblastoma, however, complete resection remains challenging due to the presence of IDRFs [28]. The CUSA introduction for IDRF management follows the same principles of hepatobiliary and neurosurgical application, preventing vascular wall lesions during tumor dissection.
Renal pedicle involvement is the most frequent IDRF, and nephrectomy is often required in high-risk neuroblastoma to achieve a radical resection, with a risk of progressive reduction in glomerular function at long-term follow-up [29]. However, the other abdominal IDRFs related to organ-supplying vessels (including the celiac axis, porta hepatis, and superior mesenteric artery) cannot be managed in the same way, as their injury is potentially related to life-threatening events (liver and intestinal failure).
The CUSA technique followed the surgical oncology principle introduced by Kiely [4], replacing the cold knife vascular dissection with an ultrasonic dissection. The cold knife technique introduced the concept of tumor opening straight to the vascular adventitia tunica, respecting the dissection plane, and then using the monopolar electrocautery.
The proposed CUSA technical approach offers several advantages over monopolar electrocautery and cold knife dissection, allowing a vessel-sparing technique on a dry dissection plane without the tissue modification caused by the diffusion of monopolar electricity to vascular tissues. An additional advantage is the possibility of combining dissection and vascular thermal sealing in case of punctiform perforation.
During the study period, all patients with neuroblastoma and positive IDRFs were operated on with the CUSA approach; therefore, in the absence of a case-control comparison, we compared our series with the available literature and the largest reported series of patients with similar preoperative characteristics [25,26,28].
The largest surgical series of NB patients with IDRFs published in Europe during our study period reported that 8.4% of patients did not undergo surgery because of tumor inoperability, whereas all patients with IDRF persistence after chemotherapy were operated on in our center.
Resection completeness has been described differently in the literature, using either percentages or macroscopic/minimal residual tumor terminology. Compared with the American and European series, which reported 30% and 24% incomplete resection, respectively, the CUSA approach allowed greater completeness of resection.
The nephrectomy rate was comparable with the current literature, while no resection of a healthy organ, as has been reported in the largest American and European studies, occurred during surgery [28,30].
Data from the patients treated with this technique in our institution showed a low rate of intra-operative and post-operative complications; intra-operative mortality, which is still reported in the largest reported series, never occurred in our series [28,31]. In our cohort, oncology follow-up showed a similar complete remission rate compared to the largest surgical series of patients with high-risk neuroblastoma [28,30].
The application of the CUSA in children has several limitations that are mainly related to the cost-effectiveness of new technology applications in pediatric surgery and the learning curve needed to develop these advances in the pediatric field [31].
Advances in surgical technology are usually applied with a consistent delay when comparing pediatric and adult surgery, which is mainly due to the low case load in pediatrics and related technology costs that are difficult to justify.
Currently, the CUSA is mostly used in pediatric neurosurgery and hepatobiliary surgery [32,33], and not all pediatric hospitals have both of these surgical units.
The multidisciplinary interventional surgical platform, and the team's base at the national liver surgery and national transplantation center, gave surgeons access to the CUSA and to the related technical skills.
CUSA use, like other recent technologies applied in children [34], requires appropriate training. Low-cost simulation models were described to improve CUSA skills [35] and can be considered in the absence of an adequate case load in hepatobiliary surgery.
The concentration of care is another possible tool to improve the application of new technologies and surgical outcomes in patients with neuroblastoma, as recently reported by van der Steeg and colleagues [36]. The concentration of care has been related to improved surgical outcomes in high-volume hospitals and can be considered to optimize healthcare resources in those centers with experienced pediatric surgical oncology and the availability of modern technologies.
Conclusions
The proposed technique combines new technology advances and established surgical oncology approaches for neuroblastoma with IDRFs.
The application of the CUSA could be considered a safe complementary technique for vascular dissection in high-risk neuroblastoma, providing an extensive resection to improve children's oncology outcomes.

Data Availability Statement: The data are available upon request to the corresponding author.
Conflicts of Interest: The authors declare no conflict of interest.
Economic valuation of informal care in cerebrovascular accident survivors in Spain
Background Cerebrovascular diseases are the second leading cause of death worldwide and one of the health conditions which demand the highest level of social services. The aim of this study was to estimate the social cost of non-professional (informal) care provided to survivors of cerebrovascular accidents (CVA) with some type of disability in Spain. Methods We obtained data from the 2008 Survey on Disability, Independent Living and Dependency (EDAD-08) on the main characteristics of individuals who provide informal care to survivors of CVAs in Spain. We estimated the cost of substituting informal care in favor of formal care provided by professional caregivers (proxy good method) and performed a statistical analysis of the relationship between degree of dependency and number of care hours provided using ordinary least squares regression. Results The number of disabled people diagnosed with CVA totaled 1,975 (329,544 people when extrapolating to the national population using the elevation factor provided by EDAD-08). Of these, 1,221 individuals (192,611 people extrapolated to the national population) received at least one hour of informal care per week. The estimated hours of informal care provided in 2008 amounted to 852 million. The economic valuation of the time of informal care ranges from 6.53 billion euros (at 7.67 euros/hour) to 10.83 billion euros (when calculating each hour of care at 12.71 euros). The results of our statistical analysis highlight the importance of degree of dependency in explaining differences in the number of hours of informal care provided. Conclusions The results of our study reveal the high social cost of cerebrovascular accidents in Spain. In addition, evidence is presented of a correlation between higher degree of dependency in CVA survivors and greater number of hours of care received. An integral approach to care for CVA survivors requires that the caregivers’ role and needs be taken into account.
Background
Cerebrovascular diseases (CVD) are the second leading cause of death after ischemic heart disease (IHD), and the third leading cause of disability-adjusted life years in high-income countries after IHD and lower respiratory infections. The World Health Organization estimated that in 2010 CVD caused 5.87 million deaths across the globe (11% of all causes of death) and 102 million disability-adjusted life years [1,2].
In Spain, CVA is the leading cause of death among women and the third among men [3]. In addition to deaths, the effects of the disease on survivors and the consequent loss in quality of life must be taken into consideration when assessing its impact [4,5]. People who survive a CVA usually suffer negative long-term effects which, in many cases, reduce the person's autonomy and lead to functional dependency [6].
Reviews of CVA cost studies report that there are fewer economic analyses of the costs of CVA than there are for other diseases [7][8][9]. However, the available studies stress that CVA is an acute and highly costly condition which accounts for a significant portion of overall healthcare spending. Evers et al. estimated that 3-4% of health spending in high-income countries is spent on CVA, mainly reflecting the direct cost of hospitalization during the first year of survival [9]. However, studies that address the cost of CVA from a broader (social) perspective indicate that non-medical costs (formal and informal care and productivity losses) may far outstrip medical costs [10][11][12][13].
As far as we know, only one study published to date has applied a cost-of-illness design using a national sample to CVD in Europe [14]. This study revealed that non-healthcare costs accounted for 38.9% of the overall costs of the disease, with informal care being the most substantial item among non-healthcare costs (54.4% of this item). Studies which calculate such global estimates are uncommon in the literature, since they require very rich sources of data. In Spain, research has been published describing informal care in patients who have had a CVA, although these studies were either performed on small samples of patients or used mathematical models [4,[15][16][17], and only one such study [15] analyzed the influence of the degree of dependence on care hours received. Informal care is a societally relevant resource, since in the absence of informal caregivers the care time needed by the patient must be substituted by professional social services (formal care). Data on the informal care provided would reveal the societal burden that could be reduced by avoiding CVA or, when a CVA occurs, by preventing patients from progressing to a state of great dependency.
The main purpose of this study was to assess the social cost of informal care associated with the loss of autonomy (dependency) among CVA survivors in Spain and to study the association between the degree of dependency and the total number of hours of care received by these individuals.
Methods
The Survey on Disability, Independent Living and Dependency 2008 (EDAD-08) was designed by adapting the 1999 edition of the Survey on Disabilities, Impairments and State of Health to the current social and demographic situation and the new International Classification of Functioning, Disability and Health [18]. The main improvement of EDAD-08 over the 1999 study is its adaptation to the new International Classification of Functioning, Disability and Health (ICF) [19]. Previously, interviewees were asked directly if they had a disability. In the EDAD-08 study, however, questions examined the limitations in activity experienced by the subject, as proposed by the ICF. Interview questions like those in the EDAD-08 study make it possible to identify the limitations and disabilities from a wider and more objective point of view.
The households section of the EDAD-08 study consists of various questionnaires. These include the Household Survey (first phase), followed by two individual surveys, the Survey on Disabilities for persons over 6 years of age and the Survey on Limitations for children up to five years on the one hand, and a questionnaire for the primary caregivers on the other (second phase). The data were collected between November 2007 and February 2008, and all questionnaires and results are available on the Spanish National Statistics Institute website [18]. In order to fulfill the objective of providing national, regional, and provincial estimates with a certain degree of reliability, the survey sample comprised 96,075 households and stratified two-stage sampling was used (for details, see reference [20]). Of the total sample, 22,795 persons with disabilities were identified and interviewed in depth. Personal interviews were used to collect individual information; in exceptional cases, these face-to-face interviews were complemented with telephone interviews. Among the variables included in the EDAD-08 survey were the personal characteristics of the persons with disabilities (including whether or not they received personal care and, if so, the amount of time of care received), the characteristics of the caregivers (when identified), and the activities provided under informal care.
The number of disabled people diagnosed with CVA amounted to 1,975 individuals (329,544 people extrapolated to the national population). Of these, 1,221 (192,611 people extrapolated to the national population) received at least one hour of informal care per week.
In this study, informal care was defined as the attention provided to an individual with limitations in autonomy when conducting one or more of his/her daily activities, this attention being provided by persons who are not professional social workers. The key attribute of these care persons is that the determining factor causing the informal caregiver to accept the role is a family or social bond between him/her and the person with limited autonomy. Receiving compensation from another family member for these services does not disqualify the service as nonprofessional in nature.
In order to classify individuals who have limitations in their autonomy by degree of severity, a correspondence was assigned between the questions contained in the EDAD-08 survey and the Official Dependency Scale of Spain [21].
Valuation methods
An assessment of informal care requires that a distinction be made between the cost of care provided by informal caregivers and other care costs, such as home adaptations or the acquisition of medical equipment (e.g., wheelchairs, walkers, adjustable beds, special telephones, etc.), all of which are common elements in disabling diseases. The cost estimates in this study were performed using data relating only to the time spent by caregivers providing care. There are different methods for assigning value to time [22][23][24]. Given that the EDAD-08 survey provides data to calculate the hours of care provided, we have applied the proxy good method to use these data in our research [24]. The proxy good method or market cost method values time spent on informal care based on the labor market prices of a similar market substitute. Services provided by informal caregivers were assigned a value based on what it would cost to have a professional caregiver provide the services. Doing so allowed us to determine how much it would cost to replace the informal caregivers with professionals.
Using 2008 as our base year, we designed three scenarios for estimating replacement or substitution costs. In the first scenario, the hours of care were valued at the average official wage for social services home care in the 3 autonomous regions of Spain with the lowest hourly wage [25]. The value used was 7.67 euros per hour of care. In the second scenario, we used the average hourly wage reported by all of the autonomous regions. The average hour of care was estimated at 12.71 euros. In the third scenario, the same government source was used to determine the hourly wage for care in each autonomous region. These regional wages ranged from 6.2 to 22.8 euros.
The number of hours of informal care was assessed by carefully applying various criteria to data contained in the EDAD-08 survey. First, the disabled person-or the person providing information on the household-was asked if the person with the disability received assistance or social care because of his/her disability and the average number of hours per day he/she received help from others, not including the care and services provided at day centers or other services provided by professionals. This question was used to exclude cases in which care was mainly provided by domestic workers, hired healthcare professionals, state-run social services, social services provided by non-governmental entities (NGOs, associations) or for-profit companies. Some survey respondents indicated that the disabled person received social care but did not say who provided the care or how many hours of care were given (14% of total CVA survivors who stated they needed personal attention did not indicate the hours of caregiving needed). These cases were not included in our estimate of the hours of informal care. Thus, the estimate was calculated based on the hours of care provided by family, friends, and neighbors (daughters, sons, mother, father, spouse or partner, sisters, brothers, grandmothers, grandfathers, granddaughters, grandsons, other relatives, and friends or neighbors). Lastly, we censored the maximum number of daily hours per caregiver at 16; that is, we assumed a daily period without care of eight hours per informal caregiver. Therefore, cases where the number of hours of care per day exceeded 16 hours (17-24) were counted as 16-hour days. We also estimated the hours of informal care without applying any censoring; that is, we used the hours of informal care as reported by the people who answered this question. In Spain, social services home care is made up of a set of initiatives performed in the home of the dependent person in order to cater for his/her everyday needs. It includes: (a) services related to attending to domestic or home needs (cleaning, ironing, washing, cooking and others); and (b) services related to personal care in performing the activities of daily living (bathing, dressing, helping to walk, etc.).
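As an illustration of the hours-and-valuation computation described above, a minimal sketch is given below. The censoring rule (16 hours/day) and the hourly wages of scenarios 1 and 2 (7.67 and 12.71 euros) are taken from the text, while the record structure, field names and example values are hypothetical stand-ins for the EDAD-08 microdata.

```python
# Illustrative sketch of the informal-care valuation (proxy good method).
# 'records' stands in for EDAD-08 microdata; field names are hypothetical.

DAILY_CAP_H = 16.0            # censoring: assume >= 8 h/day without care
WAGE_SCENARIOS = {            # euros per hour of substitute formal care
    "scenario_1_lowest_regions": 7.67,
    "scenario_2_national_mean": 12.71,
}

def annual_informal_hours(records, cap=True):
    """Sum weighted annual hours of informal care across survey records."""
    total = 0.0
    for r in records:
        hours_day = min(r["hours_per_day"], DAILY_CAP_H) if cap else r["hours_per_day"]
        weekly = hours_day * r["days_per_week"]
        # 'weight' is the survey elevation factor to the national population
        total += weekly * 52 * r["weight"]
    return total

def value_hours(total_hours):
    """Proxy good method: price hours at substitute professional wages."""
    return {name: total_hours * wage for name, wage in WAGE_SCENARIOS.items()}

# Toy example with two fictitious records:
records = [
    {"hours_per_day": 11.3, "days_per_week": 7, "weight": 160.0},
    {"hours_per_day": 20.0, "days_per_week": 6, "weight": 150.0},  # censored to 16
]
hours = annual_informal_hours(records)
print(round(hours), value_hours(hours))
```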
Statistical analysis
We carried out a statistical analysis to estimate the marginal effect that a higher degree of dependency had on the number of hours of informal care provided to a CVA survivor who had a disability (but not one that would render the person dependent according to the Spanish Official Scale in effect in 2008 [21]). This analysis estimated the number of additional hours of informal care that would be required if his/her dependency status changed to moderately, severely or greatly dependent. For this purpose, we designed several models using different procedures which are explained below. We created two types of estimation models where the variable under study (number of hours of informal care provided by the primary caregiver in a week) was estimated with and without a 16-hour per-day limit.
A multivariate analysis using the number of hours of informal care provided by a primary caregiver per week (not limited to a maximum of 16 hours a day) as the dependent variable was performed using ordinary least squares regression with the Eicker-White heteroscedasticity-robust covariance matrix estimate [26]. In Model 1, the control variables were age, sex, and education of the persons receiving care, in addition to the degree of dependency. In Model 1a, a limit of 16 hours per day was applied as a maximum number of informal-care hours; the explanatory variables in this model were the same as in Model 1. In Model 2 (the 16-hour daily limit was applied in Model 2a), the control variables were age, sex, and education of the persons receiving care; degree of dependency; and size of the town of residence. In Model 3 (the 16-hour daily limit was applied in Model 3a), the explanatory variables were the same as in Models 2 and 2a, with the addition of the live-in caregiver variable. In Model 4 (the 16-hour daily limit was applied in Model 4a), the control variable "region of residence" was added. In Model 5 (the 16-hour daily limit was applied in Model 5a), information was added on whether the person who received informal care also received formal care.
The most complex models can be expressed by the following Equation 1 (with categorical regressors entered as sets of indicator variables):

H_i = β0 + β1·A_i + β2·S_i + β3·E_i + β4·D_i + β5·T_i + β6·C_i + β7·R_i + β8·F_i + ε_i (1)

where: H_i = hours of informal care per week (primary caregiver); A_i = age of care recipient; S_i = sex of care recipient; E_i = education of care recipient; D_i = degree of dependence of care recipient; T_i = size of city or town of residence; C_i = live-in caregiver; R_i = region of residence; F_i = formal care provided to care recipient; and ε_i = error term.
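As a sketch of the estimation itself, the snippet below fits Equation 1 by OLS with an Eicker-White (heteroscedasticity-robust) covariance matrix using statsmodels; the DataFrame and its column names are hypothetical placeholders for the EDAD-08 variables, and categorical regressors are expanded into indicator variables as in the models described above.

```python
# Illustrative sketch: OLS with Eicker-White robust standard errors for
# Equation 1. 'df' and its column names are hypothetical EDAD-08 stand-ins.
import pandas as pd
import statsmodels.formula.api as smf

def fit_model(df: pd.DataFrame):
    # C(...) expands categorical regressors into sets of indicator variables;
    # 'not dependent' would serve as the reference category for dependency.
    formula = (
        "hours_week ~ age + C(sex) + C(education) + C(dependency) "
        "+ C(town_size) + C(livein_caregiver) + C(region) + C(formal_care)"
    )
    model = smf.ols(formula, data=df)
    # cov_type='HC0' requests the Eicker-White heteroscedasticity-robust
    # covariance matrix estimate used in the study.
    return model.fit(cov_type="HC0")

# Usage (with a suitable DataFrame 'df'):
# results = fit_model(df)
# print(results.summary())
```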
Results
The number of disabled people diagnosed with CVA amounted to 1,975 individuals. Of these, 1,318 (66%) reported receiving informal care, but only 1,221 (61%) stated that at least one hour of informal care was provided during the week. The results provided below are an extrapolation from this sample to the entire population according to the elevation factor provided by the EDAD-08 survey. Under this extrapolation, the EDAD-08 survey identified 329,500 people in Spain who had survived a CVA and had some type of disability. Of these, 208,865 declared to receive informal care and 192,611 declared to receive at least one hour of care weekly. The main characteristics of care recipients and caregivers appear in Table 1.
It is important to note that the primary informal caregivers of the 192,611 people who received at least one hour of care during the week provided a considerable number of hours of care: 96.1% stated that they provided care 6 or 7 days a week for an average of 11.3 hours a day. Consequently, many caregivers bear heavy burdens: 42.0% believed that their health had suffered; many stated that they felt tired (55.4%) or depressed (34.3%); three out of ten caregivers were not able to work outside the home due to the demands of caregiving; 17.1% reported having had to stop working; 12.1% thought their careers had been negatively affected; and one in four caregivers thought that they had experienced financial difficulties owing to the care they gave. Furthermore, seven out of ten caregivers thought that their free time had been reduced significantly; more specifically, 53.0% stated they could not go on holiday, and 39.7% of caregivers had no time to take care of themselves.
The 2008 estimated hours of informal care in Spain amounted to 852 million. The greatest proportion of these hours was provided by primary caregivers (731.8 million hours of informal care, or nearly 86% of the total); the rest of the time was provided by other informal caregivers. Applying our cost-assessment methodology, the economic valuation ranges between €6.53 billion (scenario 1) and €10.83 billion (scenario 2). The economic valuation estimated in scenario 3 (€10.37 billion) closely resembles the figure estimated in scenario 2. Estimates for each of the autonomous regions in Spain are available upon request (Table 3).
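For orientation, the valuation step amounts to multiplying the estimated hours by an hourly shadow price; the sketch below uses the average per-hour values implied by the reported totals (about €7.7 and €12.7 per hour) purely as placeholders, not the actual scenario wage rates used in the study:

```python
# Back-of-the-envelope proxy-good valuation: total hours times an hourly shadow
# price. The hourly values below are implied averages used as placeholders only.
TOTAL_HOURS = 852e6  # estimated hours of informal care in Spain, 2008

implied_wages = {"scenario 1": 7.66, "scenario 2": 12.71}  # euros/hour (assumed)
for scenario, wage in implied_wages.items():
    print(f"{scenario}: {TOTAL_HOURS * wage / 1e9:.2f} billion euros")
```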
The statistical analysis underscores the importance of degree of dependency in explaining differences in the number of hours of care provided. In the models where the maximum number of hours per day was limited to 16, our results show that compared to our reference group (informal care provided to persons not classified as dependent according to the Spanish Official Scale), a person who is moderately dependent would receive 20-22 additional hours of care per week from their primary informal caregiver. In the case of severely dependent individuals, the primary informal caregiver would provide 35-38 additional hours of care. Primary informal caregivers would provide an additional 38-40 hours of care per week to recipients classified as greatly dependent.
In the models where the number of hours per day was not limited to a pre-established maximum, the results show that compared to the reference group (informal care provided to persons not classified as dependent), a person who is moderately dependent would receive 26-30 additional hours of care per week from their primary informal caregiver. In the case of individuals who are severely dependent, the primary informal caregiver would provide 48-53 additional hours of care. Primary informal caregivers would provide an additional 53-59 hours of care per week to care recipients classified as greatly dependent.
Discussion
Our study estimates the total cost of non-professional care provided to persons who have suffered a CVA, using the Survey on Disability, Independent Living and Dependency 2008. The present study is the first to use an official scale to identify the degree of dependency of people with CVA. Although other works have evaluated informal care in CVA taking into account the level of dependency, methodological differences between the studies, including the use of different dependency scales, prevent the comparison of their respective results [15,27,28]. On the other hand, other works with a robust methodology for measuring costs that included the cost of informal care did not control for the level of dependency in CVA survivors [14,16,[29][30][31][32].
Traditionally, Spain has provided low levels of social protection expenditure for long-term care compared to other European countries [33]. Thus, the Spanish care model for people with limitations on their autonomy relies to a much greater extent on family care (informal care) than do other European countries, which devote greater resources to professional care [34]. This discrepancy must be considered before extrapolating the findings of this work to other countries, where the number of hours of informal care for patients surviving a CVA would be expected to be lower, with greater involvement of care professionals.
In order to appreciate the relevance of our results, we need to look at other estimates of the cost of total formal care provided for dependent people in Spain. Jiménez-Martín and Vilaplana estimated that the total cost of formal services provided between 2008 and 2009 totaled 6.34 billion euros [35]. According to the Spanish Ministry of Health and Social Policies, the cost of the System for the Promotion of Personal Autonomy and Care for Dependent Persons (Sistema para la Autonomía y Atención a la Dependencia, or SAAD) in 2010 amounted to 6.77 billion euros [36]. Although the legislative Act on the Promotion of Personal Autonomy and Care for Dependent Persons has driven the growth of spending on dependent care by regional and local governments, this financial support was already earmarked before the passage of the law; dependent-care support in 2003 represented 0.32% of GDP, rising to 0.64% in 2010 [36]. Even with the legislation in force, these figures are paltry when compared to the cost of informal care for CVA survivors estimated in this study.
Therefore, a comprehensive policy to provide care to dependent people must take into account the role of and care provided by primary caregivers, recognizing the contributions they make. Other authors have made similar suggestions in this regard [37,38]. Policy makers in both the public and private spheres should take note of the fact that the excess burden borne by caregivers generates a host of problems for these persons' health, career, leisure time, and family life. Strategies designed to mitigate these problems would bolster informal support networks, thereby helping to improve the wellbeing of caregivers as well as care recipients while facilitating efforts to coordinate social, health, and family services.
Some limitations of this paper should be stressed. Firstly, the EDAD-08 survey, while an excellent resource that serves as a rich source of information on CVA survivors and their disabilities, contains cross-sectional rather than longitudinal data. This is a significant impediment for studies on the needs of CVA survivors seeking to compare the care received before and after the CVA, or on the impact of comorbidity among significant numbers of CVA survivors. Since a causal relationship between a specific disease and the number of hours of informal care received cannot be established, the estimated hours of care must not be attributed solely to the effects of CVAs. Thus, this study does not refer to the need for informal care caused by CVAs; rather, it assesses the care provided to persons who have suffered CVAs. This distinction is important.
We censored the informal care provided by a primary caregiver at a maximum of 16 hours per day. This limit assumes that the caregiver has at least eight hours for rest, personal care, and other activities. Since in many cases this time may be shorter, setting this limit provides a conservative estimate of the time of informal care. Although it could be argued that daily care time can reach 24 hours, and despite the fact that time spent on tasks belonging to the category of "joint production" must be considered, the choice of a 16-hour limit is modifiable: in the economic valuation analysis, several thresholds (12, 14, 16, and 18 hours) can be employed as a sensitivity analysis, depending on whether analysts prefer to be more or less conservative in their estimations.
Another point worth noting is that estimates were performed using a proxy good method based on hourly wages. Another option would have been to assign values based on tasks performed by the caregiver. However, this was not possible because while the EDAD-08 survey does contain questions on the various tasks performed by caregivers, it does not ask respondents to specify the amount of time spent on each.
It also would have been worthwhile to explore methodological issues by drawing comparisons of different methods for valuing time (e.g., proxy good method, opportunity cost, and contingent valuation) or different ways of calculating hours of care (e.g., diary method, recall method). In any event, the EDAD-08 survey does not contain data with which to analyze these questions.
Another limitation of the analysis is seen in the extrapolation of results to the national level, using the elevation factors contained in the EDAD-08 survey. As in any study based on a given sample which is then extrapolated to a larger population, the results are subject to a margin of uncertainty.
Conclusions
The results reveal that, despite limiting the maximum number of daily hours, CVA survivors in Spain received 852 million hours of informal care in 2008. Regardless of the cost assigned per hour of care, informal care comes at a huge cost: valuing the hours of informal care using the hourly wages of social services employees as a shadow price yields a monetary value in the range of 6.53 to 10.83 billion euros. This sum can be considered the minimum value that informal caregivers contribute to societal well-being with their services.
The results of this study suggest that no strategy, program, or policy designed to promote the health and care of persons with limited autonomy can overlook the important role played by social networks (formed mainly by family members). Their contribution must not be ignored if we are to avoid designing and implementing measures fraught with inefficiency and inequities that could erode the wellbeing of citizens.
|
2017-06-23T05:07:41.437Z
|
2013-12-05T00:00:00.000
|
{
"year": 2013,
"sha1": "d64176bd38ae9c7b6691fd0c75101688f86b12fe",
"oa_license": "CCBY",
"oa_url": "https://bmchealthservres.biomedcentral.com/track/pdf/10.1186/1472-6963-13-508",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "94342f21d9f072574eeee09764db941fa01c11f0",
"s2fieldsofstudy": [
"Economics",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
208821668
|
pes2o/s2orc
|
v3-fos-license
|
Bio-inspired Flexible Lateral Line Sensor Based on P(VDF-TrFE)/BTO Nanofiber Mat for Hydrodynamic Perception
Fish and some amphibians can perform a variety of behaviors in confined and harsh environments by employing an extraordinary mechanosensory organ, the lateral line system (LLS). Inspired by the form and function of the LLS, a hydrodynamic artificial velocity sensor (HAVS) is presented in this paper. The sensor featured a polarized poly(vinylidene fluoride-trifluoroethylene) [P(VDF-TrFE)]/barium titanate (BTO) electrospun nanofiber mat as the sensing layer, a polyimide (PI) film with arrays of circular cavities as the substrate, and a poly(methyl methacrylate) (PMMA) pillar as the cilium. The P(VDF-TrFE)/BTO electrospun nanofiber mat demonstrated enhanced crystallinity and piezoelectricity compared with the pure P(VDF-TrFE) nanofiber mat. A dipole source was employed to characterize the sensing performance of the fabricated HAVS. The HAVS achieved a velocity detection limit of 0.23 mm/s, superior to that of a conventional nanofiber mat-based flow sensor. In addition, the HAVS exhibited directivity, in accordance with the simulation results. The proposed bio-inspired flexible lateral line sensor with hydrodynamic perception ability shows promising applications in underwater robotics for real-time flow analysis.
Introduction
Known for their excellent mechanical sensing capabilities, biological mechanosensitive receptors have promoted the advancement of multifunctional sensors in the fields of biomedical engineering, robotics, and artificial intelligence. Remarkably, crack-shaped slit receptors in the legs of scorpions and spiders, specialized for sensing cyclic vibration in a wide range of frequencies, enable them to extract wake signatures left by the prey [1,2]. Flow-sensitive hair receptors of crickets perceive disturbance in the surroundings via variations in weak airflows [3]. Harbor seals use whiskers to decipher hydrodynamic traces left by surrounding animals [4,5]. Inspired by such biological receptors, much progress has been achieved in the field of artificial sensors [6][7][8].
The lateral line system (LLS) allows fish and some amphibians to perform fundamental behaviors for underwater survival, even in complicated environments [9]. The LLS of fish is composed of non-uniformly distributed hair receptors, called neuromasts, which perceive minute motions in water [10]. According to their functions, neuromasts can be divided into two primary types: superficial neuromasts (SNs) and canal neuromasts (CNs) [11]. SNs are distributed on the surface of the body and are sensitive to flow velocities. In addition, SNs respond differently to stimulation sources from different directions, because the deflection of the stereocilia induces depolarization or hyperpolarization of the electrical activity in the hair cell, thus providing spatial flow direction information along the skin surface of the fish [12,13]. CNs are located in lateral line canals and connect to external flow through a series of pores, assisting fish to identify the pressure distribution in the water [14,15].
Inspired by the LLS of fish, various artificial flow velocity sensors have been developed [16]. Flow velocity sensors with different sensing mechanisms, such as piezoresistive, capacitive, optical, hot-wire, and piezoelectric properties, have been proposed [17][18][19][20][21][22][23][24][25][26]. Piezoelectric polymers are flexible and biocompatible compared with inorganic materials, and advantageous as they can be used to develop fully flexible sensors for curved surfaces [27]. Bora et al. developed a flow sensor with an electrospun polyvinylidene fluoride (PVDF) nanofiber mat. However, the flow sensor showed a poor detection limit of 5 mm/s and no directional sensing ability [28].
PVDF and its copolymer poly(vinylidene fluoride-trifluoroethylene) [P(VDF-TrFE)] are excellent sensor materials as they provide advantages such as energy conservation, excellent processability, and conformal elasticity [29,30]. The piezoelectricity of PVDF-based polymers can be improved by incorporating inorganic materials such as ZnO, PZT, barium titanate (BTO), graphene, and gold nanoparticles (AuNPs) [31][32][33][34][35]. Moreover, electrospinning is an effective way to obtain highly aligned piezoelectric nanofibers with anisotropic piezoelectricity [36]. In this paper, a hydrodynamic artificial velocity sensor (HAVS) is introduced, mimicking the LLS of fish. A highly aligned P(VDF-TrFE)/BTO piezoelectric nanofiber mat with enhanced crystallinity is used as the sensing layer. The nanofiber mat is more sensitive along the fiber alignment direction than perpendicular to it, which provides the HAVS device with directivity.
Sensor Structure and Sensing Principle
The HAVS device inspired by the LLS is composed of a flexible polyimide (PI) substrate with a round cavity, a piezoelectric P(VDF-TrFE)/BTO nanofiber mat, and a high-aspect-ratio poly(methyl methacrylate) (PMMA) pillar, as shown in Figure 1a. A 50-µm-thick commercial PI substrate is fabricated with a round cavity to obtain a sensor diaphragm with a thickness of 20 µm. Two bottom electrodes (Au/Cr) with a semicircular shape (diameter of 2 mm) and an interval of 500 µm are situated on the sensor diaphragm. The thickness of the P(VDF-TrFE)/BTO nanofiber mat is 40 µm, and the diameter of the top electrode is 2.4 mm. Another commercial PI film with a thickness of 6 µm is used to encapsulate the device. A PMMA pillar acting as an artificial cilium is bonded on the surface of the PI film. The diameter of the pillar is optimized by a multiphysics simulation. The simulation results illustrate that the generated signal achieves its maximum when the diameter of the pillar is 600 µm (Figure S1). A commercially available PMMA pillar with a diameter of 500 µm and a height of 5 mm is employed.
As shown in Figure 1b, when an external flow passes parallel to the sensor surface, a drag force is imparted to the pillar. Due to the rigid connection between the pillar and the PI film, the bending moment is transferred to the bottom membrane inducing the piezoelectric output.
Directivity of the HAVS
Under an external stimulus, the piezoelectric membrane deforms on one side of the pillar. When the direction of the stimulus changes, the stress distribution on the piezoelectric membrane varies, so different piezoelectric signals are induced on the membrane and collected by each semicircular electrode. The influence of the flow velocity direction on the piezoelectric output of the HAVS was analyzed by a multiphysics simulation. A sinusoidal, uniformly distributed pressure with an amplitude of 200 Pa was exerted on one side of the pillar. The direction of the pressure varied from 0° to 360° (Figure S2). Figure 2a demonstrates the distribution of stress in the x direction when the pressure is at 0°. It can be seen that the maximum compressive stress occurs on the right side of the pillar, and the maximum tensile stress occurs on the left. The stress-induced piezoelectric signals on both sides of the pillar are obtained by the two semicircular electrodes. Figure 2b demonstrates the stress distribution in the x direction when pressure is applied at 90°. The maximum tensile stress and compressive stress appear between the semicircular electrodes, and the induced signal is neutralized on each semicircular electrode. Figure 2c illustrates the piezoelectric signals collected from a semicircular electrode by varying the direction of the applied pressure from 0° to 180°. The output signal varies due to the change in the direction of the applied pressure. The maximum peak voltage appears when the pressure is at the 0° and 180° positions. Figure 2d reveals the relationship between the piezoelectric output and the direction of the pressure. The peak amplitudes of the signals can be summarized in a figure-of-8 pattern, which demonstrates that the HAVS device is directionally sensitive.
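To visualize the simulated directivity, the following illustrative sketch (not from the paper) models the peak output of a semicircular electrode as proportional to |cos θ|, which reproduces the figure-of-8 shape with maxima at 0° and 180° and nulls at 90° and 270°:

```python
# Idealized figure-of-8 directivity: electrode output modeled as |cos(theta)|.
# This is an illustrative model, not the multiphysics simulation from the paper.
import numpy as np
import matplotlib.pyplot as plt

theta = np.deg2rad(np.arange(0, 361, 15))  # 15-degree steps, as in the experiment
response = np.abs(np.cos(theta))           # normalized peak output (arbitrary units)

ax = plt.subplot(projection="polar")
ax.plot(theta, response)
ax.set_title("Idealized figure-of-8 directivity pattern")
plt.show()
```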
Fabrication Process
The fabrication process of the HAVS device is illustrated in Figure 3. A commercially available 50-µm-thick PI film was etched to form 30-µm-deep cavities by reactive ion etching with a stencil mask (Figure 3a). After treatment with O2 plasma for 3 min, an Au/Cr (160 nm/40 nm) layer was sputtered on the PI film by magnetron sputtering with a stencil mask to form the bottom electrodes of the sensor (Figure 3b). The P(VDF-TrFE)/BTO nanofiber mat was fabricated by a far-field electrospinning method; the detailed electrospinning process was demonstrated in our previous work [33]. The thickness of the nanofiber mat was 40 µm, which was well controlled by adjusting the electrospinning time. The nanofiber mat was annealed in an oven at 135 °C for 2 h. For the poling process, the fiber mat was sandwiched between PET films with Cu tapes as the electrodes. The device was immersed in silicone oil under an electric field of 8.4 V/µm for 1 h at 80 °C, with the same field applied during the cooling run. A schematic diagram of the polarization setup is shown in Figure S3.
The fabricated P(VDF-TrFE)/BTO piezoelectric nanofiber mat was then transferred onto the PI substrate (Figure 3c). After forming the upper electrodes of Au/Cr (160 nm/40 nm) on the P(VDF-TrFE)/BTO piezoelectric nanofiber mat via the same method as the bottom electrodes (Figure 3d), the laminated structure was encapsulated by a 6-µm-thick PI film (Figure 3e). Fine coaxial cables were connected to the pads by conductive Ag adhesive and then cured on a hot plate at 80 °C for 4 h. A Parylene C layer with a thickness of 7 µm was deposited on the device for waterproofing and electrical isolation. The PMMA pillars were carefully mounted and adhered to the PI film under a digital microscope using epoxy resin. The device was then heat treated for 4 h at 80 °C, guaranteeing that the epoxy resin was completely cured to form a strong contact. Finally, the HAVS device was fabricated as shown in Figure 3f.
Fabrication Results
The morphology of the aligned nanofibers was observed by scanning electron microscopy (SEM) and transmission electron microscopy (TEM). Figure 4a shows the SEM image of the nanofibers. The average diameter of the fibers is approximately 500 nm. The fibers present a rough surface, which is attributed to the inclusion of the BTO nanoparticles. The TEM image in Figure 4b illustrates the distribution of the BTO particles (dark grains) in the polymeric matrices. The average diameter of a BTO nanoparticle is approximately 60 nm. X-ray diffraction (XRD) was used to identify the crystalline structures of the nanofiber mat (Figure 4c). The sharpest peak at 19.9° is related to the β phase of the P(VDF-TrFE). The P(VDF-TrFE)/BTO nanofiber mat shows enhanced crystallinity compared to the pristine P(VDF-TrFE) nanofiber mat. The orientation planes of (100), […] mat and 76.3% for the P(VDF-TrFE)/BTO nanofiber mat. The XRD and FTIR results demonstrate that the fibers with BTO show enhanced crystallinity compared with the pristine P(VDF-TrFE) nanofibers. Figure 5 shows the flexible HAVS developed on the PI substrate.
Velocity Response of the HAVS
The velocity response of the HAVS was characterized in a water tunnel of dimensions 0.7 × 0.35 × 0.4 m³. The water tunnel was filled with 25-cm-deep water. A schematic diagram of the experimental setup is shown in Figure 6. A hydrodynamic stimulus was generated by a vibrating sphere with a diameter of 15 mm, known as the dipole source, which has been commonly used in studies on artificial LLS [14,31]. The dipole was driven by a pneumatic vibrator through a stainless steel rod, and the vibration amplitude was 1.1 mm. The vibration frequency of the dipole was set at 80 ± 2 Hz in this experiment and could be adjusted manually. The output signal of the sensor was amplified by a charge amplifier (NEXUS Conditioning Amplifier 2692, Brüel & Kjær, Denmark). The signal was filtered by an adjustable filter (3624, NF Electronic Instruments, USA) with a high pass of 60 Hz and an amplification gain of 5. A data acquisition card (USB-4711, Advantech, China) with a sampling rate of 2000 Hz was used to record the output. The unamplified charge outputs (peak values) of the sensor were employed for statistical analysis.
The sensor was positioned at the center of the water tunnel immediately beneath the dipole. The dipole vibrated parallel to the direction of the sensor array (perpendicular to the pillar). The flow velocity was adjusted by changing the distance between the dipole center and the apex of the pillar. The velocity detected by the HAVS device was calculated using the following formula [25]:

v = 2πfsa³/D³    (1)

where f is the vibration frequency of the dipole, a is the diameter of the dipole, s is the amplitude of the dipole, and D is the distance between the dipole center and the apex of the pillar in the sensor.
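As a quick consistency check of the reconstructed Equation (1), the helper below evaluates the dipole-induced velocity; with the parameters of the directivity experiment (f = 70 Hz, s = 1.1 mm, a = 15 mm, D = 9.5 mm), it reproduces the 1.9 m/s value reported later in the paper:

```python
# Dipole-induced flow velocity at the pillar apex per the reconstructed Equation (1).
import numpy as np

def dipole_velocity(f, s, a, D):
    """f in Hz; s, a, D in mm; returns velocity in mm/s."""
    return 2.0 * np.pi * f * s * a**3 / D**3

print(dipole_velocity(70, 1.1, 15, 9.5))  # ~1904 mm/s, i.e. about 1.9 m/s
```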
Figure 7a shows that the response of the HAVS varied as the velocity was changed to 78.56, 25.77, and 0.58 mm/s. Figure 7b shows the relationship between charge output and velocity. Error bars illustrate the deviation over three repeated measurements of the same HAVS device. The charge outputs increased with increasing flow velocity, and the sensitivity of the HAVS device was 0.08 pC/(m·s⁻¹). The detection limit is defined in terms of the signal output at the background noise level [37]. The HAVS device demonstrated a velocity detection limit of 0.23 mm/s. Compared with the previous PVDF nanofiber mat-based velocity sensor (detection limit: 5 mm/s) [28], the velocity detection limit in this work is greatly improved, which is probably attributable to the enhanced piezoelectricity of the P(VDF-TrFE)/BTO nanofiber mat after polarization [38].
Directivity Measurement of the HAVS
The directivity detection of the HAVS device was characterized in the water tunnel, as shown in Figure 8a. The dipole was immersed in water to a depth of 50 mm and vibrated perpendicular to the pillar. The vibration frequency of the dipole was 70 ± 2 Hz. The sensor was positioned on a rotational micropositioner. The distance between the dipole center and the apex of the pillar was fixed at 9.5 mm. The micropositioner stage with the sensor was rotated in 15° increments from 0° to 360°, where 0° represents the fiber alignment direction coinciding with the direction of dipole vibration. The sensor at the positions of 0° and 90° is illustrated in Figure 8b. The vibration direction of the dipole remained stationary. The velocity at the apex of the pillar, calculated using Equation (1), was found to be 1.9 m/s. The charge outputs of the sensor were averaged over three repeated measurements.

The sensor output in the time domain is shown in Figure 9a. The charge output of the sensor at 0° is approximately three times higher than that at 90°. The figure-of-8 pattern of the output signals is displayed in the polar coordinates shown in Figure 9b. The results demonstrate that the HAVS device is directionally sensitive. The different charge outputs can be attributed to two factors. Firstly, when the HAVS device is located at 0° and 180°, the fiber orientation is consistent with the direction of dipole vibration. The pillar transmits the flow field-generated force induced by the dipole vibration to the nanofiber mat, and the nanofiber mat is more sensitive to strain in the fiber alignment direction [36]. However, when the sensor is positioned at 0° and 180°, the fiber orientation may not be exactly aligned with the vibration direction of the dipole; as a result, the charge output at 195° is higher than that at 180°. The other factor relates to the structure of the bottom electrode, which causes differences in the charge collection. At 0° and 180°, the stress-induced charge output reaches its maximum on a semicircular electrode. In the cases of 90° and 270°, the stress-induced charge outputs are partially neutralized on each semicircular electrode, which leads to a decrease in charge output. The fabricated HAVS device opens up possibilities for directional detection.
Conclusions
We developed a piezoelectric polymer-based flexible HAVS inspired by the LLS of fish. The sensor featured a polarized P(VDF-TrFE)/BTO electrospun nanofiber mat and a flexible PI substrate. The XRD and FTIR results demonstrated that the nanofibers with BTO exhibited enhanced crystallinity and piezoelectricity. The HAVS device achieved a velocity detection limit of 0.23 mm/s, lower than that of the conventional nanofiber mat-based flow sensor (i.e., 5 mm/s). Moreover, the highly aligned P(VDF-TrFE)/BTO nanofibers endowed the flexible HAVS with directivity. Given its excellent sensing performance, the bio-inspired flexible HAVS shows potential for hydrodynamic imaging in underwater robotic applications.
Supplementary Materials: The following are available online at http://www.mdpi.com/1424-8220/19/24/5384/s1. Figure S1: Relationship between the diameter of the pillar and the generated voltage. Figure S2: Direction of the pressure varied from 0° to 360° in the simulation process. Figure S3: Schematic diagram of the polarization device.
|
2019-12-06T16:29:52.359Z
|
2019-12-01T00:00:00.000
|
{
"year": 2019,
"sha1": "ef5253494ab7daedfc9d7b9b3281e3a0c3f8484d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/19/24/5384/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6a4595a1b1761b765a4fb17577fca95cc3c456fe",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science",
"Materials Science"
]
}
|
225795526
|
pes2o/s2orc
|
v3-fos-license
|
Development of lymphangiosarcoma after pelvic radiation therapy for uterine cervical cancer: A case report and literature review
1 Department of Obstetrics and Gynecology, Hannan Chuo Hospital, 3-3-28, Minamishinmachi, Matsubara, Osaka 580-0023
2 Department of Surgery, Hannan Chuo Hospital, 3-3-28, Minamishinmachi, Matsubara, Osaka 580-0023
3 Department of Obstetrics and Gynecology, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871
4 Department of Diagnostic Pathology, Kindai University Nara Hospital, 1248-1 Otodacho, Ikoma, Nara 630-0293 (Japan)
Introduction
Angiosarcoma (AS) accounts for only 1-2% of all soft tissue sarcomas [1]. Common sites of occurrence include the skin, breast, deep soft tissue, and liver. AS of the gastrointestinal (GI) tract is very rare [2]. Although its etiology remains unclear, previous studies suggested external-beam irradiation as a cause of AS. We herein present a case of a woman with lymphangiosarcoma (LAS) of the small intestine occurring 7 years after pelvic radiotherapy for uterine cervical cancer. Moreover, through a literature review, we provide current information on the diagnosis and management of LAS.
Case presentation
A 41-year-old Japanese woman with a history of pelvic radiotherapy presented with diarrhea lasting for two weeks and diffuse abdominal pain, distension, nausea, and vomiting that had started the previous day. Seven years earlier, she had been treated with definitive radiotherapy for FIGO stage IB2 mucinous adenocarcinoma of the uterine cervix. Radiotherapy consisted of whole pelvic external beam radiotherapy (EBRT) followed by high-dose-rate intracavitary brachytherapy (HDR-ICBT). A total pelvic dose of 50 Gy was given in 25 fractions at 5 fractions per week. The total dose of HDR-ICBT prescribed at Point A was 27.2 Gy in 4 fractions. She was regularly followed up after definitive radiotherapy, and her post-treatment course was uneventful for 7 years without recurrent disease.
On admission, laboratory examinations revealed a white blood cell (WBC) count of 5,300/mm³ with 81% neutrophils and a red blood cell count of 444 × 10⁴/mm³. Her C-reactive protein level was significantly elevated to 0.56 mg/dl (normal range: < 0.4 mg/dl). Although a significant increase was observed in her CA-19-9 level at the time of the initial diagnosis of cervical cancer, her serum CA-19-9 level was normal (23 U/ml; normal range: < 37). Her CA-125 level was also significantly elevated to 254 U/ml (normal range: < 65), but had been normal at the initial diagnosis of cervical cancer. Her serum CEA level was normal at 1.1 ng/ml (normal range: < 5).
Abdominal and trans-vaginal ultrasound revealed large ascites formation. Computerized tomographic (CT) scans of the abdomen and pelvis showed dilated small bowel loops with a thickened bowel wall and ascites formation, suggesting bowel obstruction of the small intestine without any evidence of recurrence of cervical cancer. The site of bowel occlusion was unclear. Her chest X-ray was normal, and free air was not detected on abdominal X-ray. A large amount of ascites, but a normal uterus, ovaries, and fallopian tubes, was observed on pelvic MRI. Upper endoscopy was normal. Although she developed melena and tarry stools, the site of bleeding was not identified by colonoscopy. However, 18F-fluoro-2-deoxy-D-glucose (FDG) positron emission tomography/CT (18F-FDG PET/CT) revealed diffuse uptake of FDG in the lower abdominal cavity and focal uptake around the right diaphragm, suggesting peritoneal dissemination. A cytological analysis of ascites was positive, suggesting a malignant tumor in the peritoneal cavity. Cultures of ascites were negative. We suspected primary peritoneal cancer or carcinomatous peritonitis from another origin, and exploratory laparotomy was offered to the patient.
Ten days before her planned surgery, the patient developed acute abdominal pain, suggesting panperitonitis. Free air was confirmed by abdominal X-ray, and emergency laparotomy was performed on suspicion of GI tract perforation. Emergency laparotomy revealed a large amount of purulent fluid with grayish white matter in the peritoneal cavity. Her uterus, both adnexa, liver, gallbladder, spleen, and diaphragm were normal. Multiple small nodules suggestive of disseminated tumors were noted on the surface of the greater omentum and sigmoid colon. The small intestinal wall and mesentery were ischemic and necrotic accompanying multiple tumor nodules, with an ileal segment exhibiting perforation. After taking biopsy samples from disseminated tumors, a 15-cm-long segment of the small intestine 3 cm proximal to the terminal ileum was excised ( Figure 1), and colostomy was performed.
A histological examination of the small intestine revealed that the mucosa was smooth, with no ulceration or cancerous lesion. However, severe edema and congestion were noted in the submucosal tissue. Histologically, there were many enlarged vessels between the submucosal layer and the serosa (Figure 2(i)), which contained slightly eosinophilic fluid and a few erythrocytes (Figure 2(ii)). The wall of the vessels consisted of thin intimal cells and lacked a tunica media. In the subserosal layer, the intima of the vessels had large oval nuclei with increased chromatin and papillary proliferation into the lumen. Many small lumens consisting of dense atypical spindle cells were also noted around the enlarged vessels (Figure 2(iii)). These atypical vessels invaded the submucosal layer and the adipose tissue of the mesentery. However, the intima of the normal-sized arteries and veins was thin and had no atypia. An immunohistochemical analysis revealed that the atypical cells were positive for vimentin, CD31, and podoplanin (D2-40), and negative for CD34, keratin, calretinin, and PAX8, suggesting a lymph vessel origin (Figure 3). We concluded that the patient had LAS.
The postoperative course was complicated by bacterial infection and short bowel syndrome. The patient and her husband were informed of the following: 1) due to the presence of widely disseminated tumors, complete surgical removal was not possible; and 2) there are currently no effective adjuvant treatments after initial surgery. She received 3 cycles of nivolumab as off-label use; however, it was ineffective. The patient died of disease progression 7 months after the diagnosis of LAS.
Discussion
We herein described the first case of LAS of the small intestine presenting with intestinal perforation and acute abdomen. The patient had a history of pelvic radiation therapy. Although multimodal treatment, including surgery and immunotherapy, was performed, the patient died of disease progression 7 months after the diagnosis of LAS.
Angiosarcoma (AS) represents 1 to 2% of soft tissue sarcomas and most frequently occurs in the subcutis. Thus, intra-abdominal AS is a very rare neoplasm that generally occurs in the liver or spleen, and, in extremely rare cases, the GI tract [3]. In a review that included 106 cases of GI vascular tumors treated at the Mayo Clinic between 1925 and 1944, only 14 cases of AS were found: 7 in the stomach, 3 in the small intestine, 2 in the rectum, 1 in the esophagus, and 1 in the appendix [4].
LAS is AS with prominent lymphatic differentiation that arises from lymphatic endothelial cells. The classic microscopic appearance of LAS is a network of anastomosing, delicate vascular channels lined by atypical endothelial cells. However, a morphological diagnosis is sometimes challenging when differentiating a vascular neoplasm from an epithelial neoplasm, such as a poorly differentiated carcinoma. Moreover, due to close histological and embryogenetic relationships, difficulties are associated with differentiating between malignant vascular tumors of blood vessels (AS) and those of a lymph vessel origin (LAS) based solely on morphology. Enzinger and Weiss stated that "as it is usually impossible to determine which tumors display lymphatic versus vascular differentiation, all are referred to as AS, even those that arise in the setting of lymphedema" [5]. Thus, pathologists have used the term AS to encompass both AS and LAS. However, the recent development of new immunohistochemical markers suggests that LAS may be defined immunophenotypically. Typical LAS is positive for the mesenchymal marker vimentin, endothelial markers (e.g., CD31 and CD34), and markers of the lymphatic endothelium (e.g., D2-40, Prox-1, LYVE-1, and VEGF receptor-3 (VEGFR-3)), but negative for the epithelial marker cytokeratin [6].
Although the etiology of AS/LAS is unclear, several factors have been implicated in its pathogenesis, including occupational exposure to certain toxins (e.g. vinyl chloride, thorotrast, or arsenic) [7], a chronic lymphedematous extremity (e.g. Stewart-Treves syndrome, Milroy's disease, and chronic filaria) [8,9], external-beam irradiation [10], trauma, and the prolonged retention of a foreign body [11]. The present case had a history of pelvic radiotherapy.
Common post-radiation vascular tumors include undifferentiated pleomorphic sarcoma, LAS, hemangiosarcoma, leiomyosarcoma, and fibrosarcoma [12]. Post-radiation AS/LAS generally occurs after an interval of 5-11 years following irradiation for carcinoma of the breast, ovary, uterus, or cervix [13]. The dose of prior radiation ranged from 4,000 to 8,000 rad [14]. In the present case, LAS developed 7 years after pelvic radiotherapy for uterine cervical cancer.
Post-radiation AS has been reported in the omentum, small intestine, and colon, and in the form of diffuse AS in the entire abdomen [15]. In the present case, LAS developed in the small intestine. The small intestine constitutes approximately 75% of the GI tract; however, only 1 to 1.6% of all GI malignant tumors originate at this site [16]. As shown in Table 1, to the best of our knowledge, there have only been 16 cases of post-radiation AS arising in the small intestine, including ours, demonstrating the extreme rarity of this condition. Of these, 6 cases were post-radiation AS that occurred following irradiation for cervical cancer [17]. Since previous case series did not conduct immunostaining using lymphatic vessel markers, the present case is the only reported case of post-radiation LAS arising in the small intestine (Table 1).
Clinically, malignant tumors of the small intestine generally present with GI bleeding, anemia, and intestinal obstruction. Due to their non-specific clinical presentation, most often with GI bleeding and anemia, a diagnosis is only reached after an extensive search. In the present case, symptoms were abdominal pain and bowel obstruction, and the time from the initial consultation to the day of surgery was 74 days.
AS has been treated using a multimodal approach. Although complete surgical resection is considered to be the only effective treatment, complete surgical excision is often not possible due to the infiltrative and multifocal nature of this malignancy. Chemotherapy, radiotherapy, and their combination have been attempted for these patients [18,19]. However, there is currently no consensus on the effectiveness of chemotherapy or radiotherapy. Commonly employed anti-cancer agents have been cisplatin, taxanes, ifosfamide, gemcitabine, and anthracyclines [20]. More recently, the use of biologic agents, such as sorafenib, sunitinib, and bevacizumab, has also been reported [15,21]. However, no survival benefit of these therapeutic regimens has been demonstrated. According to previous studies, the overall 5-year survival rate for patients with AS/LAS is 20-35% [16,22]. As shown in Table 1, most patients with AS/LAS of the small intestine after radiotherapy died within one year of diagnosis. Among the 15 cases for which survival times were available, the one-year survival rate after diagnosis was only 33.3%. All patients with post-radiation AS/LAS with a history of cervical cancer died within one year (Table 1). Our patient received the immune checkpoint inhibitor nivolumab as off-label use; however, it was ineffective. Thus, a novel effective treatment is urgently required for this disease.
In summary, we herein reported the first case of postradiation LAS in the small intestine. Although AS/LAS of the small intestine is extremely rare, it needs to be considered in patients with a history of radiotherapy. Due to the rarity of LAS, we consider it to be of great importance to report even individual cases in order to establish an optimal treatment.
Author contributions
HU, SM and TW wrote the manuscript. IA performed the research. SY performed review & editing. All authors contributed to editorial changes in the manuscript. All authors read and approved the final manuscript.
Ethics approval and consent to participate
Informed consent was obtained from the patient for publication of this case Report.
|
2020-06-18T09:07:03.767Z
|
2020-06-15T00:00:00.000
|
{
"year": 2020,
"sha1": "0251e5fd0260fa80db673697f90a9dbf210079d8",
"oa_license": null,
"oa_url": "https://ejgo.imrpress.com/EN/article/downloadArticleFile.do?attachType=PDF&id=4112",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1ad93b91dfe5e2c23efdd13ba9df03f7e5f1c5a3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
226237218
|
pes2o/s2orc
|
v3-fos-license
|
KrigHedge: Gaussian Process Surrogates for Delta Hedging
We investigate a machine learning approach to option Greeks approximation based on Gaussian process (GP) surrogates. The method takes in noisily observed option prices, fits a nonparametric input-output map and then analytically differentiates the latter to obtain the various price sensitivities. Our motivation is to compute Greeks in cases where direct computation is expensive, such as in local volatility models, or can only ever be done approximately. We provide a detailed analysis of numerous aspects of GP surrogates, including choice of kernel family, simulation design, choice of trend function and impact of noise. We further discuss the application to Delta hedging, including a new Lemma that relates quality of the Delta approximation to discrete-time hedging loss. Results are illustrated with two extensive case studies that consider estimation of Delta, Theta and Gamma and benchmark approximation quality and uncertainty quantification using a variety of statistical metrics. Among our key take-aways are the recommendation to use Matern kernels, the benefit of including virtual training points to capture boundary conditions, and the significant loss of fidelity when training on stock-path-based datasets.
Introduction
Fundamentally, hedging is about learning the sensitivities of the contingent claim to evolving market factors. For example, Delta hedging manages risk by controlling for the sensitivity of the financial derivative to the underlying spot price. Theta manages risk by controlling for the sensitivity of the financial derivative to the passing of time, and so on. Thus, successful hedging strategies depend on accurately learning such sensitivities. Unfortunately the related Greeks are rarely available analytically, motivating the large extant literature (Capriotti et al. 2017; Fu et al. 2012; Jazaerli and Saporito 2017; Wang 2020, 2021) on Greek approximation and computation. The goal of this article is to contribute to this enterprise by investigating a novel tie-in between machine learning and hedging. The idea is to develop a non-parametric method that does not require working with any particular stochastic model class: all that is needed is the data source (or a black-box simulator) generating approximate option prices. The training dataset is used to fit a data-driven input-output mapping and evaluate the respective price sensitivity. Specifically, we propose to use Gaussian Process (GP) surrogates to capture the functional relationship between derivative contract price and relevant model parameters, and then to analytically differentiate the fitted functional approximator to extract the Delta or other desired Greek.
Our specific implementation brings several advantages over competing methods. First, GPs can handle both interpolation and smoothing tasks, i.e. one may treat training inputs as being exact or noisy. Therefore, GP surrogates can be applied across the contexts of (a) speeding up Greek computations when a few exact data samples are available (model calibration), of (b) utilizing approximate Monte-Carlo-based samples, and of (c) fitting to real-life data. Second, GPs are well-suited to arbitrary training sets and so naturally accommodate historical data that is highly non-uniform in the price dimension (namely based on a historical trajectory of the underlying). Third, GPs offer uncertainty quantification so rather than providing a best-estimate of the desired Greek, GPs further supply a state-dependent confidence interval around that estimate. This interval is crucial for hedging purposes, since it indicates how strict one ought to be in matching the target Greek. Fourth, GPs interact well with dynamic training, i.e. settings where the training sets change over time.
In contrast to the approach presented here, GP regression has been applied to other problems in financial mathematics. For instance, in De Spiegeleer et al. (2018), the authors used GPs to speed up the pricing of derivative contracts (including exotic ones) with a reasonable loss of accuracy. Additionally, they applied GP regression to interpolate implied volatility surfaces and used it to backtest an option strategy.
Considering a different application, Crépey and Dixon (2019) applied multiple-output GPs to speed up the mark-to-market of derivative portfolios in the context of credit valuation adjustment (CVA). Moreover, those authors used single-output GPs to illustrate the learning of the pricing map of financial models such as the Heston model. Similarly to our approach, they mention that GPs provide analytic expressions for sensitivities of derivative prices. However, unlike our approach, they consider only the Black-Scholes model, where the GP regression is trained using the Black-Scholes formula, and although Greek approximation is considered, the implications for the hedging problem are not studied. Furthermore, Chataigner et al. (2021) used GPs to fit no-arbitrage constrained option price surfaces given empirical quotes, while Goudenège et al. (2020) applied them for value function approximation of American options.
Existing literature on numerical Greek approximation is generally split between the noiseless setting (known as curve-fitting or interpolation) and the noisy case (statistical regression). For interpolation, the state of the art are the Chebyshev polynomials recently studied in the series of works Gaß et al. (2018) and Glau and Mahlstedt (2019). For regression, many of the best performing methods, such as random forests, are not even differentiable and hence do not possess gradients. In contrast, GPs gracefully unify the noiseless and noisy settings in a single framework.
Within this landscape our contribution is to provide a detailed analysis of GP surrogates for Greek approximation and Delta hedging. To this end, we investigate the role and relative importance of various surrogate ingredients, such as kernel family, shape of experimental design, training data size, and propose several modifications that target the financial application. Moreover, we assess the performance of our Greek approximators both from the statistical perspective, as well as from the trader's perspective in terms of the resulting hedging error. In particular, in Proposition 1 we connect the quality of the Delta approximation with the resulting hedging loss, providing insights into how errors in estimating the Greeks translate into the hedging P&L.
The rest of the paper is organized as follows. Section 2 explains our approach of approximating price sensitivities using GP surrogates. Section 3 sets up the two case studies, a classical Black-Scholes model and a local volatility model where ground truth is no longer immediately available, along with the hedging framework and performance metrics. Section 4 presents the numerical results for both case studies. Section 5 discusses our findings and outlines future research directions.
Modeling the Option Price Surface
To fix ideas, consider hedging of a European Call contract. The European Call has a given strike K and maturity T and is written on underlying (S_t)_{t≥0}. The respective no-arbitrage option price is given by (we also use P to denote a generic contract price function)

P(t, S) = E^Q[ e^{−r(T−t)} (S_T − K)^+ | S_t = S ],    (1)

where we emphasize the dependence on the calendar time t and the current spot price S. Above, Q is a pricing martingale measure, kept generic for now. Any other European-style financial contract can be similarly considered; we do not make any direct use of the specific form (or smoothness) of the Call payoff in (1) henceforth.
As a canonical example of hedging, we are interested in finding the Call Delta ∆(t, S) := ∂P(t, S)/∂S for arbitrary (t, S). In the most classical setting (such as the Black-Scholes model), one has an analytical formula for (t, S) → P(t, S) and can then simply differentiate the latter to obtain the Delta. We rather consider the more common situation where P(t, S) is not directly known. Instead, we are provided a training set D = {(t_i, S_i, Y_i) : i = 1, . . . , N}, where Y_i ≈ P(t_i, S_i), and have to use this data to infer or learn (t, S) → ∆(t, S). This problem is motivated by the situation where a pricing model is available but it is computationally expensive to directly compute P(t, S) each time the option price is needed, and so only a sample of such computations is provided. We distinguish two sub-cases: (a) Computing P(t, S) exactly is possible, but is challenging/time-consuming. For example, it might necessitate solving a partial differential equation. Then D is a collection of inputs where Y_i = P(t_i, S_i) was evaluated exactly, and the goal is to obtain a cheap representation of the map (t, S) → ∆(t, S) by extrapolating the exact Y_i's.
(b) Option prices are evaluated through a Monte Carlo engine. For a given (t_i, S_i), the modeler has access to an empirical average Y_i of Ň Monte Carlo samples, with precision on the order of O(Ň^{−1/2}). For finite Ň, Y_i is a noisy estimate of P(t_i, S_i). The training set D is then a collection of such noisy samples that need to be smoothed, interpolated and differentiated to learn the map (t, S) → ∆(t, S).
We note that because the training data is generated by the modeler, there is the related question of experimental design, i.e. how to choose D wisely to maximize computational efficiency that we will also explore.
Surrogate Gradients
In both Setting (a) and Setting (b) above, our aim is to provide an estimate of ∆(t, S) for arbitrary (t, S) in some test set D′. This could include in-sample predictions, i.e. for (t, S) ∈ D (for example obtaining Delta at the same inputs used for training), or out-of-sample predictions, including extrapolation to (t, S) outside the convex hull of D, which would be the case if training is confined to the past t < T_0 and we want to Delta hedge in the future, t > T_0. Analogously to needing ∆(t, S) in order to hedge the respective risk of underlying price moves, the trader is also interested in other Greeks. Two examples that we will also consider below include the Theta Θ(t, S), the sensitivity to t, and the Gamma Γ(t, S), the second derivative of P(t, S) with respect to S.
We emphasize that while the training set contains information about option prices P(t_i, S_i), our goal is to learn the price sensitivities. We tackle this issue by using the intermediate step of first fitting a statistically-driven non-parametric mapping (t, S) → P̂(t, S), called a surrogate or a metamodel. We then set ∆̂(t, S) := ∂P̂(t, S)/∂S. A key idea is that the second step of taking derivatives is done analytically, even though P̂(t, S) is non-parametric. On the one hand, this strategy reduces the error in ∆̂ since only a single metamodeling approximation is needed and the differentiation is exact. On the other hand, it offers precise uncertainty quantification, providing an in-model assessment of the accuracy of ∆̂ by rigorously propagating the underlying uncertainty in P̂. In particular, the method provides credible bands around ∆̂, giving the end-user clear guidance on how well the model is learning the Greek. This information is critical for trading purposes, in particular in the context of no-transaction regions under transaction cost regimes, see for instance Whalley and Wilmott (1997).
Our data-driven approach is broadly known as curve-fitting. In general, parametric curve-fitting via constructing a surrogate (t, S) → P̂(t, S) (e.g. a spline-based P̂) and then differentiating it is known to lead to highly unstable estimates of ∆ and other gradients. This is because the typical L² criterion driving the fitting of P̂(t, S) is completely unaware of the subsequent plan to compute gradients. As a result, differentiating the typical regression fit can lead to nonsensical gradient estimates, see e.g. Jain and Oosterlee (2015). The machine learning folklore (e.g. in the context of the vast Bayesian optimization literature) suggests that GPs, which can be understood as a type of kernel regression with smoothness penalties, are often able to mitigate this concern.
Gaussian Process Regression
We temporarily restrict attention to a single-factor model, viewing P(t, S) as a 2D surface in the two coordinates t ('time') and S ('stock'), encoded as x = (x_1, x_2) ≡ (t, S), i.e. x → P(x) is a function on R^d with d = 2. We treat the two coordinates in a symmetric manner for fitting purposes. As a result, the Delta is viewed as one specific instance of the gradient of P. Multi-factor models (with fully observed factors) would simply correspond to working in higher dimension d > 2.
The curve fitting for P̂ is carried out using a regularized L² regression framework, namely finding the best approximator in a given normed space H conditional on the training set D of size N:

P̂ = argmin_{f∈H} Σ_{i=1}^{N} (f(x_i) − Y_i)² + λ ‖f‖²_H.    (2)

The last term acts as a regularizer, balancing quality of fit and the prior likelihood of the approximator. We propose to use Gaussian Process regression (GPR) for the purpose of learning the price surface P(x) based on the observation model

Y(x) = P(x) + ε(x).    (3)

Above we distinguish between the true price map P(x) and the observed price Y(x), which may or may not be the same. Gaussian process regression is a flexible non-parametric regression method (Rasmussen and Williams 2006) that views the map x → P(x) as a realization of a Gaussian random field, so that (in the abstract metamodel probability space, which is independent of the probabilistic structure present in the asset stochastic dynamics) any finite collection {P(x), x ∈ X} is multivariate Gaussian. For any n ≥ 1 design sites {x_i}_{i=1}^{n}, GPR posits that

[P(x_1), . . . , P(x_n)] ∼ N(m_n, K_n),

with mean vector m_n := [m(x_1; β), . . . , m(x_n; β)] and n × n covariance matrix K_n comprised of κ(x_i, x_i′; β) for 1 ≤ i, i′ ≤ n. The vector β represents all the hyperparameters of this model. The role of m(·) is to capture the known trends in the response, and the role of κ(·, ·) is to capture the spatial dependence structure in x → P(x).
Given the training dataset D = {(x_i, Y_i)}_{i=1}^{N}, GPR infers the posterior of P(·) by assuming the observation model (3) with a Gaussian noise term ε(x) ∼ N(0, σ_ε²). Conditioning equations for multivariate normal vectors imply that the posterior predictive distribution at any arbitrary input x* is also Gaussian, with the posterior mean m*(x*) as the proposed estimator of P(x*):

m*(x*) = m(x*; β) + k(x*)ᵀ (K + σ_ε² I)⁻¹ (Y − m_N),    (4)

where k(x*) := [κ(x*, x_1), . . . , κ(x*, x_N)]ᵀ and K is the N × N covariance matrix described through the kernel function κ(·, ·; β). Henceforth we think of m*(·) ≡ P̂(·) as a (smooth) function, even though it is only defined pointwise via (4). The posterior covariance is

K*(x*, x*′) = κ(x*, x*′) − k(x*)ᵀ (K + σ_ε² I)⁻¹ k(x*′),    (5)

where Var(P(x*)) = K*(x*, x*). The interpretation is that x → m*(x) is the "most likely" input-output map that is consistent with the training dataset D, and Var(P(x)) is the model uncertainty capturing the range of other potential input-output maps that could also be consistent (but less likely) with D.
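To make the kriging equations concrete, here is a minimal numpy sketch of (4)-(5) for a squared-exponential kernel with zero or user-supplied prior mean. The function names (sq_exp_kernel, gp_posterior) and the calling convention are illustrative assumptions, not the paper's actual implementation (which uses DiceKriging in R).

```python
# Minimal sketch of the GP posterior equations (4)-(5); names are illustrative.
import numpy as np

def sq_exp_kernel(X1, X2, lengthscales, sigma2_p):
    """Anisotropic squared-exponential kernel matrix kappa(x, x')."""
    d = ((X1[:, None, :] - X2[None, :, :]) / lengthscales) ** 2
    return sigma2_p * np.exp(-0.5 * d.sum(axis=-1))

def gp_posterior(X, y, Xstar, lengthscales, sigma2_p, sigma2_eps, mean_fn):
    """Posterior mean m*(x*) and variance of P(x*) at test inputs Xstar."""
    K = sq_exp_kernel(X, X, lengthscales, sigma2_p)
    K += sigma2_eps * np.eye(len(X))               # observation noise / nugget
    kstar = sq_exp_kernel(Xstar, X, lengthscales, sigma2_p)
    alpha = np.linalg.solve(K, y - mean_fn(X))     # (K + sig^2 I)^{-1}(Y - m_N)
    m_star = mean_fn(Xstar) + kstar @ alpha        # eq. (4)
    v = np.linalg.solve(K, kstar.T)
    var_star = sigma2_p - np.einsum("ij,ji->i", kstar, v)  # diagonal of (5)
    return m_star, var_star
```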
Specifying a GP Surrogate
Returning to the curve-fitting perspective, the optimization in (2) is available in closed form through the kriging equations (4), and GP fitting in fact corresponds to selecting an appropriate function space H ≡ H_ϑ by optimizing the hyperparameters ϑ. This is done in a hierarchical manner, first fixing a kernel family and then using maximum likelihood optimization to select ϑ.
The GP kernel κ(x, x′) controls the smoothness (in the sense of differentiability) of P̂ and hence the roughness of its gradient. A popular choice for κ(·, ·) is the (anisotropic) squared exponential (SE) family, parametrized by the lengthscales {ℓ_k}_{k=1}^{d} and the process variance σ_p²:

κ_SE(x, x′) = σ_p² exp( − Σ_{k=1}^{d} (x_k − x′_k)² / (2ℓ_k²) ).    (6)

The SE kernel (6) yields infinitely differentiable fits m*(·). Besides the squared exponential kernel described above, other popular kernels include Matérn-3/2 (henceforth, M32) and Matérn-5/2 (M52) (Roustant et al. 2012); writing r_k := |x_k − x′_k|/ℓ_k,

κ_M52(x, x′) = σ_p² Π_{k=1}^{d} (1 + √5 r_k + 5r_k²/3) exp(−√5 r_k),    (7)

κ_M32(x, x′) = σ_p² Π_{k=1}^{d} (1 + √3 r_k) exp(−√3 r_k).    (8)

A Matérn kernel of order k + 1/2 yields approximators that are in C^k. Thus Matérn-3/2 fits are in C¹ and Matérn-5/2 fits are in C². The mean function is often assumed to be constant, m(x; β) = β_0, or described using a linear model m(x; β) = Σ_{k=1}^{K} β_k φ_k(x), with φ_k(·) representing a polynomial basis. The mean function drives the estimates during extrapolation (far out-of-sample) and also can strongly impact the gradient. For example, incorporating a convex quadratic prior mean compared to a flat linear prior mean modifies the curvature/lengthscales of P̂ and therefore affects the estimated Greek. The overall set of hyperparameters for the GP surrogate is β := (β_0, . . . , β_K, ℓ_1, . . . , ℓ_d, σ_p², σ_ε²). Typically one estimates β by maximizing the log-likelihood of the dataset D.
Obtaining the Greek
Given a fitted GP model f* ∼ GP(m*, K*), its gradient with respect to the coordinate x_j forms another GP, D ∼ GP(g*, K_g). The respective mean at input x* and covariance of D at x*, x*′ are specified by

g*(x*) = ∂m(x*; β)/∂x_j + (∂k(x*)/∂x_j)ᵀ (K + σ_ε² I)⁻¹ (Y − m_N),    (9)

K_g(x*, x*′) = ∂²κ(x*, x*′)/∂x_j ∂x′_j − (∂k(x*)/∂x_j)ᵀ (K + σ_ε² I)⁻¹ (∂k(x*′)/∂x_j).    (10)

Thus, the gradient estimator is g*(x*) in (9), which can be interpreted as formally differentiating the expression for m*(·) with respect to x_j. Remarkably, the same procedure yields the posterior variance V_g(x*) = K_g(x*, x*) of g*(x*) via (10), and therefore we obtain analytically the credible bands around g*(x*). Namely, the credible band for ∂P/∂x_j (x*) is

CI_α(x*) = [ g*(x*) − z_α √V_g(x*), g*(x*) + z_α √V_g(x*) ],    (11)

where z_α is the desired quantile of the standard normal distribution, e.g. z_0.95 = 1.96 to obtain a 95% CI. The upshot is that once a GP surrogate is fit to option prices, obtaining Greek estimates and their model-based uncertainty reduces to evaluating the formulas (9)-(10).
As an example of such calculations, we present the analytic expressions for the kernel gradients entering g*(·) for the three most common kernels discussed in Section 2.3. While these computations are not new, we also could not find any handy reference for them in the literature. For the SE kernel (6) we have

∂κ_SE(x, x′)/∂x_j = − (x_j − x′_j)/ℓ_j² · κ_SE(x, x′).    (12)

For the Matérn-5/2 kernel (7), differentiating the j-th factor (with r_j = |x_j − x′_j|/ℓ_j) we find

∂κ_M52(x, x′)/∂x_j = − (5(x_j − x′_j)/(3ℓ_j²)) · (1 + √5 r_j)/(1 + √5 r_j + 5r_j²/3) · κ_M52(x, x′),

and for the Matérn-3/2 kernel (8):

∂κ_M32(x, x′)/∂x_j = − (3(x_j − x′_j)/ℓ_j²) · (1 + √3 r_j)⁻¹ · κ_M32(x, x′).

We emphasize that the above formulas work both for the Delta ∂P/∂x_2 and the Theta −∂P/∂x_1, with the GP model yielding analytic estimates of all gradients simultaneously, without the need for any additional training or computation.
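As a concrete illustration, the following sketch assembles the pieces above for the SE kernel: it evaluates g*(x*) of (9) by differentiating the kernel vector as in (12). It assumes a zero prior mean for brevity; all names are illustrative.

```python
# Sketch: Delta (or Theta) as the analytic gradient of an SE-kernel GP fit.
import numpy as np

def se_kernel(X1, X2, ls, s2p):
    d = ((X1[:, None, :] - X2[None, :, :]) / ls) ** 2
    return s2p * np.exp(-0.5 * d.sum(axis=-1))

def gp_gradient(X, y, Xstar, ls, s2p, s2e, j=1):
    """Posterior mean of d m*/d x_j; j=1 is the spot coordinate, j=0 is time."""
    K = se_kernel(X, X, ls, s2p) + s2e * np.eye(len(X))
    alpha = np.linalg.solve(K, y)                 # zero prior mean assumed
    kstar = se_kernel(Xstar, X, ls, s2p)          # N* x N cross-covariances
    # d kappa / d x*_j = -(x*_j - x_ij)/ell_j^2 * kappa(x*, x_i), cf. (12)
    dkstar = -((Xstar[:, None, j] - X[None, :, j]) / ls[j] ** 2) * kstar
    return dkstar @ alpha                         # g*(x*) of (9)
```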
Remark 1. The underlying structure is that differentiation is a linear operator that algebraically "commutes" with the Gaussian distributions defining a GP model. Consequently, one may iterate (by applying the chain rule further on κ(·, ·) and its derivatives, provided they exist) to obtain analytic expressions for the mean and covariance of higher-order partial derivatives of f, yielding second-order and higher option sensitivities, for example the Gamma. Instead of doing so, we implemented a finite-difference estimator for Γ(t, S):

Γ_fd(t, S; δ) := ( P̂(t, S + δ) − 2P̂(t, S) + P̂(t, S − δ) ) / δ²,    (18)

for a discretization parameter δ > 0. By predicting the GP model on the triplet of sites {(t, S − δ), (t, S), (t, S + δ)} we obtain the predictive covariance matrix and can use it to compute the variance of Γ_fd(t, S; δ), which is a linear combination of the respective three P̂ values (a small numerical sketch of this construction follows the illustration below). Note that the Matérn-3/2 kernel is not twice differentiable, so formally there is no second sensitivity and we expect numeric instability in applying (18) to an M32-based model.

Illustration

Figure 1 shows the GP-based ∆̂(t, ·) for the case of a Call option P(t, S) within a Black-Scholes model with constant coefficients r = 0.04, σ = 0.22, T = 0.4, K = 50, parametrized by time-to-maturity τ and spot price S. The model is trained on a two-dimensional 10×10 grid (so that N = 100), S_i ∈ {32, 36, . . . , 68}, τ_i ∈ {0.04, 0.08, . . . , 0.4}, using for inputs the exact P(τ_i, S_i), i = 1, . . . , 100, available via the Black-Scholes formula. We then display two 1-D slices of the resulting estimate of the Delta ∆̂(τ, S) as a function of spot S, keeping time-to-maturity τ fixed. In the left panel we look at τ = 0.5, which is an extrapolation relative to the training set, the maturity being longer than τ̄ = 0.4. In the right panel we use τ = 0.2, which is one of the training times-to-maturity and corresponds to in-sample interpolation. Note that with GPs the two computations are implemented completely identically. In Figure 1 we compare ∆̂(τ, ·) to the exact ground truth ∆(τ, ·) and also display the corresponding 95% posterior credible bands, cf. (11). We observe that the GP fit is excellent, being indistinguishable from the ground truth at most of the test locations. While the fit is very good in the middle, towards the edges we have numerical artifacts, such as ∆̂ lying outside the interval [0, 1] or not being increasing in S. As expected, the credible bands on the right (the interpolation case, where the bands are almost invisible in the middle) are narrower than on the left (extrapolation). For example, at S = 55 and τ = 0.2 we have ∆̂(τ, S) = 0.8681 with a credible band of [0.8635, 0.8727] (the true Delta actually being 0.8642), while at the same S = 55 and τ = 0.5 we have ∆̂(τ, S) = 0.7997 (ground truth being 0.7936) and a credible band of [0.7779, 0.8215], more than 4 times wider. In other words, this particular GP surrogate is able to estimate Delta up to ±0.004 in the middle of the training set, but only up to ±0.022 when asked to extrapolate for longer maturity. This reflects a key feature of GPs: the fitted model is "self-aware" and more confident in its estimates in regions that are close to training locations. The latter notion of closeness is algebraically reflected in the fitted covariance kernel κ, specifically its lengthscale ℓ_1. Figure 1 moreover visualizes the dependence of uncertainty quantification on S: in the middle of the training set, S ∈ [40, 60], the bands are very tight, indicating that the fitted GP has high confidence regarding ∆(t, S). The bands get progressively wider at the edges.

To fully explain Figure 1, we need to give the specification of the fitted GP described by (4). This includes the GP kernel κ(x, x′), the mean function m(x), and the respective coefficients or hyperparameters ϑ. In the figure, the mean function is m(x) = β_0 + β_1 S = −20.04 + 0.58S and the kernel is squared-exponential (6) with lengthscales ℓ_1 = 0.626, ℓ_2 = 10.00, process variance σ_p² = 239.71 and noise variance σ_ε² = 1.99 · 10⁻⁴. In this example, although the training outputs Y_i are exact, for numerical purposes (namely to stabilize matrix inversion) we allow for a strictly positive observation noise σ_ε in (3). The variance parameter was taken to be an unknown constant and learned as part of maximum likelihood estimation. The MLE was carried out using a genetic-based optimizer from the package rgenoud in R, and the overall GP fitting via the DiceKriging (Roustant et al. 2012) package.
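Following up on Remark 1, here is a small sketch of the finite-difference Gamma (18) with its variance propagated through the GP posterior covariance. The helper gp_post_mean_cov stands in for any routine returning the joint posterior mean vector and covariance matrix at a set of inputs; it is an assumed interface, not part of the paper's code.

```python
# Sketch: finite-difference Gamma (18) with GP-based variance propagation.
import numpy as np

def gamma_fd(gp_post_mean_cov, t, S, delta=0.5):
    X = np.array([[t, S - delta], [t, S], [t, S + delta]])
    m, C = gp_post_mean_cov(X)          # 3-vector and 3x3 posterior covariance
    a = np.array([1.0, -2.0, 1.0]) / delta ** 2
    gamma = a @ m                       # (P(S-d) - 2 P(S) + P(S+d)) / d^2
    var_gamma = a @ C @ a               # variance of the linear combination
    return gamma, var_gamma
```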
Remark 2. While the example above considers a very simple payoff, our approach trivially generalizes to arbitrary payoff structures, including portfolios of options with varying maturities and strikes. Since the surrogate construction is completely independent of the specifics of the price function P(t, S) and samples Y_i, going from a Call option above to a collection of contracts with different (T_j, K_j), j = 1, . . . , J, only requires adjusting the code that provides the samples Y_i, while the rest proceeds as-is. Surrogates become attractive for computing sensitivities of large option portfolios even in analytic models, since they have a fixed evaluation cost, while the cost of evaluating a single P(t, S) is linear in the number of contracts J and becomes non-negligible for J large.
Observation Noise
In the cases where P(t, S) is not available exactly, the associated uncertainty will typically depend on (t, S). For real-life training datasets this would be due to varying bid-ask spreads driven by contract liquidity. For Monte Carlo based datasets, this would be due to the heteroskedastic conditional variance of the payoff as a function of S. For example, for a Call the conditional simulation variance σ²(x) tends to be higher in-the-money, since out-of-the-money nearly all empirical payoffs are zero, so that σ²(x) ≈ 0 for S ≪ K. GPs are able to straightforwardly handle non-constant σ²(x); this just requires replacing the term σ_ε² I with a diagonal matrix Σ, with Σ_ii ≡ σ²(x_i), in (4). For the case where the training set is model-based Monte Carlo (Setting b), we may estimate σ²(x_i) via the empirical variance of the Ň samples underlying the empirical average Y_i:

σ̂²(x_i) = (1/(Ň − 1)) Σ_{n=1}^{Ň} ( y_i^n − Y_i )²,    (20)

so that the noise variance of the averaged observation Y_i is σ̂²(x_i)/Ň. Using such plug-in estimates in (4)-(5) is known as the Stochastic Kriging approach, see Ankenman et al. (2010). This plug-in σ̂² works well as long as Ň is sufficiently large. The package hetGP (Binois et al. 2018) extends the idea of (20) to simultaneously learn σ²(·) and P(t, S) during the fitting step.
The baseline alternative is to assume a constant observation noise σ_ε, which is appended to the GP hyperparameters and estimated as part of the MLE optimization. This is also the recommended approach for noiseless observations, where a small amount of noise (the so-called "nugget") is added (learned via MLE) in order to regularize the optimization of the other hyperparameters.
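A sketch of the Stochastic Kriging modification follows, assuming each training site comes with a batch of Ň discounted payoffs: the constant nugget σ_ε²I in (4) is simply replaced by a diagonal matrix of per-site noise variances of the batch averages. Helper names are illustrative.

```python
# Sketch: Stochastic Kriging noise handling with per-site empirical variances.
import numpy as np

def sk_kriging_weights(K, mc_payoffs):
    """K: prior covariance matrix at the N training sites;
    mc_payoffs: list of N arrays, each holding N_check discounted payoffs."""
    y = np.array([p.mean() for p in mc_payoffs])          # the averages Y_i
    noise = np.array([p.var(ddof=1) / len(p) for p in mc_payoffs])  # (20)/N_check
    return np.linalg.solve(K + np.diag(noise), y)         # (K + Sigma)^{-1} Y
```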
Virtual Training Points
The GP model has no a priori information about the properties of P(t, S), and its fit is fully driven by the training data and the postulated prior mean m(t, S). One way to improve the fit is to add virtual observations that reflect structural properties. In particular, we can create "boundary" conditions by putting virtual observations (virtual in the sense of not coming from any data) at the edges of the training space; see the right panel of Figure 2. This ensures more stable and more confident estimates at extreme values of the inputs.
Specifically, in our case studies below we:
• Add virtual points deep in-the-money to enforce ∆(t, S) ≈ 1 in that region. This is achieved by adding ỹ_i = S_i − e^{−r(T−t_i)} K at two close but distinct, large S_i's.
• Add virtual points deep out-of-the-money to enforce P(t, S) ≈ 0 and therefore ∆(t, S) ≈ 0. This is achieved by adding ỹ_i = 0 for two close but distinct, small S_i's.
• Add virtual points at contract maturity, ỹ_i = (S_i − K)^+ for t_i = T. This enforces the correct shape of P(t, S) as t → T, in particular the at-the-money kink of the Call payoff.
Above, we use virtual price observations; it is also possible to add virtual observations on the gradients of a GP, which however requires a much more involved model fitting.
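One possible way to generate the three groups of virtual points for a Call is sketched below; the specific grid values (S_low, S_high, the number of maturity points) are illustrative choices, not taken from the paper.

```python
# Sketch: virtual training points (deep-ITM, deep-OTM, at-maturity) for a Call.
import numpy as np

def virtual_points(K, r, T, S_low=20.0, S_high=80.0, n_mat=20):
    pts, vals = [], []
    for t in np.linspace(0.0, T, 5):
        for S in (S_high, S_high + 1.0):   # deep ITM: P ~ S - K e^{-r(T-t)}
            pts.append((t, S)); vals.append(S - np.exp(-r * (T - t)) * K)
        for S in (S_low, S_low + 1.0):     # deep OTM: P ~ 0
            pts.append((t, S)); vals.append(0.0)
    for S in np.linspace(S_low, S_high, n_mat):  # maturity: payoff kink
        pts.append((T, S)); vals.append(max(S - K, 0.0))
    return np.array(pts), np.array(vals)
```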
Case Studies
In this section we present the setup of two in-depth case studies, explaining the underlying stochastic models, implementation, and assessment metrics.
Black-Scholes
Our first test environment is a Black-Scholes model, which provides a ground truth and hence the ability to compute exact errors. Moreover, we can generate an arbitrary amount and shape of training data and observation noise.
We consider European Call options, priced via the classical Black-Scholes formula, which also yields closed-form expressions for the Delta, Theta and Gamma. For the training data Y_i we use Monte Carlo simulation of size Ň and a plain sample average estimator:

Y_i = e^{−r(T−t_i)} (1/Ň) Σ_{n=1}^{Ň} ( S_T^n − K )^+,    (21)

where S_T^n are i.i.d. samples obtained using the log-normal distribution of S_T given S_{t_i} = S_i. While carrying out the Monte Carlo method (which is a proxy for any computationally heavy pricing engine), we also record the empirical standard deviation of the Ň payoffs to obtain the plug-in estimator σ̂(x_i) for the input-dependent noise variance parameter as in (20).
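A sketch of producing one noisy training output as in (21), along with the plug-in noise estimate feeding into (20), is given below. The parameter values mirror the Black-Scholes example above but are otherwise illustrative.

```python
# Sketch: one Monte Carlo training sample Y_i as in (21), with noise estimate.
import numpy as np

def bs_mc_sample(t, S, K, T, r, sigma, n_check, rng):
    tau = T - t
    Z = rng.standard_normal(n_check)
    ST = S * np.exp((r - 0.5 * sigma ** 2) * tau + sigma * np.sqrt(tau) * Z)
    payoffs = np.exp(-r * tau) * np.maximum(ST - K, 0.0)
    Y = payoffs.mean()                         # noisy price estimate Y_i
    noise_var = payoffs.var(ddof=1) / n_check  # variance of the MC average
    return Y, noise_var

rng = np.random.default_rng(0)
Y, nv = bs_mc_sample(t=0.1, S=50.0, K=50.0, T=0.4, r=0.04, sigma=0.22,
                     n_check=2500, rng=rng)
```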
Local Volatility Model
For our second case study we consider a nonlinear local volatility model, where the Call price P(t, S) is available only via Monte Carlo simulation. In this setup there is no direct ground truth, and we obtain a pointwise "gold standard" estimate of ∆(t, S) through a large-scale, computationally expensive Monte Carlo simulation combined with a finite-difference approximation. We consider the Local Volatility (LV) model where the dynamics of S under the physical measure are

dS_t = µ S_t dt + σ(t, S_t) S_t dB_t,    (22)

where B is a Brownian motion. Specifically, in the numerical example in Section 5.1 we use a piecewise local volatility function σ(t, S), parametrized by S* = 50 and T* = 0.4; see Figure 2. The risk-free interest rate r is set to 0.05 and the rate of return of S is µ = 0.13. In order to compute a gold-standard benchmark for the Delta, we use the central finite-difference approximation

∆(t, S) ≈ ( P(t, S + δ) − P(t, S − δ) ) / (2δ),

with discretization parameter δ = 0.01 S_0. The two terms on the right hand side are computed via a Monte Carlo simulation with the same stochastic shocks. Namely, we approximate P(t, S ± δ) by the empirical average over 10⁶ paths simulated from the dynamics of S described in (22), with an Euler-Maruyama discretization with ∆t = T/100, two different initial values S_0 ± δ, and the same sequence of randomly sampled ∆B_{t_i}. One should note that to implement this benchmark procedure in practice, it would be necessary to calibrate the local volatility function to market data; this step is completely avoided with our GP methodology. We again consider a Call option with strike K = 50 and maturity up to T = 0.4.
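A sketch of this gold-standard benchmark appears below: Euler-Maruyama paths of (22) with common random numbers for the two bumped initial values, followed by the central difference. The function local_vol is an assumed stand-in for the paper's piecewise σ(t, S), whose exact formula is not reproduced here, and the drift argument is left generic.

```python
# Sketch: common-random-numbers finite-difference Delta benchmark for the LV model.
import numpy as np

def fd_delta_benchmark(local_vol, S0, K, T, drift, r, delta, n_paths, n_steps, rng):
    dt = T / n_steps
    dB = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))  # shared shocks
    prices = []
    for s0 in (S0 + delta, S0 - delta):
        S = np.full(n_paths, s0)
        for k in range(n_steps):                 # Euler-Maruyama step of (22)
            S = S + drift * S * dt + local_vol(k * dt, S) * S * dB[:, k]
            S = np.maximum(S, 1e-8)              # guard against negative values
        prices.append(np.exp(-r * T) * np.maximum(S - K, 0.0).mean())
    return (prices[0] - prices[1]) / (2 * delta)  # central difference
```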
Assessing Discrete Delta Hedging
To assess hedging quality, we implement a discrete-time Delta hedging strategy, which consists of rebalancing between the stock and the bank account based on a time step ∆t and the estimated ∆̂(t, S_t). We start the hedge at t = 0 with the given wealth W_0 = P(0, S_0) and update W_t according to

W_{t_{k+1}} = ∆̂(t_k, S_{t_k}) S_{t_{k+1}} + ( W_{t_k} − ∆̂(t_k, S_{t_k}) S_{t_k} ) e^{r∆t}.

Repeating this along a discrete sequence of times 0 = t_0 < t_1 < . . . < t_K = T, we finally compare the payoff Φ(S_T) to the terminal wealth W_T, recording the resulting hedging error E_T = W_T − Φ(S_T). Note that due to time discretization, even though the market is complete in both case studies, we will have E_T ≠ 0 almost surely; moreover, the distribution of E_T is affected by the physical measure, since Delta hedging is done under the physical dynamics (stock drift µ ≠ r).
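The discrete hedging loop itself is short; below is a minimal sketch where delta_fn is any Delta approximator (t, S) → ∆̂(t, S), for instance a GP surrogate gradient.

```python
# Sketch: discrete-time Delta hedging and the terminal error E_T = W_T - Phi(S_T).
import numpy as np

def hedge_path(delta_fn, S_path, times, K, r, W0):
    W = W0
    for k in range(len(times) - 1):
        dt = times[k + 1] - times[k]
        d = delta_fn(times[k], S_path[k])
        # hold d shares; remaining cash grows at the risk-free rate
        W = d * S_path[k + 1] + (W - d * S_path[k]) * np.exp(r * dt)
    return W - max(S_path[-1] - K, 0.0)   # hedging error for a Call payoff
```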
Our primary comparator for hedging performance is the benchmark/true ∆(t, S). With continuous-time hedging, the latter yields an exact hedge (zero hedging error, almost surely) since the market is complete. The following proposition describes the hedging error E_T when using ∆̂ at discrete times.
Proposition 3.1. Under the local volatility model (22), the hedging error when implementing the approximator ∆̂ at discrete instants 0 = t_0 < t_1 < · · · < t_K = T admits the decomposition E_T = E_T^{(d)} + Ê_T.

Proposition 3.1 shows that the overall hedging error E_T can be decomposed into two parts: the first, denoted E_T^{(d)}, is solely explained by the discrete-time aspect of the hedging strategy, while the second one, Ê_T, is driven by the approximation of the Delta, ∆̂. Notice that E_T^{(d)} does not depend on ∆̂. Since we are interested in studying the impact of ∆̂ vs ∆, we focus our analysis on Ê_T. Taking first and second moments allows us to quantify the respective contributions of E_T^{(d)} and Ê_T. The next Corollary addresses the contribution from Ê_T.
Corollary 3.2. Closed-form expressions are available for the mean and variance of Ê_T, as well as for the covariance between E_T^{(d)} and Ê_T.

The proofs of Proposition 3.1 and Corollary 3.2 can be found in Appendix A.
Remark 3. Suppose ∆̂ is an unbiased estimator of ∆ in the sense that E[∆̂(t_k, S_{t_k}) | S_{t_k}] = ∆(t_k, S_{t_k}) and, for any t ∈ (t_k, t_{k+1}] and any bounded Borel-measurable functions φ and ψ,

E[ φ(∆̂(t_k, S_{t_k})) ψ(S_t) | S_{t_k} ] = E[ φ(∆̂(t_k, S_{t_k})) | S_{t_k} ] · E[ ψ(S_t) | S_{t_k} ],

for every k ∈ {0, . . . , K − 1}. In that case, conditioning on S_{t_k}, we find E[Ê_T] = 0. Thus, for unbiased ∆̂, we expect to see no additional hedging loss and an additional hedging variance that is proportional to the approximation variance. In other words, good ∆̂ approximators should not impact the expected hedging loss, while the mean-squared error of ∆̂ is a proxy for the variance of the hedging loss.

Remark 4. Under continuous-time Delta hedging, we have E_T^{(d)} = 0.

Another Delta-hedging strategy is the so-called implied Delta hedging, which relies on the Black-Scholes Delta with the current implied volatility. Let IV(t, S) denote the implied volatility satisfying

P_BS(t, S, IV(t, S)) = P(t, S),

where P_BS(t, S, σ) is the Black-Scholes price for this option with volatility σ. Then the Implied Delta is ∆_I(t, S) := ∆_BS(t, S, IV(t, S)).
Note that the following relation holds:

∆(t, S) = ∆_I(t, S) + V_BS(t, S, IV(t, S)) · ∂IV(t, S)/∂S,

where V_BS is the Black-Scholes Vega. So the difference between the true and implied Delta is linked to the option Vega and the implied volatility skew. Practically speaking, the implied volatility is a local average of σ(t, S). For the local volatility case study, the implied Delta is too low OTM and too high ITM, generating a non-negligible hedging error as a result. The latter feature demonstrates the importance of properly learning option sensitivities, rather than just calibrating the immediate implied volatility surface. In terms of Proposition 3.1, we note that the Implied Delta is not unbiased, and the variance of the approximation part of the hedging error can accordingly be expressed through the Vega and the skew ∂IV/∂S.
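For completeness, here is a sketch of the Implied Delta comparator using a standard root-finder; the bracketing interval for the implied volatility is an illustrative choice and assumes the quoted price lies within the corresponding Black-Scholes price range.

```python
# Sketch: implied volatility inversion and the resulting Implied Delta.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_price(S, K, tau, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * np.sqrt(tau))
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d1 - sigma * np.sqrt(tau))

def implied_delta(price, S, K, tau, r):
    # invert the Black-Scholes formula for IV, then evaluate Delta_BS at IV
    iv = brentq(lambda s: bs_price(S, K, tau, r, s) - price, 1e-4, 3.0)
    d1 = (np.log(S / K) + (r + 0.5 * iv ** 2) * tau) / (iv * np.sqrt(tau))
    return norm.cdf(d1)
```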
Performance Metrics
Let us denote by ∆̂(t, S) a given GP-based estimate of ∆(t, S). We define the following performance metrics to assess the quality of ∆̂(t, S). In all cases, D′ refers to a discrete test set of size N′.
Metric I: RIMSE. Assuming that a gold-standard (possibly exact) ∆(t, S) is available, we compare ∆̂(t, S) to the ground truth ∆(t, S). Our main choice is the root integrated mean-squared error (RIMSE), defined as

RIMSE := sqrt( (1/N′) Σ_{(t,S)∈D′} ( ∆̂(t, S) − ∆(t, S) )² ),    (28)

for a test set D′. RIMSE is the standard L² criterion for judging the quality of ∆̂ over a region of interest. We can similarly define the RIMSE for the Greek Θ and for the option price P itself, denoted Θ_Err and P_Err.

Metric II: PnL. We also measure the quality of ∆̂(t, S) directly through the Delta-hedging P&L. We report the variance of the terminal P&L, E_T^n = W_T^n − Φ(S_T^n). Var(E_T) is available even without a ground truth. Better Delta forecasts should lead to lower variability of hedging errors, but Var(E_T) is always bounded away from zero due to time discretization.
Metric III: MAD. We observe that the Greek estimators tend to have a few small regions of large errors around the edges of D′, which inflates the RIMSE in (28). To mitigate this effect, we evaluate the Median Absolute Deviation (MAD) metric

MAD := median_{(t,S)∈D′} | ∆̂(t, S) − ∆(t, S) |,

where the median is over the discrete test set D′. Thus, the L¹ approximation error will be less than MAD at half the test sites.
Metric IV: Coverage. To assess the uncertainty quantification provided by the GP model, we evaluate the accuracy of the associated credible bands. Specifically, a good model will come close to matching the nominal coverage of its bands, i.e. the ground truth should be within the 95% credible bands at 95% of the test locations, cf. (11):

Cvr := (1/N′) Σ_{(t,S)∈D′} 1{ ∆(t, S) ∈ CI_0.95(t, S) }.

A model with Cvr < 0.95 has overly narrow credible bands, and a model with Cvr > 0.95 has them too wide.
Metric V: NLPD. The Negative Log Probability Density metric blends the testing of the posterior mean (via MSE) and of the posterior standard deviation:

NLPD := (1/N′) Σ_{(t,S)∈D′} [ ( ∆̂(t, S) − ∆(t, S) )² / (2 V_g(t, S)) + ½ log( 2π V_g(t, S) ) ],

where V_g is the posterior variance of the Delta estimator, see Section 2.4. Better models will have lower NLPD. NLPD can be viewed as combining RIMSE and Coverage.
Metric VI: Bias. To assess whether the estimator tends to consistently over- or under-estimate the true Greek, we record its statistical bias:

Bias := (1/N′) Σ_{(t,S)∈D′} ( ∆̂(t, S) − ∆(t, S) ).

Since our ∆̂ are statistically constructed, we expect minimal bias.

Metric VII: Empirical Moments of Ê_T. Reflecting Corollary 3.2, we evaluate two quantities related to the hedging loss, µ_E and V_E, obtained by integrating the Delta approximation error (respectively its square) over the hedging dates t_k against the probability density function f_{S_t}(·) of S_t. Thus, µ_E is an empirical proxy for the average extra hedging loss E[Ê_T] due to ∆̂, and V_E is an empirical proxy for the respective additional hedging variance Var[Ê_T]. Good models should have µ_E ≈ 0 and low V_E.
We generate D′ by forward-simulating (S_t) trajectories, which allows us to drop the f_{S_t} term. In both cases we use ∆t = 0.02 to sum over the hedging dates t_k in the proxies µ_E and V_E.
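Given arrays of Delta estimates, posterior variances and ground-truth values on a test set, Metrics I and III-VI reduce to a few lines. The following sketch assumes a Gaussian posterior for the NLPD term; all names are illustrative.

```python
# Sketch: Metrics I, III-VI on a discrete test set.
import numpy as np

def greek_metrics(d_hat, v_hat, d_true, z=1.96):
    """d_hat, v_hat: posterior mean/variance of Delta; d_true: ground truth."""
    err = d_hat - d_true
    rimse = np.sqrt(np.mean(err ** 2))                        # Metric I
    mad = np.median(np.abs(err))                              # Metric III
    cvr = np.mean(np.abs(err) <= z * np.sqrt(v_hat))          # Metric IV
    nlpd = np.mean(0.5 * err ** 2 / v_hat
                   + 0.5 * np.log(2 * np.pi * v_hat))         # Metric V
    bias = np.mean(err)                                       # Metric VI
    return dict(RIMSE=rimse, MAD=mad, Cvr=cvr, NLPD=nlpd, Bias=bias)
```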
Results
In this section we present the experimental results for the two case studies described above. We start with the Black-Scholes setup, where the ground truth is known and training inputs are noisy due to the Monte Carlo approximation.
Choice of GP Kernel
We first consider the impact of different GP model components on the quality of the Delta approximation. We begin with the role of the kernel family, which is the most important choice to be made by the user. To do so, we compare the SE, M52 and M32 families, each of which is fitted in turn via MLE. Recall that these three families imply different degrees of smoothness in P̂ (and hence in the fitted Greeks): the squared-exponential kernel will lead to very smooth fits, while the Matérn kernels allow more roughness. Figure 3 shows the fits and 95% credible bands across the above three kernel families and different Greeks. The results are further summarized in Table 1. While the training is done jointly in the t and S dimensions, we illustrate with one-dimensional plots that fix t and show the dependence on S only.
The top left panel shows the error P̂(t, S) − P(t, S) between the fitted and true option prices and therefore provides an immediate sense of the accuracy of the statistical surrogate. We observe that all three GP models perform well out-of-the-money (OTM) and the largest error is in-the-money (ITM). This phenomenon is driven by the higher conditional variance of the training inputs Y_i ITM, where the Monte Carlo estimates are less accurate. In essence, the observation noise is proportional to the price, and hence estimating the latter is harder when P(t, S) is higher.
Figure 3: Estimated sensitivities at t = 0.1 together with their 95% credible bands for a Black-Scholes Call across three different GP kernel families, using a space-filling experimental design (panels: Call price error, Delta ∂P/∂S, Theta −∂P/∂t, Gamma ∂²P/∂S²). The Gamma is computed using a finite difference approximation. Ground truth indicated with a dashed black line. Training set of N = 400 inputs and Ň = 2500 inner MC simulations.
The top right panel displays the resulting ∆̂(t, ·)'s, which are simply the gradients of the respective surrogates P̂(·, ·) with respect to the second coordinate. In general, all three kernels perform very well, closely matching the true Delta. We observe the very narrow credible bands of the SE kernel compared to the M52 and M32 ones, with the latter having the widest credible band. The general observation (well known in the surrogate literature) is that the smoother m*(·) is, the tighter the CI. Consequently, all the CIs of an SE-family GP will always be narrower compared to M52 or M32. The other feature we see is oscillations of the M32-based Delta deep-ITM, and moreover that all models exhibit reversion to the prior beyond the edge of the training set, manifested by ∆̂(t, S) < 1 for S ≳ 70 and ∆̂(t, S) > 0 for S ≲ 30. The virtual training points are critical in avoiding this issue and enforce ∆̂(t, S) ≈ 1 around S = 70 and ∆̂(t, S) ≈ 0 for S ≲ 30.

Table 1: Effect of the GP kernel family on learning the Delta in a Black-Scholes model. We report 7 metrics for ∆̂ (columns: Kernel, RIMSE, MAD, 95% Cvr, Bias, . . .), as well as the RIMSE for Θ and the option price P (last 2 columns, cf. (28)). All metrics are based on a gridded test set of |D′| = 80 × 20 = 1600 sites.

The bottom left panel illustrates the fitted Θ̂(t, ·), which uses the exact same GP models as in the first row of the figure, simply computing the gradient in the other coordinate. This is one of the advantages of our framework: once fitted, all sensitivities across the different coordinates are obtained in the same consistent manner. Due to the more complex shape of the Theta, and in particular the higher convexity of P(t, S) in t, the quality of Θ̂ is poorer compared to that of the Delta. Both the SE and M52 models overestimate the steep peak of Θ ATM, estimating Θ̂(0.1, 50) ≈ −5.5 rather than the true ≈ −5. We note that the M52/M32 surrogates are aware of this challenge and provide appropriately wide CI bands that contain the ground truth (in fact, the M32 band is too wide). In contrast, the SE surrogate underestimates its posterior uncertainty, with the result that its coverage for Θ is much below the nominal 95% level (i.e. the CI frequently does not contain the ground truth). Another region where all models exhibit lack of fit is S ∈ [60, 70].
Finally, the bottom right panel of Figure 3 illustrates the estimation of Γ(t, S). Numerical estimation of second-order sensitivities is extremely challenging, especially through functional approximators. In that light, the SE and M52 GP surrogates perform quite well given that they were trained on just 400 noisy observations. We do observe significant oscillations in Γ̂, especially for S ≳ 55, which is not surprising since the underlying P̂'s are not constrained in any way and tend to wiggle in the input space. The oscillations are mild for the SE kernel (again, due to its tendency to over-smooth spatially) and are very severe for M32. We note that mathematically m*(·) is only C¹ for the Matérn-3/2 family, and so there is actually no second-order sensitivity for this surrogate. In the plot we obtain an approximation through finite differences, cf. (18), which are in fact the reason for the sharp oscillations. Table 1 reports the error metrics defined in Section 3.4 for the above three surrogates. We concentrate on the estimation of ∆ (where we report 7 different metrics), and also report the RIMSE for Θ and for the option price itself, P. The surrogate utilizing an SE kernel appears to be best in terms of integrated mean squared error and also has slightly lower median absolute deviation and lower bias. However, it also has poor coverage, suggesting that it is overconfident and reports too-narrow credible bands. This is confirmed by the NLPD score, which is worse than that of the M52-based surrogate. The latter also beats SE in terms of RIMSE for Θ and yields essentially the same RIMSE for the price P. The M32-based surrogate is worst across the board, and also overestimates uncertainty (its coverage is much higher than 95%).
To summarize, there are two key take-aways. On the one hand, the SE kernel (6) tends to over-smooth and therefore has trouble reproducing the spatial non-stationarity one observes for most option payoffs (namely high convexity ATM and almost linear behavior deep-ITM and deep-OTM). It also underestimates posterior uncertainty. On the other hand, the Matérn-3/2 kernel tends to give CIs that are too wide and by its nature is a very poor choice for second-order sensitivities, like Gamma. In light of the above, we recommend the Matérn-5/2 kernel, which provides the best compromise in terms of minimizing RIMSE, MAD and NLPD, and matching coverage.
Size of Training Set
Next, Table 2 shows how the size N of the experimental design affects the fit. Naturally, a larger training set D provides more information and hence should yield a better fit. Consequently, larger N should imply lower error metrics across the board (apart from the Coverage statistic, which should converge to its nominal 95% level).

Table 2: Effect of training set size on learning the Delta and other Greeks in a Black-Scholes model. We report 8 metrics for ∆̂, as well as the RIMSE for Θ and the option price P (last 2 columns). All metrics are based on a gridded test set of 80 × 20 = 1600 sites, {S_0 : 30, 30.5, . . . , 69.5} × {t : −0.01, 0.01, . . . , 0.37}. Training is based on (21) with Ň = 2500 inner simulations, plus 50 virtual training points; the GP surrogates have a Matérn-5/2 kernel, linear trend function and estimated constant σ_ε. The reference hedging variance Var(E_T) (7th column) using the exact Delta is 0.265.

This pattern is generally observed in Table 2; we find a roughly O(N^{−1/2}) rate for RIMSE and MAD (both for Delta, as well as for Theta and Price, see the last two columns). The above trend is quite noisy because learning is not necessarily monotone in N, since the estimated GP hyperparameters change across datasets. As a result, it is possible that a surrogate with higher N has worse performance; compare N = 200 and N = 240 in Table 2. This occurs because the estimation errors in GP surrogates tend to arise via small spurious oscillations in the predicted response in regions with sparse training data. As D expands, those oscillations can shift abruptly as the MLE optimizer finds new local maxima for the hyperparameters. One very reassuring finding is that all surrogates are unbiased in their estimates of ∆, even for very low N. Another feature we observe is that learning Θ is more challenging, with the respective RIMSE converging quite slowly. This is linked to spatial non-stationarity, namely the fact that S → Θ(t, S) changes rapidly ATM but slowly ITM/OTM, and moreover goes to −∞ at-the-money at maturity.
Another important observation is that the patterns in all the considered metrics (beyond NLPD/Coverage) are broadly similar and therefore RIMSE is a good overall proxy for approximation quality. In that sense, the standard mean squared error is sufficient for assessment of the point predictions for the Greeks; NLPD is a good complement for assessing uncertainty quantification.
Simulation Design
The GP surrogate is a data-driven spatial model and consequently is sensitive to the geometry of the training set. Therefore, we analyze the impact of the shape of D, whose choice is entirely up to the modeler, on the quality of the Greeks approximation.
The spatial covariance structure driven by κ(·, ·) implies that for a given (t, S), ∆̂(t, S) is primarily determined by the training points in its vicinity. Consequently, to ensure good average approximation quality, it is desirable to spread out the training points, namely D should reflect the test set D′. The respective concept of a space-filling experimental design can be achieved in multiple ways. One obvious candidate is a gridded design, putting {t_i, S_i} on a two-dimensional lattice. A gridded D can, however, interfere with the fitting of a Gaussian process model, because only a few distinct distance values |x_j − x′_j| entering κ(x, x′) are then observed, making learning of the lengthscales more difficult. On the flip side, a gridded D makes K in (4) a Kronecker matrix, which can be exploited for computational speed-ups (Flaxman et al. 2015; Wilson and Nickisch 2015).
As an alternative to a training grid, one can utilize space-filling sequences: either deterministic low-discrepancy sequences, such as the (scrambled) Sobol and Halton sequences used widely in the Quasi Monte Carlo (QMC) literature (Lemieux 2009), or Latin Hypercube Sampling (LHS). LHS yields randomized designs that are effectively variance-reduced i.i.d. uniform samples. Both approaches allow one to specify a training set D of arbitrary size. We find that the choice of how to space-fill plays a limited role in overall performance, and we generically employ Halton sequences in subsequent experiments. Space-filling also generalizes to higher dimensions, where gridding becomes infeasible.
A related aspect concerns the impact of simulation noise on learning the Greeks. A natural question is whether it is better to train on a few highly-accurate data points, or on many low-precision inputs. This corresponds to the trade-off between design size N = |D| and the number of MC samples Ň in (19) (see also (21)). Figure 4 visualizes the RIMSE of ∆̂ as we vary N and Ň. We observe limited gains from increasing Ň, so the spatial effect dominates and the quality of the Delta approximation depends primarily on having a large (in terms of many different S-values) training set. We also note the large improvement in fit quality when the GP model switches from smoothing + interpolation to pure interpolation (the case where training inputs are exact). Indeed, we see that using N = 100 exact training points is better than training with N = 500 inputs observed with slight noise (Ň = 16,000).

Figure 4: Left: Impact of simulation design: root integrated mean squared error as a function of design size N and number of inner MC simulations Ň. Right: Comparing Delta approximation based on a space-filling design with 320 inputs to one based on 16 paths (∆t = 0.02, 320 training inputs) and 32 paths (640 training inputs). All designs are for the Black-Scholes Call case study and are augmented with an additional 50 virtual training points.
Quality of Delta Hedging
Recall from Section 3.3 that we may decompose the total hedging loss E_T into a component E_T^{(d)} due to time discretization and a component Ê_T due to the Delta approximation error. Taking the representative M52 model from Figure 3 for the Black-Scholes case study, and using n = 1, . . . , 2500 scenarios (S_t^n)_{t∈[0,T]}, with 20 hedging periods ∆t = 0.02 and S_0 ∼ N(50, 2²), we find that the resulting hedging error has Ave(E_T) = 0.0163 and Var(E_T) = 0.2980. In comparison, hedging with the exact Black-Scholes ∆ on the same set of paths yields the reference variance Var(E_T) = 0.265 (cf. Table 2), consistent with the approximate decomposition Var(E_T) ≈ Var(E_T^{(d)}) + V_E. As hedging quality increases, we observe the strongest effect on the tail of E_T. For example, in Table 2 we report the one-sided L¹ hedging loss for the Call option. We observe strong improvements as the training set gets larger and surrogate quality improves. On the other hand, very limited gains would be recorded if we reported the L¹ or L² norm of E_T.
Path-Based Training
A further motivation for the task of estimating the Greeks based on a sparse set of price data is the case where the training set D is the history of the contract price, Y_i = P(t_i, S_{t_i}), along trajectories of the underlying S_{t_0}, S_{t_1}, . . . , S_{t_n}. The latter is interpreted as historical observations, i.e. a model-free paradigm where one directly uses data to learn price sensitivities. In this setting the training set D is fixed and depends on how much data the modeler was able to collect. Clearly, a single trajectory would be insufficient for good inference; one typically would consider expired options with the same strike, indexing data by the time-to-maturity τ = T − t of the contract. (Under additional assumptions, one may also switch from the asset price S to the log-moneyness log(S/K), which allows one to simultaneously consider options with multiple strikes.) The resulting training sample is limited by the fact that asset time series tend to be non-stationary over long periods. This setting naturally suggests the possibility of dynamically updating D as more historical data is collected, see Section 5.2 below.
Path-based training makes D have an irregular pattern in the S dimension. In the right panel of Figure 4 we investigate the resulting impact on Greek approximation quality by training our GP surrogate on a collection of (S_t)-paths sampled at some fixed time frequency ∆t. The plot shows Deltas fitted on two different datasets: one generated on a grid of (t, S) values as in the previous section, and another sampled at a regular sequence of t's, but along paths of (S_t). In the latter case D = {(t_i^j, S_{i∆t}^j) : t_i^j = i∆t} for j = 1, . . . , J, with (S_·^j) being J i.i.d. paths of S started at pre-specified initial locations S_0^j. We observe that training using paths is significantly inferior relative to training using a space-filled design. The path-based D tends to have a lot of "holes" where the model is unable to accurately "see" the gradient. This leads to worse estimates of the GP hyperparameters β, as well as to wider credible bands. We find that without a lot of fine-tuning (such as setting up judicious bounds on β and carefully selecting the observation noise, which must be bounded away from zero), the GP optimizer is unable to find a reasonable fit as far as the Greeks are concerned. Instead, a path-based design causes the GP surrogate to generate unstable and strongly oscillatory ∆̂ and Θ̂, making them practically unusable. This outcome is almost unavoidable for low N, but also manifests itself even with several hundred training points. Overall, we need to more than double the training set size in order to make a path-based experimental design comparable to a space-filling one. Moreover, with an irregular path-based design, the GP model has difficulty distinguishing signal from noise. Thus, increasing Ň has only a minor effect on learning Delta; instead, the GP surrogate consistently overestimates the noise. This over-smooths the data and removes most of the benefit of more precise inputs (higher Ň in the experiment). Table 5 in the Appendix contains the full summary statistics as we vary the design size. Table 5 considers two different sampling frequencies in time, which translate into different rectangular shapes for the training D. We observe a clear trade-off in the quality of ∆̂ versus the quality of Θ̂: if we have more paths and lower sampling in time, then the Delta estimation is better and the Theta is worse. Conversely, training on fewer paths but with more frequent sampling in t has an adverse effect on ∆̂. This pattern is intuitive for a data-driven method where the quality of the approximation is explicitly linked to how much relevant information is provided in the training set. Other things being equal, we conclude that to learn Delta it is essential to have a longer history rather than higher-frequency data.
Results for the Local Volatility Model
To illustrate path-based training, we take up the local volatility (LV) case study, where we train on an irregular grid obtained by generating 25 trajectories of (S_t), saved at frequency ∆t = 0.04, for a total of 250 training (t_n, S_{t_n}) pairs. Figure 5 shows the resulting Delta, Theta and Gamma approximators across the three GP kernel families. As in the BS case study, the SE kernel has much too narrow credible bands, while the M32 kernel yields bands that are too wide. Unlike the first study, where the SE-based model overcame the poor uncertainty quantification to yield the lowest RIMSE, here the SE kernel has clear trouble in providing a good fit; see the significant error in estimating all three Greeks at both edges, especially for S ≳ 60. This is confirmed by Table 3, which shows that the SE kernel gives the worst fit among the three. We highlight the very high NLPD and very low coverage (i.e. dramatic underestimation of the posterior variance). The M52 and M32 kernels perform similarly for Delta, but M52 clearly outperforms both for Theta (where the credible band of the M32 model is absurdly wide) and for Gamma (where M32 is unstable, as expected). Table 6 in the Appendix shows the impact of the design size N on the approximation quality. Overall, we thus again find Matérn-5/2 to be the most appropriate kernel family.
For assessing Delta hedging, because we do not have the exact Delta, instead of reporting (33)-(34) in Tables 3-6 we report the variance of the terminal hedging loss E_T = W_T − Φ(S_T). Lower variance indicates better hedging; in Table 3 this is achieved with the M32 kernel. We note that in this case study the approximation variance overestimates the impact on hedging variance, because there is a positive correlation between the surrogate squared error (∆ − ∆̂)² (which is largest far from the strike K) and the specific form of σ(t, S_t), which is also largest away from K. As a result, in the context of Remark 3, the hedging-variance proxy based on the approximation variance is conservative here. We next use this LV case study to test further variations of the GP surrogates that are concerned with (i) the role of the virtual training points; (ii) learning the observation noise; and (iii) checking alternative GP regression tools. To do so, we construct several alternative GP models, with results reported in Table 4. The base model uses a Matérn-5/2 kernel with hyperparameters fitted via MLE; henceforth, it is labeled as model M1. We then consider the following variants: • M2: same setup but with no virtual points at all (training set of size 200).
• M3: same setup, but with only 30 virtual points (10 deep ITM, 10 deep OTM, 10 at maturity). The alternatives M2/M3 test the impact of virtual points, namely using fewer of them relative to the base M1.
• M4: uses the given location-dependent observation noise σ̂(x_n) from the MC samples instead of a constant σ_ε.
• M5: constant prior mean m(x) = β_0 instead of the linear trend of M1.
• M6: pre-specified de-trending using a reference Black-Scholes model. Specifically, we de-trend by subtracting a Black-Scholes Call price based on a constant σ = 0.3, utilizing the known maturity and spot. The GP surrogate is then fit to the "residual". M5/M6 illustrate the impact of the trend m(·) on the results: m(·) affects the hyperparameters of the surrogate and consequently has an (ambiguous) indirect effect on approximation quality.
• M7: a heteroskedastic GP (via hetGP) that jointly learns the noise surface σ²(·) and the price surface.

The following observations can be made regarding Table 4. First, the addition of virtual points has a very strong positive effect. Without them (case M2), the surrogate performs very poorly; thus, this is a "zero-order" feature of our approach. Moreover, the model strongly benefits from having plenty of virtual points (M3 vs M1), which are necessary to enforce the 0/1 gradient of the price surface at the edges of the domain in the asset coordinate. Second, specifying state-dependent observation noise degrades performance by introducing high-order fluctuations into the surrogate. Similarly, a more sophisticated GP method targeting heteroskedasticity is not beneficial; there is no observed gain from adding complexity and the simpler base model wins out (M1 vs M4 or M7). Third, we observe that there are gains from having a reasonable trend function, in particular to capture the dominant trend in the asset coordinate. Such de-trending helps with the spatial stationarity that GPs rely on. Thus M5, which uses m(x) = β_0, performs worse than M1, while M6, which provides a highly accurate de-trending, helps the fit.
Pathwise Hedging and Online Training
The left panel of Figure 6 illustrates using ∆̂ to carry out Delta hedging along a sample trajectory of (S_t), as would be done in practice. We consider the local volatility case study; in this scenario S_0 = 44.70 and S_T = 41.66, so the Call ends up OTM and the terminal payoff and Delta are zero. We plot the benchmark ∆(t_k, S_{t_k}) (red circles) and the GP-based ∆̂(t_k, S_{t_k}) (blue diamonds) along the 10 time-steps t_k = k∆t with ∆t = 0.04. We note that at the latter stages we have S_t ≈ 35, where the GP approximation is not so good (confirmed by the wide credible band of ∆̂); however, this has little effect on the hedging strategy since by that point the Delta is almost zero anyway. On this particular path, we start with an initial wealth of W_0 = P(0, S_0) = 1.418 and end up with a benchmark hedging error of E_T = −0.078 (this error is driven by the discrete hedging periods) and a GP-based error of E_T = −0.006, i.e. a difference of about 7 cents, with the GP strategy in particular coming out ahead.

Figure 6: Left: a sample path showing Delta hedging in the local volatility model with 10 discretization periods (∆t = 0.04). Red circles indicate the benchmark Delta; the blue vertical lines (resp. blue diamonds) indicate the 95% posterior bands (resp. posterior mean) of the estimated GP Delta ∆̂(t_k, S_{t_k}) based on 200+50 training inputs. Right: Illustrating online learning of Delta along a high-frequency sampled price path. We plot the estimation error (relative to the ground truth ∆) of the original ∆̂(t_k, S_{t_k}) (in red) and of the recursively updated ∆̂_online(t_k, S_{t_k}) (in blue), along with the respective 95% credible intervals.
Remark 5. The outputted uncertainty quantification (11) for ∆̂ can be used to implement a "sticky" hedge, where portfolio rebalancing is done only if a substantial trade is needed, so as to save on transaction costs. Specifically, one could assume that rebalancing is carried out only when the old hedging position is outside the credible band CI_α of ∆̂(t_k, S_{t_k}). In Figure 6, this would imply no trading in the last 4 periods (t > 0.24), where ∆̂(t_k, S_{t_k}) ≈ 0.
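As a rough illustration of this rule, the snippet below sketches one rebalancing decision, assuming the surrogate returns the posterior mean and standard deviation of Delta at the current (t_k, S_{t_k}); all names are hypothetical.

```r
# Sticky Delta hedging: rebalance only when the current position falls
# outside the 95% credible band of the estimated Delta.
sticky_hedge <- function(delta_mean, delta_sd, pos_old, z = 1.96) {
  lower <- delta_mean - z * delta_sd
  upper <- delta_mean + z * delta_sd
  if (pos_old < lower || pos_old > upper) {
    delta_mean   # trade to the new posterior-mean Delta
  } else {
    pos_old      # keep the old position, saving transaction costs
  }
}
```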
To aid in such Delta-hedging along a path, GP models are amenable to fast updating in the context of augmenting with new data. Namely, the matrix form of the GP predictive equations (4) can be exploited to facilitate adding new observations to improve the fit. At the initial stage, the GP surrogate is trained on N historical stock paths. Then one wishes to Delta hedge "in real time" along a new (S_t)-trajectory. To do so, we sequentially collect (k∆t, S_{k∆t}, P_{k∆t}) values at regular intervals and then simultaneously estimate the "in-sample" ∆̂(k∆t, S_{k∆t}) in order to find the new amount of shares to hedge with. In other words, at each hedging time instance we augment our training set with the just-observed data and immediately estimate the Delta at the latest (t, S_t) values. Such dynamic hedging mimics the online calibration that practitioners often carry out and amounts to recursively updating the original GP surrogate.
Adding a new training point (x_{n+1}, y_{n+1}) to an existing GP model corresponds to augmenting the kernel matrix K with an extra row/column and analogously augmenting the other terms in the GP predictive equations. This can be done very efficiently through the so-called rank-1 update if the GP hyperparameters are kept fixed, and requires just O(N²) effort compared to the O(N³) effort of inverting the full covariance matrix K in (4).
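A minimal sketch of such a rank-1 update in R, assuming the kernel hyperparameters and noise variance are frozen; `augment_inverse` is an illustrative helper, not part of any package. It augments the stored inverse of K + σ²I via the Schur complement of the new bottom-right entry.

```r
# Rank-1 augmentation of the inverse kernel matrix (hyperparameters fixed).
# Ainv: current (K + sigma^2 I)^{-1}; b: kernel vector k(X, x_new);
# c: k(x_new, x_new) + sigma^2. Cost is O(N^2) per update.
augment_inverse <- function(Ainv, b, c) {
  v <- Ainv %*% b                          # O(N^2) matrix-vector product
  s <- as.numeric(c - crossprod(b, v))     # Schur complement (scalar)
  TL <- Ainv + tcrossprod(v) / s           # updated top-left block
  TR <- -v / s                             # new off-diagonal column
  rbind(cbind(TL, TR), c(t(TR), 1 / s))    # assembled (N+1) x (N+1) inverse
}
```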
The right panel of Figure 6 illustrates dynamic hedging through the above GP surrogate updating. We start with 20 historical paths sampled at ∆t = 0.04 and then sequentially augment them with a high-frequency real-time trajectory sampled at ∆t = 0.004 (reflecting the idea that the trader is now closely monitoring the option, compared to originally downloading a fixed dataset). In Figure 6 we compare the initial ∆̂ based on the 20 original paths versus the "online" ∆̂, demonstrating how the quality of the fit improves thanks to data fusion. Online learning of the Delta makes the estimation errors smaller (closer to zero in the figure) and furthermore narrows the posterior credible bands, hence doubly improving model fit: lower bias and higher credibility.
Remark 6. One can of course proceed by brute force by simply re-estimating the entire GP surrogate as more data becomes available. That will likely give a slightly better fit. In comparison, online updating is more elegant conceptually and moreover is lightning fast since we do not need to keep re-running the MLE optimizer for the hyperparameters β.
Extending to Real-life Options Data
Our method is directly applicable to dealing with observed option data since it requires no calibration beyond fitting the GP surrogate and is predicated on training using option prices, a quantity that is readily available in real life. To do so, one would switch to a time-to-maturity τ parametrization, using historical data about options that already expired to generate a training set in the (τ, S) coordinates. Nevertheless, multiple challenges must be addressed before operationalizing this idea.
First, one must decide what an "option price" means, distinguishing between quotes and executed transactions and addressing the issue of associated non-synchronous time stamps (e.g. a market close price might not actually be a price that is directly relevant at any given fixed time of day). Moreover, quoted prices have bid/ask spreads, which could be viewed as upper/lower bounds for P(t, S). A related issue is the traded volume/open interest, which could be interpreted as a proxy for quote quality.
There are several ways to match these features with the GPR setting:
• Take σ(x) to be proportional to the bid/ask spread (probabilistically ensuring that the fitted P is within the spread);
• Take σ(x) to be a function of Traded Volume/Order Imbalance to ensure that prices of more liquid options are given more weight;
• Modify the Gaussian likelihood in (3) to account for the bid/ask spread. For example, the GPML Matlab package implements a Beta likelihood that is appropriate for "range regression".
• Use a cut-off criterion to separate liquid contracts (where prices have to be matched either exactly or within bid/ask) and illiquid ones, where observations are treated only as "vague" suggestions.
We remark that taking a non-constant σ(x) is statistically equivalent to a weighted least-squares criterion, i.e. penalizing fitting errors more (resp. less) when σ(x) is small (resp. large). Second, one would have to contend with the irregular time series of financial data, with gaps due to weekends, holidays, missing data, etc. Of note, GPR is perfectly suited for that purpose since it does not assume or require any specific shape of the training set. At the same time, as demonstrated above in the context of an irregular grid in the S-coordinate, irregular shapes can materially worsen the quality of the GP surrogate and its Greek estimators.
Finally, the described procedure so far assumed that time-to-maturity τ and asset price S are sufficient statistics for determining the option price P. For historical data, such as SPX options, we do observe strong time dependence that can be termed "VIX effects": for essentially the same (τ, S) pairs, the historical prices will be quite different (i.e. different implied vol) on different days, indicating the presence of a further latent factor. As a first step, one would need to include calendar time as another covariate, working with the triple (t, S, T) as postulated in a local volatility model. Another way to handle temporal nonstationarity would be to use a weighted regression, putting more weight on more recent data and discounting old data, which might minimize model mis-specification. A more complex extension would be to directly input VIX or other (stochastic volatility) factors when fitting the surrogate.
Conclusion and Open Problems
To conclude, we presented a framework for constructing GP surrogates for the purpose of learning option price sensitivities. Our method is completely statistical and fully generic, requiring simply a training set of (noisy) option prices. The GP surrogate is able to simultaneously provide estimates of Delta and Theta, along with their rigorously defined posterior uncertainty. Our case studies suggest that it is important to pick an appropriate kernel family, with the Matérn-5/2 striking the best compromise across the numerous performance metrics we considered. A GP M52 approximation offers a twice-differentiable surrogate for the option price that is smooth enough for Greek computation and flexible enough to capture the price surface. Our analysis further highlights the importance of boundary conditions (specifically the gains provided by including virtual training points) and careful noise modeling (in particular letting the algorithm estimate the observation variance). Another striking feature we observed is the significant impact of training set shape on the quality of the Greeks approximation, including the benefit of space-filling.
An open problem is how to handle the several well-known no-arbitrage constraints for the option price and its sensitivities. For example, a Call price must be convex and monotone increasing in S (∆ ≥ 0, Γ ≥ 0), with slope less than unity (∆ ≤ 1). It is also monotone decreasing in t, Θ ≤ 0. To incorporate such features into a GP surrogate, one may consider monotonic GPs (see, e.g., Riihimäki and Vehtari (2010)), which make use of virtual GP-gradient observations, or finite-dimensional shape-constrained GPs. Extending our R implementation to cover these is left for future research. Another related work on incorporating gradient observations into a GP model is Chen et al. (2013).
A different comparator to the GP methodology is neural networks. In this framework, one runs a neural network (NN) regression to build a surrogate for the option price and then applies auto-differentiation to get the Greeks, see e.g. (Chataigner 2021, Ch 2). The latter step is available as a native function call for any NN architecture (i.e. no analytic derivations necessary) in modern machine learning suites such as TensorFlow. Based on our preliminary experiments, NN-based Greeks tend to be unstable for small training sets (N ≲ 500) as considered here, but perform very well for N ≥ 1000. Full investigation of NN Greek approximators and the respective uncertainty quantification for Delta hedging is left to future research.
A Proofs
Proof of Proposition 3.1. Under the physical measure, we are assuming

dS_t = μ S_t dt + σ(t, S_t) S_t dB_t,

where B is a Brownian motion. We denote the price of a vanilla derivative with maturity T by P(t, S). The Delta hedging strategy perfectly replicates the derivative and can be described as

dP(t, S_t) = ∆(t, S_t) dS_t + r (P(t, S_t) − ∆(t, S_t) S_t) dt.    (35)

Let us consider an approximated Delta ∆̂. The hedging error in continuous time follows the dynamics

dE(t) = ∆̂(t, S_t) dS_t + r (E(t) + P(t, S_t) − ∆̂(t, S_t) S_t) dt − dP(t, S_t),

with E(0) = 0. By the Delta-hedging replication Equation (35), we find

dE(t) = r E(t) dt + (∆̂(t, S_t) − ∆(t, S_t)) dX_t,

where dX_t = dS_t − r S_t dt = (μ − r) S_t dt + σ(t, S_t) S_t dB_t. Then

E(T) = ∫_0^T e^{r(T−t)} (∆̂(t, S_t) − ∆(t, S_t)) dX_t.

Under discrete-time delta hedging, we have

dE(t) = r E(t) dt + (∆̂(t_k, S_{t_k}) − ∆(t, S_t)) dX_t,   t ∈ [t_k, t_{k+1}),

and we find

E(T) = Σ_k ∫_{t_k}^{t_{k+1}} e^{r(T−t)} (∆̂(t_k, S_{t_k}) − ∆(t, S_t)) dX_t.

Adding and subtracting ∆(t_k, S_{t_k}) inside the integrand yields the result.
Proof of Corollary 3.2. The result follows from conditioning on S_{t_k} and using the first two moments of X.

Tables

Table 5: Effect of training set size on estimated Delta in a Black-Scholes model with learning based on S-paths. We report 8 metrics for ∆, as well as the RIMSE for Θ and option price P (last 2 columns, cf. (28)).

Table 6: Effect of training set size N = |D| on learning the Delta in the local volatility case study. We report 6 metrics for ∆, as well as the RIMSE for Θ and option price P (last 2 columns, cf. (28)). All metrics are based on a gridded test set D of 31 · 11 = 341 sites.

This short RMarkdown file presents an illustrative use of Gaussian Process surrogates for estimation of option sensitivities. We directly embed R code snippets to showcase the straightforward use of the methodology.
Training Dataset
```r
set.seed(101)
dx <- 0.01
xTest <- seq(27, 73, by = 1); tTest <- 0.1  # test set to be plotted
r <- 0.04                                   # interest rate
```

We consider learning the Greeks of a Call option within a Black-Scholes model. The Call has strike K = 50 and maturity T = 0.4. To do so, we employ a training set of 450 total training locations, with 400 actual inputs plus another 50 "virtual" inputs to capture the boundary conditions. Our task is to learn the Delta/Theta/Gamma of a Call as a function of the current stock price S_t (henceforth the spot) and time t. The inputs themselves are in the rectangle [30, 70] × [−0.01, 0.38].
The 400 training input-output tuples are constructed by sampling 400 locations via the space-filling Halton sequence (available in the randtoolbox package) and then running a Monte Carlo approximation of the respective option price through a plain Monte Carlo draw of 2500 i.i.d. samples based on the log-normal law of S(T) (the simulation engine is viewed as a black box by the modeler).
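The following R sketch mirrors this construction; the volatility value σ = 0.25 is an illustrative placeholder (the text treats the pricing simulator as a black box), while the strike, maturity and rate match the values above.

```r
library(randtoolbox)  # for the Halton sequence

# Sketch of the training-set construction; sigma = 0.25 is illustrative.
K <- 50; T <- 0.4; r <- 0.04; sigma <- 0.25
n_train <- 400; n_mc <- 2500

u  <- halton(n_train, dim = 2)   # space-filling design in [0,1]^2
S0 <- 30 + 40 * u[, 1]           # spot inputs in [30, 70]
t0 <- -0.01 + 0.39 * u[, 2]      # time inputs in [-0.01, 0.38]

mc_price <- function(S, t) {
  tau <- T - t
  Z  <- rnorm(n_mc)              # log-normal law of S(T)
  ST <- S * exp((r - 0.5 * sigma^2) * tau + sigma * sqrt(tau) * Z)
  exp(-r * tau) * mean(pmax(ST - K, 0))  # discounted Call payoff
}
y <- mapply(mc_price, S0, t0)    # noisy training outputs
```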
Training the GP surrogate
With the training set of approximate option prices constructed, we are ready to train a GP surrogate. Below we employ the DiceKriging package and select the Matern-52 kernel family, linear trend function, estimated constant observation noise (nugget) and genetic-algorithm optimizer for maximum likelihood estimation of the GP hyperparameters.
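A minimal sketch of such a fit is shown below; `S0`, `t0` and `y` are the training data from the previous snippet, the 50 virtual boundary inputs are omitted for brevity, and the exact control settings of the actual fit may differ.

```r
library(DiceKriging)

# Sketch of the GP fit described above.
design <- data.frame(S = S0, t = t0)
fit <- km(formula = ~ S + t,       # linear trend function
          design = design,
          response = y,
          covtype = "matern5_2",   # Matern-5/2 kernel family
          nugget.estim = TRUE,     # estimate a constant observation noise
          optim.method = "gen")    # genetic-algorithm MLE (via rgenoud)

# Posterior mean on the test set; a Delta estimate can then be obtained,
# e.g., by finite differences of the predicted surface in S.
newdata <- expand.grid(S = xTest, t = tTest)
pred <- predict(fit, newdata = newdata, type = "UK")
```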
GNSS mobile road dam surveying for TanDEM-X correction to improve the database for floodwater modeling in northern Namibia
The aim of this study is the improvement of the TanDEM-X elevation model for future floodwater modeling by implementing surveyed road dams and the use of filter algorithms. Modern satellite systems like TanDEM-X deliver high-resolution images with a high vertical and horizontal accuracy. Nevertheless, for special applications they sometimes reach their limits in documenting important features that are smaller than the grid size. Especially in the context of 2D-hydrodynamic flood modelling, the features that influence the runoff processes, e.g. road dams and culverts, have to be included for precise calculations. To fulfil the objective, the main road dams were surveyed, especially those blocking the flood water flowing from southern Angola to the Etosha Pan in northern Namibia. First, a Leica GS16 sensor was installed on the roof of a car, recording position data in real time while driving on the road dams in the Cuvelai Basin. In total, 532 km of road dams were surveyed during 4 days while driving at a top speed of 80 km/h. Due to the long driving distances, a daily adjustment of the base station would have been necessary but was logistically not possible. Moreover, the lack of reference stations made an RTK or Network-RTK solution likewise impossible. For these reasons, the Leica SmartLink function was used. This method does not depend on classic reference stations next to the GNSS sensor but instead works with geostationary satellites sending correction data in real time. The surveyed road dam elevation data have a vertical accuracy of 4.3 cm up to 10 cm. These precise measurements contribute to rectifying the TanDEM-X elevation data, thus improving the surface runoff network for the future floodwater model, and should enhance floodwater prediction for the Cuvelai Basin.
Introduction
This study is part of a broader research project in northern Namibia concerning the topics of floods and droughts and their effects on water quality in the Iishana region. Regarding floods, ongoing investigations are being made to understand the hydrological system characterized by ephemeral Iishana. These net-like distributed pans may fill with water during the rainy season and inundate, leading to large floods that negatively affect the Iishana region and its inhabitants. Consequences include the loss of lives and high economic losses.
Up to now, a synoptic flood forecast system is only available for the entire African continent, produced by the National Oceanic and Atmospheric Administration and its Climate Prediction Center. Detailed flood routing and prediction systems are available neither at a regional level nor for the study area. Nevertheless, efforts have been made to protect susceptible areas from floods: for example, the Namibian Early Flood Warning System, a sensor-web-based pilot project starting in 2008. However, due to the large costs, the project was not funded through to completion (Mandl et al. 2012) and no longer exists. Further work has been done by Skakun et al. (2012), who extracted the maximum flood extents from satellite images for flood risk assessment. In addition, Awadallah and Tabet (2015) estimated flood extents via remote sensing data. Goormans et al. (2015) set up the first hydrological and hydrodynamic model for the Iishana region. They focused on the city region of Oshakati and modeled the effect of a dyke, which was planned by the Ministry of Regional and Local Government, Housing and Rural Development-Oshakati Town Council in 2012 (Bethune et al. 2012) and was partly realized. Nevertheless, it still does not cover the whole Iishana region. For this reason, the aim is to develop a 2D-hydrodynamic flood model covering the whole Iishana area (Fig. 1). The objective is to provide flood risk maps as well as routable flood paths. Finally, retaining measures will be suggested and the flood model will contribute to an early warning system in the context of a broader flood risk management plan. Therefore, the model requires many input parameters, such as data on land use, hydrology, topography and others.
This paper focuses on the topography of the study area and discusses the digital elevation model and the improvement of its accuracy for flood model calculations by correcting TanDEM-X elevation data via GNSS data. This study is therefore less about the detailed physics of GNSS itself and more about the application and combination of GNSS techniques for enhancing the database for further flood modeling.
Today's modern satellites provide high-resolution images with high vertical and horizontal accuracy. For example, the TanDEM-X mission delivers a digital elevation model for the whole planet with a horizontal accuracy of around 12 m and a vertical accuracy of up to 2 m (Rizzoli et al. 2017). From the hydrological point of view, this is a great improvement over former resolutions, such as the 20 m horizontal and 16 m to 10 m vertical accuracies of the Shuttle Radar Topography Mission (Smith 2003; Mukul et al. 2017). These advancements are important for experts modelling floods, as they allow more exact flood risk maps and improve the results of flood models. Accuracy is thus a factor of certainty, even more so if model results are later used for flood forecasts.
To enhance the vertical accuracy of the entire DEM, different filter algorithms have been used. Moreover, water surface interferences have been corrected by masking them out (Wendleder et al. 2013) and using interpolation procedures. With these methods, natural and anthropogenic hydrological obstacles, such as patches of vegetation, water-induced interferences or buildings, could be equalized. Anthropogenic obstacles, in this specific case road dams (Fig. 2), have been surveyed in real time by a GNSS sensor. These road dams play a significant role in the Iishana region, which is characterized by a very low relief where almost every small sink or elevation influences the flow dynamics. Using a GNSS sensor to localize specific points precisely on the ground has been a common method in industry and science for years (Bisnath et al. 2003; Arroyo et al. 2005; Gao et al. 2005; Knoop et al. 2017). With a differential system, an accuracy in the millimeter range is readily achieved. However, differential GNSS systems always need a base station within a certain range; otherwise, the accuracy decreases dramatically, up to a loss of connection. For example, the Leica GS16 system works properly up to 1 km in differential mode. If this range is not wide enough for a certain use, it is possible to use the common Real-Time Kinematic (RTK) or Network-RTK mode, as with other GNSS systems. Both RTK solutions receive their correction data either from a single reference station or from a network of reference stations a few kilometers away. These systems use UHF/VHF radio signals or mobile phone networks to send their correction data to the GNSS sensor. This technique is well known in vehicle navigation and precision farming (Knoop et al. 2017; Shrivathsa and Panjwani 2017; Skog and Handel 2009; Gebbers and Adamchuk 2010; Dixon 2006). The accuracy of (Network-)RTK generated GNSS data is in the sub-centimeter range. In precision farming, the vehicles drive slowly, with a maximum speed of 11 km/h. Even here, the working range is limited by the availability of reference stations, and the application is expensive (Perez-Ruiz and Upadhyaya 2012). Moreover, potential connection losses reduce the accuracy to the meter range.
To become more independent, the Satellite Based Augmentation System (SBAS) was originally developed for the offshore industry in the early 1990s (Barboux 2000). Offshore platforms need to be positioned precisely, although they are usually located far away from the coast, where no reference stations are available. This method is nowadays called Real-Time Precise Point Positioning. In this case, the GNSS system receives the correction data directly from a geostationary satellite, which makes it applicable all around the world (Skog and Handel 2009; Perez-Ruiz and Upadhyaya 2012).
In this study, a Leica GS16 sensor was installed on the roof of a car. While driving along the road dams at a maximum speed of 80 km/h, elevation surveys were automatically corrected in real time via the commercial Leica SmartLink function with centimeter-level accuracy. After the recording days, the data were corrected by eliminating outliers and redundant measurements. In a next step, the elevation data were implemented into the post-processed TanDEM-X dataset for further hydrodynamic calculations.
Study area
Namibia's climate is strongly influenced by the cold Benguela Current along the west coast of southern Africa, leading to arid and semi-arid conditions. Determined by the Intertropical Convergence Zone (ITCZ), the rainy season runs from October to April and the dry season from May to September.
A hydrologically diverse region therein is the transboundary Cuvelai Basin, which is bounded to the north by the Kunene River and Okavango River in Angola and to the south by the Etosha Pan in Namibia. The western edge is defined by the city of Ruacana and the eastern edge lies between the cities of Okongo and Mpungu Vlei. The Cuvelai Basin is divided into eight major drainage zones (Mendelsohn et al. 2013).
In this case, the study area is part of the Iishana region, consisting of the Western Oshana Zone and the Central Oshana Zone (Cunningham et al. 1992). Reduced to the Namibian state territory, it is marked to the north by the Namibian-Angolan border, to the west by the city of Ruacana and to the east by the city of Oshakati (Faulstich et al. 2018) (Fig. 1). The annual precipitation is between 350 and 550 mm, increasing from west to east, and is characterized by high rainfall variability. The potential evaporation rises in the same direction from 2,600 to 3,200 mm (Mendelsohn et al. 2013; Persendt et al. 2015). The terrain lies at about 1,100-1,200 m a.s.l. and has a very flat slope ranging from 0.5 to 1.0 m/km (Mendelsohn et al. 2013). The surface hydrology is characterized by a large ephemeral river system and is affected by irregular floods during rainy seasons (Skakun et al. 2012; Kuliwoye 2010; Kundzewicz et al. 2014; Shifidi 2014). These low-slope, net-like troughs and sinks run in a northwesterly to southeasterly direction. The water depth during floods is about 1-7 m. Namibia receives most of its rainfall in the study area, and the soils are suitable for agricultural use. These circumstances lead to the fact that around 40% of Namibia's population lives in the northern region (Mendelsohn et al. 2013). Nevertheless, near-surface aeolian and fluvial sediments of clay deposits as well as lime and silicate crusts produce low infiltration capacities and high surface runoff (Hüser et al. 2001; Nguno and Angombe 2011; Goudie and Viles 2015). These facts result in widespread flooding during the rainy season. While most of the surface water comes from the Angolan side of the Iishana region, floods are intensified by local convective rain events. Fatal floods in the recent past occurred in 2008, 2009, 2010, 2011 and 2013. The large inundations affected human lives and caused huge damage to health as well as extensive damage to property and technical infrastructure such as roads, bridges and dams (Mandl et al. 2012; Skakun et al. 2012; Awadallah and Tabet 2015; Persendt et al. 2015; Bischofberger et al. 2015; Filali-Meknassi and Ouarda 2014; Mufeti 2013).
The hydrodynamic model
In this particular study, the specific flood prediction problem is a combination of multiple factors. The large catchment of about 10,000 km² and the TanDEM-X raster resolution of 12.5 m need a model that is able to handle this amount of data in an adequate time. Moreover, the model needs the ability to incorporate culverts and bridges. Furthermore, the relief is very flat (slope ranging from 0.5 to 1.0 m/km), the flow speeds of floods are low (Cunningham et al. 1992), and even low obstacles can substantially change the flow behavior instantly. Nonetheless, the areas affected by floods should be recorded as precisely as possible. Therefore, the hydrodynamic model 'FloodArea' will be used in a future step to calculate the runoff, flow concentration, flow velocity, backwater situations and inundation depth for variable time steps. 'FloodArea' is based on a simplified hydraulic approach including hydrodynamic calculations for the simulation of large areas with high spatial resolution (< 1 m). Successful applications include large-scale simulations of catchments of up to 3,000 km² in Germany (Assmann et al. 2013). As mentioned above, the hydrodynamic calculation itself will not be discussed in this work. Further literature regarding 'FloodArea' is reported in Anders et al. (2016), Assmann et al. (2013) and Fritsch et al. (2016).
Road dams and culverts
Given the cited TanDEM-X accuracy, small objects like road dams are not resolved. These road dams are elongated artificial fillings of earth material or rock on which a road runs. A road dam is designed to elevate the road and thus overcome geomorphological and topographical obstacles, and it therefore has a landscape-shaping effect. It should not be confused with flood protection dams, because its construction is altogether simpler and not suitable as a flood protection measure. The road embankment has no overflows or other auxiliary structures, with the exception of culverts. Its structure is not designed for the lateral pressure of large water masses, and it additionally hinders natural flow processes and drainage channels.
Most of the roads in the study area cross the Iishana system almost orthogonally (Fig. 3), as the major cities are connected by the shortest routes and at the lowest construction costs. During most of the year (normal rainy season and dry season), this infrastructure is not negatively affected by water. In the case of an "Efundja", a local name for a major flood, the roads become vulnerable to overspilling and erosion (Mendelsohn et al. 2013).
On the one hand, these road dams change the flow directions and hinder flow processes, which has to be taken into account by the hydrodynamic model. Water builds up, and settlement structures in this area are more affected by the floods. Of course, during this time, these important infrastructure facilities cannot be used. As an example of the benefit of measuring and implementing these road dams into the DEM, the divergence in elevation between the TanDEM-X raw data and the GNSS measured values shows differences ranging from 0.125 up to 1.955 m ("GNSS and TanDEM-X cross validation" section).
On the other hand, culverts and bridges have to be included and incorporated into the model. The consideration of small structures like low walls or road dams is crucial to analyze flow paths and assess the influence on flow velocity. Therefore, over 1000 culverts and bridges were mapped via Google Earth satellite images and were cross-validated during the field trip via a handheld Garmin eTrex GPS (Fig. 3).
A high-resolution DEM including significant obstacles is essential for accuracy in modeling surface runoff. An accurate DEM increases the predictive capacity of the flood model in terms of a better identification of main flow paths and local sinks, the locations of high flow depths and accelerated flow velocities. The exact magnitude of the advantage cannot be quantified until the model has been run, but it can be assumed to be substantial, as the flow path analysis shows in the results ("Corrected TanDEM-X" section). This analysis was done three times with different pre-settings. The first flow path calculation considered the raw TanDEM-X data set, using just the 'Fill' function in advance; otherwise, no flow path calculation would have been possible. The second calculation also used the filter algorithms ("DEM pre-processing and correction" section). The third calculation included more than 1000 culverts and bridges (Fig. 3). The flow path calculations were done according to the work of Persendt and Gomez (2016) and used the D8 algorithm as a simple approach for a validation process.
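As an illustration of this kind of comparison, the following R sketch derives D8 flow directions from two DEM versions using the terra package; the file names are placeholders, and the sinks are assumed to have been filled beforehand, analogous to the ArcGIS 'Fill' step described above.

```r
library(terra)

# Compare D8 flow directions before and after the DEM correction
# (file names are placeholders; sinks assumed already filled).
dem_raw <- rast("tandemx_raw_filled.tif")
dem_cor <- rast("tandemx_corrected.tif")

fd_raw <- terrain(dem_raw, v = "flowdir")  # D8-coded flow directions
fd_cor <- terrain(dem_cor, v = "flowdir")

# Cells whose drainage direction changes after the correction:
changed <- fd_raw != fd_cor
global(changed, "mean", na.rm = TRUE)      # share of affected cells
```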
GNSS and real-time kinematic precise point positioning
Surveying positions with a Global Navigation Satellite System (GNSS) is a common and ubiquitous method for navigation, whether while driving cars or during outdoor activities, in science and industrial sectors, or for surveying and lane detection (Knoop et al. 2017). Currently, several GNSS constellations are available. The most prominent and first developed one is the American Global Positioning System (GPS), which emerged in the 1990s (Zumberge et al. 1997). In addition, the European Space Agency (ESA) developed the Galileo program, and the Chinese system BeiDou and the Russian system GLONASS have been working in parallel with GPS for many years (Skog and Handel 2009; Li et al. 2015; Liu et al. 2017). GNSS positioning in its most basic form is founded on the measurement of pseudo-ranges (Shrivathsa and Panjwani 2017).
The main disturbances in positioning appear in the ionosphere and the troposphere, and different methods exist to decrease these errors. One strategy is the use of an a priori model to estimate the residual effects from measurements (Shrivathsa and Panjwani 2017). In the case of ionospheric delays, only around half of the delay can be removed, and errors of several meters can still occur. Another strategy to improve the accuracy is to measure the delay by collecting data with a dual-frequency receiver; these devices are more expensive (Shrivathsa and Panjwani 2017; Skog and Handel 2009).
Higher accuracies can be reached by relative positioning with the integration of proximate static reference stations, e.g. with Network-RTK stations. With these techniques, common mode errors for GNSS receivers in a bounded area are estimated by a nearby stationary reference station that transmits the correction information to the rover, typically via UHF radio signals (Xu 2012). The common mode error increases with the distance between the rover and the base station (Skog and Handel 2009). In differential mode, a pseudo-range measurement can reach centimeter- to millimeter-level accuracies (Shrivathsa and Panjwani 2017). Combining pseudo-range measurements and carrier-phase measurements improves the accuracy even more by resolving the unknown carrier-phase cycle ambiguities and fixing them to their integer values (Knoop et al. 2017; Ge et al. 2008; Teunissen and Khodabandeh 2014). Negative aspects of differential or relative GNSS positioning are the large and costly data transfer and the need for a dense local network of reference stations in a (Network-)RTK mode (Knoop et al. 2017; Shrivathsa and Panjwani 2017).
During the last 20 years, precise point positioning (PPP) has achieved powerful advancements, which significantly increased accuracy. It developed from a former single-constellation setup (just GPS) to multiple GNSS constellations like GPS, GLONASS, BeiDou and Galileo (Li et al. 2015; Cai and Gao 2007; He 2015). It further advanced from single- via dual- to triple-frequency processing and from ambiguity-float to ambiguity-fixed solutions (Liu et al. 2018). Moreover, as a key to base-station-independent work, it evolved from post-processing to real-time processing (Dixon 2006), usable in a static or even in a kinematic mode, via a globally distributed network of reference stations and geostationary satellites (Liu et al. 2018; Bisnath and Gao 2009; Kaplan and Hegarty 2005). Thus, PPP developed into a method able to measure precise point positions in a kinematic mode with real-time corrections: real-time kinematic precise point positioning (RTKPPP) (Bisnath and Gao 2009). The main applications of RTKPPP are in the fields of precision farming (Dixon 2006; Perez-Ruiz and Upadhyaya 2012) and marine construction (Arroyo-Suarez et al. 2005).
PPP works almost like differential GNSS, but instead of calculating the position relative to a nearby ground or base station, it uses correction data sent via a geostationary satellite. This technique provides accurate position data almost all over the globe at decimeter to centimeter quality (Knoop et al. 2017; Shrivathsa and Panjwani 2017; Liu et al. 2018; Bisnath and Gao 2009; Kaplan and Hegarty 2005; Hatch et al. 2003). In a PPP system, the receiver uses the clock and orbit information of the satellite itself, as well as the correction data from a relatively sparse global tracking network on the ground (Li et al. 2015). This stationary network calculates the corrections for all visible satellites, and the ensemble of these correction data is formatted into a binary message. This message is then sent to the hub where it is uplinked to the geostationary satellite (Knoop et al. 2017; Shrivathsa and Panjwani 2017; Liu et al. 2018; Bisnath and Gao 2009; Kaplan and Hegarty 2005; Hatch et al. 2003). The geostationary satellite broadcasts the information to the receiving antenna on the ground (Fig. 4). The communication runs in the L-band for the GPS constellation (Skog and Handel 2009; Li et al. 2015; Kaplan and Hegarty 2005; Hatch et al. 2003). A dual-frequency receiver reduces the number of reference stations needed to attain high accuracy (Shrivathsa and Panjwani 2017; Hatch et al. 2003). The satellite-based correction data are usually offered by commercial service providers (Liu et al. 2018; Bisnath and Gao 2009; Wang et al. 2018). A map of the globally distributed reference stations can be found in Li et al. (2015), and further explanations are given in Skog and Handel (2009).
To achieve an accuracy of up to 5 cm, a convergence time of 20-60 min is normally needed, depending on the actual number and geometry of visible satellites. To prevent a loss of tracking or a lock on too few satellites, an open sky and a continuously unobstructed environment are needed (Knoop et al. 2017; Bisnath and Gao 2009). On the other hand, the use of PPP has its advantages in isolated and remote locations, expansive areas and regions where a reference station infrastructure is not generally available or is too costly (Bisnath and Gao 2009).
Sensor specifications
The Leica Viva GS16 smart antenna is a multi-frequency GNSS sensor running together with the Leica CS20 field controller. Signal tracking works with GPS (L1, L2, L2C, L5), GLONASS (L1, L2), BeiDou (B1, B2, B3 limited), Galileo (E1, E5a, E5b, Alt-BOC, E6 limited), QZSS (in future), SBAS (WAAS, EGNOS, MSAS, GAGAN) and L-band. The internal Satel M3-TR4 radio modem has 555 channels, which leads to a high sensitivity and fast acquisition. The sensor can be used as a normal GNSS (absolute point positioning), in differential mode as a base station or rover, as well as in a real-time kinematic (RTK) mode with adaptive on-the-fly satellite selection (relative positioning). The RTK single-baseline performance is about 8 mm + 1 ppm horizontally and 15 mm + 1 ppm vertically. In Network-RTK mode, an accuracy of 8 mm + 0.5 ppm horizontally and 15 mm + 0.5 ppm vertically can be reached. Post-processed data measured in a static phase with a long observation time can have an accuracy of about 3 mm + 0.1 ppm horizontally and 3.5 mm + 0.4 ppm vertically. The specialty of the sensor is its commercial SmartLink worldwide correction service based on the VERIPOS Apex correction service (in this study, the Apex 2 service). The SmartLink and SmartLink fill functions constitute a remote precise point positioning system with an accuracy of up to 3 cm in 2D. The measurement precision, accuracy and reliability depend on various factors like the number of available satellites, observation time, atmospheric conditions, multipath and others. The convergence time needed to gain full accuracy can take 20 to 40 min. More general sensor information can be found in the Leica GS16 manual (Leica Geosystems AG 2018).
Using the SmartLink function means that permanently corrected data can be received from a geostationary satellite (RTKPPP, derived via 7 different geostationary satellites: 25E, 98W, 143.5E, AORE, AORW, IOR, POR) (Fig. 4). The accuracy in the commercial SmartLink mode is lower than in the RTK mode, but it is independent of proximate reference stations and applicable all over the world. Statistical post-processing is not required because the SmartLink software automatically corrects the ionospheric and tropospheric delays as well as orbit and clock lags with the help of the geostationary satellite information.
The geometric arrangement of the satellites, called Dilution of Precision (DOP), is given as GDOP (3 position coordinates plus clock offset in the solution) and PDOP (position of 3 coordinates) (Hurn 1989). A negative aspect of this commercial service is the stochastic black box, where the user has no option to look at the exact functional model and the proper intra- and inter-system weighting of observations behind it (Kazmierski 2018).
Further negative aspects of using commercial correction data instead of freely available data, for example from the CNES, are the high costs and less accurate results compared with post-processed data.
However, depending on the task and its required accuracy it can make investigations less complex for the end-user.
Measurement setup and processing strategy
For mapping the road dams, the GS16 sensor was installed on the roof of a 4 × 4 car (Figs. 5, 6). The offset between the sensor and the street surface was measured with a Leica LaserDisto 8 and amounted to 1.93 m (Fig. 6). The sensor was connected via Bluetooth with the Leica CS20 field controller in the car, which was programmed to take points automatically every 25 m, or more frequently if there was an elevation change of more than 0.3 m within that distance. The range was chosen according to the TanDEM-X digital elevation model, which has a raster resolution of 12.5 m. A higher waypoint resolution would have been possible but was not necessary because, as previous fieldwork had shown, significant elevation changes rarely occurred over shorter distances.
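The logging rule can be summarized by a small decision function; the sketch below (with illustrative names) reproduces the 25 m / 0.3 m criterion described above.

```r
# Point-logging rule of the field controller: record a point after 25 m of
# travel, or earlier if the elevation changed by more than 0.3 m since the
# last recorded point (function and argument names are illustrative).
take_point <- function(dist_since_last, dz_since_last,
                       d_max = 25, dz_max = 0.3) {
  dist_since_last >= d_max || abs(dz_since_last) > dz_max
}
```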
The coordinate reference frame used by Apex 2 is ITRF2014. Due to the settings of the CS20 field controller, it was automatically transformed into ITRF2008, and the measurements were later reassigned to a geodetic format with the WGS 84 ellipsoid and datum, given in the projected Cartesian coordinate system UTM 33S. The selectable satellite systems were GPS, GLONASS, Galileo and BeiDou. The Leica SmartLink option was selected by activating the augmentation system, so signals from geostationary satellites and their real-time correction data could be received. As mentioned above, the commercial function used is based on the VERIPOS Apex 2 system, which only allows receiving real-time correction data for GPS and GLONASS; real-time correction data for BeiDou and Galileo could therefore not be received. The quality of the received correction data is guaranteed by ISO 17123-8. More information about the processing strategy is given in Table 1. The satellite cut-off angle was set to 10°. A DOP limit was not determined in advance but was verified in the post-evaluation.
Each of the four surveying days (Fig. 3) started and ended at the campus site of the University of Namibia in Ongwediva. Before driving commenced, the GS16 was installed on the car and a convergence time of 20 to 40 min was given until the sensor achieved a 1D accuracy of lower than 10 cm.
Due to the limitation in time and parallel projects, the focus was on the main roads, which had to (1) run transverse to the flow direction of the Iishana and (2) offer security for the drivers, passengers and the car. Therefore, some remote gravel roads lacking security had to be excluded, like the section between Okahao and Outapi, which was too dangerous to drive on. The road dams east of Oshakati and Ongwediva were excluded because the hydrological network differs from the western system and will not be investigated further (Fig. 3). In addition, part of a short track between Tsandi and Ruacana was ignored, as the system flows almost parallel to the road, so the road dam will not have any important influence on the flow dynamics in this part of the study area (Fig. 3). A longer track starting west of the city of Okahao and leading east of Ogongo was omitted in consequence of its flatness. The driving speed spanned between 0 and 80 km/h: during breaks, for example at traffic lights, the speed was 0 km/h, while on long straight route sections it was constantly about 80 km/h. In general, the speed was adapted to the road conditions and traffic situation. Outliers like waypoints leading to parking lots were excluded manually. Furthermore, it has to be considered that, due to the transverse gradient of the roads, especially in wide curves, the highest positions could not always be driven along because of road traffic regulations. To improve the validation process, data from the Namibian Roads Authority (NRA) were utilized for cross validation; three datasets (M0092, M0133, M0123) were provided.
The NRA had partly measured the same roads and their heights at intervals of 500 m with a Trimble R10 GNSS receiver in 2015. Likewise using a proprietary provider for real-time correction in motion, they used the Trimble Continental CenterPoint RTX data. Similar to the measuring campaign of this study, they drove along the roads at a maximum speed of 80 km/h, recording in the ITRF2014 coordinate reference frame, which was later converted into a WGS 84 format. They also measured while connected to the GPS and GLONASS systems. Another part of the setup was the PDOP value: if the value rose by more than 0.5 m, surveying ceased. In this investigation too, post-processing was not necessary because of the real-time correction. The NRA could not provide more detailed information. Nevertheless, both datasets measured nearly the same coordinates, just with another commercial device and software, but still in the same mode.
Other cross validation methods, such as double and triple measurements or long-term static single-point measurements, were not executed due to the lack of time in the field campaign.
The results of the two kinematic measurement campaigns of 2015 and 2018 were later compared for cross-checking and to evaluate their deviations and similarities, giving an indication of the overall reliability of the results.
The major challenge was to find coordinates at equal locations for comparison. Most of the surveyed points could not be used for validation because the two campaigns measured in opposite directions and at different intervals (25-500 m). Therefore, a buffer of 3 m was set around the coordinates of the 2018 campaign and intersected with the NRA data. The setting of 3 m was chosen under the condition that every pair of points had to be as close as possible for a significant comparison; if the buffer had been set smaller, almost no pairs of points would have been left for comparison. Finally, 18 points were close enough to each other for further comparison, imitating a double measurement at these locations.
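A sketch of this buffer-based matching using the R package sf is given below; file and column names are placeholders, and projected UTM 33S coordinates are assumed so that the 3 m buffer distance is in meters.

```r
library(sf)

# Buffer the 2018 Leica points by 3 m and join them with the NRA points
# (file and column names are placeholders; projected UTM 33S assumed).
leica <- st_read("leica_2018.gpkg")   # surveyed road-dam points, 2018
nra   <- st_read("nra_2015.gpkg")     # NRA reference points, 2015

buf   <- st_buffer(leica, dist = 3)               # 3 m search radius
pairs <- st_join(buf, nra, join = st_intersects,  # keep only matched pairs
                 left = FALSE)
pairs$dz <- abs(pairs$h_leica - pairs$h_nra)      # height differences
```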
DEM pre-processing and correction
The correction of the TanDEM-X data set took place in several steps. In step one, outliers in elevation and interferences caused by water were identified for exclusion. For this purpose, the TanDEM-X data package includes different additional information layers (Wessel 2016). In this case, the TanDEM-X Water Indication Mask was used for excluding the strong inconsistencies (Wendleder et al. 2013).
The generated holes had to be interpolated to adapt the surface to the surrounding area. This was done with the Image Analyzer in ArcGIS. Here, the mask function was used with a minimum value of 0 and a maximum value of 3034 for band 1; as a result, missing values were set to NoData. Afterwards, gaps were filled with the Elevation Void Fill function (Short Range IDW Radius "off" and Max. Void Width "fill all"). Then, smaller sinks were filled with the standard Fill function of ArcGIS to eliminate further voids. Subsequently, extreme values were reduced by a 3 × 3 low-pass filter (Pipaud et al. 2015). The last step was the implementation of the GNSS records: a 25 m buffer was set around the GNSS point data, a kernel interpolation between the GNSS points was applied within the buffer area, and the new dataset was implemented via the ArcGIS function Mosaic to New Raster.
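The masking and low-pass steps could also be reproduced outside ArcGIS, for example with the R package terra, as in the following sketch; the file names are placeholders, and the simple neighborhood-mean void filling shown here only approximates the ArcGIS Elevation Void Fill function.

```r
library(terra)

# Mask water-induced interferences and apply a 3 x 3 low-pass filter
# (file names are placeholders for the DEM and Water Indication Mask).
dem <- rast("tandemx_dem.tif")
wam <- rast("tandemx_wam.tif")

dem[wam > 0] <- NA                         # exclude flagged water cells
dem_filled <- focal(dem, w = 9, fun = mean, na.rm = TRUE,
                    na.policy = "only")    # crude void fill: neighborhood mean
dem_smooth <- focal(dem_filled, w = matrix(1/9, 3, 3))  # 3 x 3 low-pass
```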
GNSS performance
The measurement setup could be applied on all road dams as planned. In the end, 16,590 points were taken and around 532 km of main road dams were tracked (Fig. 3). Nevertheless, the post-evaluation of the recorded data made clear that points were not taken exactly every 25 m as defined in the automatic mode. Most points were taken at intervals between 25 and 27 m, but a considerable number of points was taken at distances of up to 49 m. Statistically, points were collected at a mean interval of 32.08 m.
Nevertheless, the overall GNSS performance shows PDOP and GDOP values within an excellent to ideal rating, between 3.5 and 1.2 (Tables 2, 3) (Dutt et al. 2009; Langley 1999). It should be mentioned that this performance rating was originally developed for GPS only (Langley 1999).
In addition, the 1D, 2D and 3D coordinate quality (CQ) was determined to be in the decimeter to sub-decimeter range, where 1D describes the vertical accuracy, 2D the horizontal accuracy and 3D the spherical (vertical and horizontal) accuracy. As reported in Table 2, the total accuracy in terms of 3D coordinate quality spans from 5.0 to 10.0 cm, where 10 cm was the cut-off limit; points with a lower accuracy were excluded in the post-evaluation process.
The calculated 3DCQ mean is about 7.1 cm with a standard deviation of 1.1 cm and a skewness of 0.544. The 1D and 2D coordinate quality is even better (Table 2).
GNSS and TanDEM-X cross validation
The cross-validation of the two surveying campaigns (Table 4), Leica versus Trimble, shows height differences between 0.006 and 0.585 m, with a mean difference of 0.154 m and a median difference of 0.091 m. The standard deviation is about 0.160 m, with a skewness of 1.386 and a variance of 0.026. The calculated RMSE is about 0.219 m.
Moreover, the measurements of the 2018 campaign with the Leica instrument were compared to the TanDEM-X raw data at the same positions. Here, the range is obviously larger: the minimum deviation is 0.125 m and the maximum deviation is up to 1.955 m, with a mean difference of 0.418 m and a median of 0.471 m. The standard deviation is 0.973 m, with a skewness of −1.138 and a variance of 0.947. The calculated RMSE is about 1.059 m.
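For reference, the reported summary statistics can be computed from a vector of per-pair height differences as in the following sketch (the skewness formula shown is the simple moment estimator; the variable name is illustrative).

```r
# Summary statistics for paired height differences dz (a numeric vector).
validation_stats <- function(dz) {
  c(min = min(dz), max = max(dz),
    mean = mean(dz), median = median(dz),
    sd = sd(dz), variance = var(dz),
    skewness = mean((dz - mean(dz))^3) / sd(dz)^3,  # moment estimator
    rmse = sqrt(mean(dz^2)))
}
```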
Corrected TanDEM-X
The TanDEM-X transect elevation check of Fig. 7 shows two different lines and a blue plane of height along a transect, which can be found in Figs. 8 and 9. The black line indicates the height of the TanDEM-X raw data set before any calculation. The red dotted line designates the same path after the correction of water interferences and the application of filtering algorithms as described before ("DEM pre-processing and correction" section). Up to this point, the topography is smoothed and represents a more realistic and natural image: sharp edges could be equalized, and some overestimated parts (water interferences and vegetation patches) have been lowered while other underestimated parts have been raised. Until that point, the road dam is still not clearly visible.
The blue plane shows the final corrected DEM including the surveyed road dam, which is also visible in Fig. 9. Except for the road dam, where a deviation of more than a meter is evident, differences in a sub-meter range could be detected. As shown in Figs. 8 and 9, the application of the different filter algorithms and the correction of the water-induced interferences have been successful, so the road dam is clearly visible and the high outlier peaks are eliminated.
The value of the results becomes clearer in Fig. 10. The effect of the road dams and the modifications including culverts and bridges leads to a change in flow behavior, here emphasized in a change of flow direction. The calculated flow paths change significantly between the raw TanDEM-X model and the two edited models where additional filter algorithms were applied and culverts and bridges have been included.
Discussion
Due to left-hand traffic, data points were usually collected while driving on the left side of the roads. It has to be acknowledged that road dams are normally transversely inclined in curves. Therefore, some of the collected point data in curves do not represent the highest part of the road dam cross section. Nevertheless, in a few parts, the attempt was made to drive the highest profile through the curves to capture the maximum height of the road dam. These circumstances led to small differences between the left and right roadsides, which theoretically have to be added to the general vertical accuracy (1DCQ).
Airfreight costs could be saved, as just one sensor was necessary instead of two, and no additional equipment was needed. Moreover, time and workforce were saved, because no differential base station had to be set up every kilometer or guarded by staff. Even the long convergence time of 30-40 min at the beginning of each measurement session was no problem at all, since this time was used to prepare the daily fieldwork.
According to Langley (1999), the GDOP and PDOP values had excellent to ideal ratings, although this quality rating was originally developed for GPS only. Even the 1D, 2D and 3D coordinate qualities were within the decimeter to sub-decimeter range. It can be assumed that multipath effects were not important, due to the sparsely vegetated and very flat landscape being only rarely interrupted by trees or buildings. Multipath effects caused by the white metal roof of the car can neither be determined nor excluded; the literature gave no hints about necessary protection measures for rooftop-installed GNSS sensors, nor any numbers for multipath measurement errors.
The filter algorithms used lead to a substantial improvement. The evaluation of the transect as well as Figs. 7, 8 and 9 shows that most of the interferences could be eliminated successfully and the geomorphology is clearly more perceptible. Nonetheless, the delivered water mask did not cover all water surfaces; especially very small ones were sometimes missed. As a consequence, the low-pass filter smoothed these objects, but some of them still exist as small dunes. However, regarding the work of Wendleder et al. (2013), it can be assumed that around 70% of the water bodies were detected. Further statistics regarding the water mask used are pending and have to be evaluated in a next step.
The validation of the measured data was limited due to the use of commercial real-time correction data. This caused a black-box effect in which not all data quality parameters were visible. Long-term single-point measurements and double and triple measurements could not be done due to the limitation of time and parallel investigations in the field. Therefore, the only chance to get an idea of the vertical accuracy was the comparison of the data with formerly measured data from the Namibian Roads Authority. In this case, different systems were used but with almost the same settings, measuring in motion at the same speed and with the use of a commercial real-time correction service. Statistical analyses showed a sub-decimeter median value. Nonetheless, even for the data of the NRA, individual quality values were missing due to the commercial real-time correction service used.

Fig. 8 TanDEM-X scene before correction. Both scenes face in a western direction with exaggeration × 12 and the indication of the transect of Fig. 7 (also located in Fig. 3)

Fig. 9 TanDEM-X scene after correction. Both scenes face in a western direction with exaggeration × 12 and the indication of the transect of Fig. 7 (also located in Fig. 3)

Fig. 10 Three different flow path calculations with ArcGIS illustrated in one image. The red stream definition, based on the raw TanDEM-X model, only used the 'Fill' function. The purple dotted stream is calculated after the application of additional filter algorithms and the inclusion of the road dams. The blue stream also includes the culverts and bridges. The road dams and culverts influence the flow dynamics
To obtain real-time correction data free of charge, different providers like the CNES can be addressed for future measurements. In addition, post-processing would contribute to increased accuracy (Wang et al. 2018; Kazmierski et al. 2018; Kazmierski 2018). For example, with the new VERIPOS Apex 5 system, correction data including BeiDou and Galileo could have been received to increase data quality. Even with the CLK91 stream via CNES, freely available correction data including all four satellite systems could have been received (Wang et al. 2018).
To some extent, the final total estimated error in height consists of different single errors and cannot be assessed in one concluding number due to various factors: the position of the car on the street (upper or lower side of the cross slope), the uncertainty of the commercial real-time correction data (partially a black box), the TanDEM-X data itself (vertical/horizontal accuracy), and the filter algorithms and interpolation procedures used. It becomes clear that millimeter accuracy cannot be reached at all. Nonetheless, the aim of the work, to improve the TanDEM-X data set for further flood modelling, has been achieved.
Conclusions
The application of the different filter algorithms and the correction of the water-induced interferences have been successful, so road dams have been made clearly visible and high outlier peaks were eliminated. These modifications, including culverts and bridges, also lead to a change of flow directions. The automatically recorded points present excellent to ideal GDOP and PDOP values, with vertical accuracies of about 4.3 cm up to 10.0 cm. The validation procedure shows large vertical differences between the GNSS measurements and the TanDEM-X raw model, ranging from about 0.125 m up to 1.955 m with a median of 0.471 m. Elevation differences between the two GNSS systems were significantly smaller, ranging between 0.006 and 0.585 m with a median of 0.091 m. These circumstances clearly illustrate the necessity of measuring smaller infrastructures for an accurate representation in the digital elevation model.
With this investigation, a scientific contribution has been made towards understanding the topographic basis of the Iishana system. Previously developed hydrological models in the Iishana region were focused on small sub-basins rather than on the entire system. The survey results reported here provide a scientific base for developing a trans-boundary flood model in the Iishana region of Angola and Namibia which would contribute towards an innovative adaptation of water management to climate change. The potential and originality of this research project is to bridge the knowledge gaps and to contribute to the scientific base for sustainable water resource management in the region.
The results underline the significance of preprocessing TanDEM-X data and the importance of incorporating road dams, culverts and bridges when a high-quality floodwater prediction model is sought.
Unprecedented rainfall index for reducing fatalities caused by sediment-related disasters
This study evaluated the effectiveness of the unprecedented rainfall index, TP, proposed by Kosugi (2022) in achieving appropriate evacuation actions by municipalities and residents for reducing the number of victims of rainfall-induced landslides and debris flows. TP is defined as the time stamp representing the situation that "the current rainfall is the largest since TP"; namely, if we go back to a time before TP, the rainfall at the current time is of a magnitude that has been experienced in the past (i.e., the situation is within the historical range) for every evaluation criterion used in the analyses. In other words, the rainfall at the current time is of a magnitude never experienced in the period from TP through the present for at least one criterion. TP was computed for all disasters that caused deaths and missing persons in Japan in 2021: the Atami, Unzen, and Okaya disasters. In every disaster, at the time of the landslide and debris flow occurrences, TP reached the beginning of the rainfall record, indicating that the rainfall had reached a magnitude never experienced in the past. Based on these results, it was confirmed that (1) the TP time series has a large potential to reduce the normalcy bias in all 3 disasters studied, (2) for evaluating anomalies in rainfall, rainfall features should be analyzed based on various evaluation criteria, and (3) the time of past land alteration at an individual location, such as the occurrence time of a historical storm event or the construction time of an embankment slope, might be set as a TP threshold.
Introduction
Every year, many lives are lost due to landslides and debris flows. Heavy rainfall is the major cause of these sediment-related disasters (SRDs), and it has been pointed out that climate change may increase the risk of SRDs. In order to predict landslides and debris flows based on rainfall observations, various methods to analyze rainfall anomalies have been proposed. One widespread method is to select two parameters that represent rainfall characteristics (rainfall parameters), such as rainfall intensity, rainfall duration, and cumulative event rainfall, and to set a rainfall threshold on a diagram showing their correlation (Fig. 1). In this method, if the plot on the diagram exceeds the threshold, the risk of SRDs is considered high.
Previous studies have discussed different rainfall thresholds, such as the rainfall intensity versus duration (I-D), the cumulative event rainfall versus duration (E-D), the cumulative event rainfall versus intensity (E-I), and the normalized intensity versus duration (NI-D) thresholds (Maturidi et al. 2021b). Among them, perhaps the most frequently used one is the I-D threshold. For instance, Maturidi et al. (2021a) established the I-D threshold for landslide forecasting in Peninsular Malaysia. Zhang et al. (2022) proposed a screening method for defining the I-D threshold for shallow landslides in Guizhou Province, China. Beyond these studies, the method is applied worldwide, including in India (Mandal and Sarkar 2021; Abraham et al. 2021), Italy (Berti et al. 2012; Sala et al. 2021), Japan (Saito et al. 2010), and the USA (Dennis et al. 2015). On the other hand, the E-D threshold was established for landslides in Chuncheon, Republic of Korea (Lee et al. 2021). Calvello et al. (2015) analyzed the correlation between cumulative event rainfall and duration for studying landslides in Rio de Janeiro, Brazil.
With regard to rainfall parameters used for SRD predictions, previous studies have considered various parameters other than rainfall intensity, duration, and cumulative event rainfall. For example, Kalubowila et al. (2021) used daily rainfall 0 to 4 days before landslide occurrences for analyzing landslides in the Matara district of Sri Lanka. In Japan, the soil water index (SWI) was proposed by the Japan Meteorological Agency as a rainfall parameter considering the effects of current and antecedent precipitation (Okada et al. 2001). A three-layer tank model with fixed parameters is used to calculate SWI, where SWI represents the sum of the water depths in each layer. SWI is practically used under the current Japanese system for issuing alert information against SRDs. SWI is also used in a study establishing a SRD warning system in Taiwan. More detailed analyses of rainfall parameters have been conducted by Zhao et al. (2022), who examined 143 parameters related to rainfall observed in a catchment located in central China and concluded that the use of more rainfall parameters can improve the accuracy and performance of debris flow prediction models. Moreover, based on comparisons between rainfall parameters and groundwater levels observed on mountain slopes, Kosugi (2015) showed that parameters for predicting SRDs should vary by location. Thus, the results of these studies suggest that, in order to assess the risk of SRDs, it is essential to analyze rainfall features based on various criteria by using many rainfall parameters.

Fig. 1 Schematic diagram showing the correlation between two rainfall parameters and a rainfall threshold for SRD risks; the envelope of the plots (blue line) represents the historical maximum.
As for the determination of thresholds for SRD forecasting, the most common approach is to use receiver operating characteristic (ROC) analysis, which can identify the optimal threshold that minimizes false alarms while maximizing correct alarms. Realistically, however, it is necessary to find an appropriate threshold that takes into account the overall social costs of missed and false alarms. By developing cost-sensitive rainfall threshold curves, Sala et al. (2021) showed that the most appropriate threshold is derived by lowering the ROC-based threshold, thereby reducing the cost of missed alarms, which is usually greater than the cost of false alarms. In practice in Japan, thresholds are often set low so that landslides and debris flows will not be missed. However, this leads to many false alarms; it is reported that the false-alarm ratio was 95.3% over the 10-year period from 2010 to 2019 (The Asahi Shimbun Company 2021). As a result, the reliability of alarms is dramatically reduced, preventing municipalities and residents from taking appropriate evacuation actions.
In view of this situation, Kosugi (2015) proposed to set the threshold to the envelope of plots on a diagram drawn with a combination of two rainfall parameters (Fig. 1). With this threshold, the announcement of a warning alarm implies that rainfall is on a scale never experienced in the past and, therefore, provides residents a clear rationale for taking evacuation actions. On the other hand, this method raises the concern that the threshold becomes too high, increasing the number of missed alarms. To cope with this problem, Kosugi (2015) proposed to obtain an envelope for each of various diagrams drawn using different combinations of various rainfall parameters. If every envelope is used as a threshold, an alert will be issued when the current rainfall exceeds the historical maximum in at least one diagram. This method seems reasonable because, in order to assess the risks of SRDs, it is essential to analyze rainfall features based on various criteria, as described above. Furthermore, a recent study by Kosugi (2022) proposed a novel rainfall parameter, the unprecedented rainfall index, TP, which is defined as the time stamp representing the situation that "current rainfall is the largest since TP". Using this index, it is possible to issue alarms based on the knowledge of at what point in the past the rainfall was about the same as, or greater than, at present.
The purpose of this study is to evaluate the effectiveness of the unprecedented rainfall index in achieving appropriate evacuation actions in the face of landslide and debris flow risks. In the following, we first present the definition and calculation method of TP. Next, TP values are computed for every storm event that caused landslides and debris flows resulting in fatalities in Japan in 2021. Based on the results, we discuss the usefulness of the unprecedented rainfall index and how it can be used to reduce losses of human lives due to SRDs.
Definition and behavior of API
The antecedent precipitation index (API), which is a rainfall parameter to evaluate the residual effects of previous precipitation, has been used frequently to analyze soil moisture dynamics (Sittner 1969), groundwater behaviors (Kosugi et al. 2008), and storm runoff generation (Fedora and Beschta 1989), as well as landslide occurrences (Suzuki and Kobashi 1981). The value of the API with half-life M [h] at time t [h], X(M, t) [mm], is defined recursively as

X(M, t) = 0.5^(1/M) X(M, t-1) + R(t),  (1)

where R(t) [mm] is the total precipitation from t-1 to t (equivalent to the 1-h rainfall intensity, I [mm/h]). The value of M indicates that precipitation decays to one-half of its initial value M h after its occurrence (Suzuki and Kobashi 1981; Descroix et al. 2002). Figure 2 exhibits the behavior of X(M, t) for various M values. When M is short, the API strongly reflects the intensity of the current rainfall. As M is lengthened, the effect of past rainfall becomes more persistent, and the API more strongly reflects the cumulative rainfall.
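To make the recursion concrete, the following Python sketch computes an API time series from hourly rainfall; it is a minimal illustration of Eq. (1), and the function and argument names (api_series, half_life_h) are ours, not from the paper.

```python
import numpy as np

def api_series(rain_mm_per_h, half_life_h, x0=0.0):
    """Antecedent precipitation index X(M, t) for an hourly rainfall record.

    Implements the recursion of Eq. (1): each hour the index decays by a
    factor 0.5**(1/M), so that an isolated rainfall pulse halves after M
    hours, and the current hour's rainfall R(t) is added on top.
    """
    decay = 0.5 ** (1.0 / half_life_h)
    x = np.empty(len(rain_mm_per_h), dtype=float)
    prev = x0
    for i, r in enumerate(rain_mm_per_h):
        prev = decay * prev + r
        x[i] = prev
    return x

# A short half-life tracks rainfall intensity; a long one tracks cumulative rain.
rain = [0.0, 10.0, 30.0, 5.0, 0.0, 0.0, 20.0, 0.0]
print(api_series(rain, half_life_h=1.5))
print(api_series(rain, half_life_h=72.0))
```

Printing both series for the same short record reproduces the behavior in Fig. 2: the short half-life spikes with each rainfall pulse, while the long half-life accumulates slowly and decays little between pulses.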
The API diagram
In this study, the API diagram is defined as the correlation diagram with two types of API (X(M1, t) and X(M2, t)) with different half-lives M1 and M2 (M1 ≥ M2) on the horizontal and vertical axes, respectively (Fig. 3). Values of X(M1, t) on the horizontal axis evaluate the effects of cumulative rainfall, and X(M2, t) on the vertical axis evaluates the effects of rainfall intensity. In Japan, the API diagram had long been used to announce warning alarms for SRDs; that is, when a plot on the API diagram exceeds a threshold, the risk of disaster is considered high. The employed M1 and M2 values had been 72 h and 1.5 h, respectively (Fig. 3a). Since groundwater levels on slopes dominantly control landslide occurrences, Kosugi et al. (2013) analyzed correspondences between API diagrams and groundwater levels observed on slopes. Figure 3 shows an example: with M1 and M2 values of 72 h and 1.5 h, respectively, the API diagram could not reproduce groundwater levels observed on a steep mountain slope with high accuracy (Fig. 3a). Kosugi et al. (2013) then optimized M1 and M2 to obtain values of 1061 h and 77 h, respectively, which reproduced the groundwater levels with higher accuracy (Fig. 3b). Furthermore, Kosugi (2015) compared API diagrams and groundwater hydrographs observed at 33 locations on mountain slopes and estimated the optimal M1 and M2 ranges to be 2.7-1398.2 h and 2.7-1138.7 h, respectively. Based on these results, Kosugi (2015) concluded that rainfall parameters for predicting SRDs should vary by location. In other words, in order to assess SRD risks, it is essential to analyze rainfall features based on various evaluation criteria by using various API diagrams with different values of M1 and M2.
Definition and calculation of the unprecedented rainfall index
The unprecedented rainfall index, TP, is defined as the time stamp representing the situation that "current rainfall is the largest since TP". Kosugi (2022) proposed the following procedure for computing TP.
Step 1: Calculate the APIs X(M1, t) and X(M2, t) for two different half-lives M1 and M2 (M1 ≥ M2), respectively, using rainfall data for the entire period from the start of observation to the present time. Then, draw an API diagram as shown in Fig. 4 by plotting data during the current and past storm events, with X(M1, t) on the horizontal axis and X(M2, t) on the vertical axis. In this study, a storm event is considered to be in progress when the rainfall in the past 24 h is greater than zero, and a storm event is delimited by a rainfall-free period of 24 h or longer.
Step 2: On the API diagram shown in Fig. 4, define the upper-right region (indicated by the red square) of the plot at the current time. Then, extract the individual plots during past storm events that are located in this region (including its boundary). As the figure shows, the plot at the current time is equal to or smaller than each of the extracted plots in both API values.
Step 3: Among the plots extracted in Step 2, choose the plot which occurred closest to the current time. The occurrence time of the chosen plot is a candidate for TP. In the example shown in Fig. 4, the candidate TP is "3:00 a.m., Jul. 4, 2004".
Step 4: Set the combination of M1 and M2 to a different combination from that in Step 1. Then, by performing Steps 1 through 3 above, determine a new candidate TP. This process is repeated for various combinations of M1 and M2, and a TP candidate is determined for each of the various API diagrams.
Step 5: The earliest TP among the candidates obtained from the above process is designated as TP.
The use of various API diagrams in Step 4 corresponds to the aforementioned analysis of rainfall features using various evaluation criteria effective for predicting SRDs (Kosugi 2015). Adopting the time stamp that is farthest back in time among the various TP candidates (i.e., Step 5) means that, for at least one criterion, the rainfall at the current time is of a magnitude never experienced in the period from TP through the present (TP itself excluded). In other words, if we go back to a time at or before TP, the rainfall at the current time is of a magnitude that has been experienced in the past (i.e., the situation is within the historical range) for every evaluation criterion used in the analysis. In this sense, we can say that the rainfall at the current time is the largest since TP.
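The following sketch implements Steps 1-5 on top of the api_series helper above. It is an illustrative simplification with names of our choosing: for brevity it scans every past hour of the record, rather than restricting the comparison to hours within past storm events as the procedure prescribes.

```python
import numpy as np

def tp_candidate(x1, x2, times):
    """T_P candidate for one API diagram (Steps 1-3).

    x1, x2: numpy arrays holding X(M1, t) and X(M2, t) over the whole record;
    times:  matching sequence of timestamps, with the last element = "now".
    Returns the occurrence time of the most recent past plot lying in the
    upper-right region of the current plot (boundary included); if no such
    plot exists, the current rainfall is unprecedented on this diagram and
    the start of the record is returned.
    """
    cur1, cur2 = x1[-1], x2[-1]
    dominated = (x1[:-1] >= cur1) & (x2[:-1] >= cur2)
    idx = np.nonzero(dominated)[0]
    return times[idx[-1]] if idx.size else times[0]

def unprecedented_rainfall_index(rain, times, half_lives):
    """T_P over all diagram combinations (Steps 4-5): earliest candidate."""
    half_lives = sorted(half_lives)
    apis = {m: api_series(rain, m) for m in half_lives}
    candidates = [
        tp_candidate(apis[m1], apis[m2], times)
        for i, m1 in enumerate(half_lives)
        for m2 in half_lives[: i + 1]  # enforce M1 >= M2
    ]
    return min(candidates)
```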
Regarding the evaluation criteria effective for predicting SRDs, 1326 different API diagrams were used to analyze rainfall features in this study. That is, the half-lives M1 and M2 were set in the range of 0.1 to 1500 h, in a total of 51 values spaced so that the change in the logarithmic value is constant. Every combination of M1 and M2 (M1 ≥ M2) was employed, resulting in the 1326 different API diagrams. The minimum value of M1 and M2 (i.e., 0.1 h) was chosen so that the API is almost the same as the 1-h rainfall intensity, while the maximum value was set to 1500 h with reference to the maximum value (1398.2 h) estimated by Kosugi (2015).
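A short check reproduces this setup, generating the 51 log-spaced half-lives and confirming the resulting number of diagrams:

```python
import numpy as np

# 51 half-lives from 0.1 h to 1500 h, equally spaced on a log scale
half_lives = np.logspace(np.log10(0.1), np.log10(1500.0), num=51)

# Every pair with M1 >= M2 (M1 == M2 allowed) gives one API diagram
n_diagrams = 51 * 52 // 2
print(n_diagrams)  # 1326, matching the number of diagrams used in the text
```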
We used the following procedure to obtain an initial X value in Eq. (1) for calculating the API time series for each half-life value. First, the API time series was calculated assuming an initial value of zero. We found that, regardless of the half-life value, the effects of this initial value on the API were almost eliminated after three years from the start of the calculation. Therefore, from the API time series for the period after 3 years from the start of calculation, we extracted the APIs with the same month, day, and hour as the start of the time series. The average of the extracted APIs was then used as the initial value for calculating the API time series subsequently used for the TP calculations.
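A hedged sketch of this warm-up procedure, assuming hourly data held in a pandas Series and reusing the api_series helper above (the function name is ours):

```python
import pandas as pd

def warmed_initial_value(rain, half_life_h):
    """Initial X for Eq. (1), following the warm-up procedure described above.

    rain: hourly rainfall as a pandas Series indexed by Timestamp. A first
    pass starts from X = 0; after three years that choice hardly matters,
    so the API values falling on the same month/day/hour as the record
    start are averaged and returned as the initial value for the final run.
    """
    x = pd.Series(api_series(rain.to_numpy(), half_life_h), index=rain.index)
    start = rain.index[0]
    later = x[x.index >= start + pd.DateOffset(years=3)]
    same_clock = later[
        (later.index.month == start.month)
        & (later.index.day == start.day)
        & (later.index.hour == start.hour)
    ]
    return float(same_clock.mean())
```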
Studied disasters
In Japan, the number of SRDs in 2021 was reported to be 972, resulting in the deaths of 32 people and one person missing, as reported by the Ministry of Land, Infrastructure, Transport and Tourism (MLIT) (2022). These casualties were caused by the 3 disasters shown in Fig. 5. This study computed TP values for these 3 disasters for the purpose of discussing the usefulness of the unprecedented rainfall index for reducing deaths and missing persons. Precipitation data from reliable stations close to each disaster site were used to calculate the TP values. By collecting data as far back in time as possible, we were able to obtain records of 66.5, 47.4, and 43.4 years for the Atami, Unzen, and Okaya disasters, respectively.
The Atami disaster
From Jun. 28 through Jul. 6, 2021, a stationary front brought a heavy storm event with a total precipitation of 528 mm and the maximal 1-h intensity of 24 mm/h at Atami City, Shizuoka Prefecture, Japan. In the morning of Jul. 3, a large debris flow occurred at the Aizome River in Izusan district in Atami City. According to Imaizumi et al. (2022), the debris flow occurred owing to a failure of an embankment slope in the upper reaches of the basin (Fig. 6a) which was constructed in or around 2010. In the downstream region from the valley outlet, the debris flow engulfed houses on both sides of the channel, and although some concrete structures remained, many houses in the center part of the flow area were washed away (Fig. 6b). Thus, the debris flow caused extensive damages, resulting in the death of 26 people, leaving one person missing.
For calculating TP values, we used the 1-h precipitation data from Mar. 1, 1985 through Jul. 3, 2021 observed at the Atami observatory operated by Shizuoka Prefecture, which is located approximately 2.9 km away from the collapsed slope. Because the data at the Atami observatory can only be traced back 36 years, we additionally used the 1-h precipitation data observed at the Taga observatory (from Jan. 1, 1982 through Mar. 1, 1985), the Tanna observatory (from Jan. 1, 1976 through Dec. 31, 1981), and the Ajiro observatory (from Jan. 1, 1955 through Dec. 31, 1975). The Taga, Tanna, and Ajiro observatories are located approximately 5.2, 5.6, and 6.1 km, respectively, away from the Atami observatory, and are operated by Shizuoka Prefecture, MLIT, and the Japan Meteorological Agency, respectively. Missing data were either supplemented with data from the nearest station or interpolated based on daily precipitation.
The Unzen disaster
From Aug. 7 through 20, 2021, a stationary front brought a heavy storm event with a total precipitation of 1484.5 mm and a maximal 1-h rainfall intensity of 81 mm/h at Unzen City, Nagasaki Prefecture, Japan. In the early morning of Aug. 13, a landslide occurred in the Kojigoku district of Unzen City (Fig. 7a). Debris from the collapsed slope flowed down a small valley, and 2 houses located at the exit of the valley were completely destroyed, resulting in 3 deaths (Fig. 7b) (Jitousono et al. 2021).

Fig. 5 Locations of the three disasters studied. Background map provided by the Geospatial Information Authority of Japan.
For the calculation of TP, we used the 1-h precipitation data from Apr. 1, 1974 through Aug. 20, 2021 observed at the Unzendake observatory operated by the Japan Meteorological Agency, which is located approximately 0.6 km away from the collapsed slope. Missing data were either supplemented with data from the nearest station or interpolated based on daily precipitation.
The Okaya disaster
From Aug. 12 through 20, 2021, a stationary front brought a heavy storm event with a total precipitation of 423.5 mm and the maximal 1-h rainfall intensity of 44 mm/h at Okaya City, Nagano Prefecture, Japan. In the early morning of Aug. 15, debris flows occurred at Kawagishihigashi district in Okaya City. The debris flows were observed at the Honzawa Valley, Nakaohkubo Valley, and Ohkubo Valley as shown in Fig. 8a. Among them the debris flow at the Nakaohkubo Valley swept into the dwellings, resulting in a tragedy that killed 3 people (Fig. 8b) (Hiramatsu et al. 2022).
For calculating TP, we used the 1-h precipitation data observed at the Kawagishi observatory operated by Okaya City, which is located approximately 0.7 km away from the collapsed slope. Because the data from the Kawagishi observatory are available only from Mar. 28, 2021, we used data from the Shimohama observatory for the period from Mar. 18, 1978 through Mar. 27, 2021. The Shimohama observatory is about 2.8 km away from the Kawagishi observatory and is operated by MLIT. Missing data were supplemented with data from the nearest station.

Figure 9a shows the hyetograph for the storm event which caused the Atami disaster (the Atami 2021 storm event). The cumulative rainfall at 10:00 on Jul. 3 reached 488 mm, and the rainfall intensity from 9:00 to 10:00 a.m. was as high as 24 mm/h. About 10 debris flows, in total, were observed by around noon on Jul. 3 (Imaizumi et al. 2022). Figure 9b shows the time series of computed TP values. Between 12:00 a.m. and 5:00 a.m. on Jul. 1, TP reached Jun. or May of 2021. This indicated that the magnitude of the Atami 2021 storm event was still small compared to the magnitude of storm events that had occurred within about 1-2 months prior to it. Then, between 6:00 a.m. on Jul. 1 and 5:00 a.m. on Jul. 2, TP went back about a year, indicating that the Atami 2021 storm event was the largest in approximately one year. Thus, as time progressed, TP generally went back in time. From 3:00 a.m. through 6:00 a.m. on Jul. 3, TP reached Aug. of 1983; that is, the Atami 2021 storm event was the largest since the Aug. 1983 storm event. After 7:00 a.m. on Jul. 3, TP reached Jan. 1, 1955, which corresponds to the beginning of the rainfall record. This indicates that, during the time period of the debris flow occurrences (i.e., around 10:30 a.m. to noon on Jul. 3), the Atami 2021 storm event was on a scale never experienced during the period for which rainfall records exist.

Figure 10a shows the hyetograph for the storm event which caused the Unzen disaster (the Unzen 2021 storm event). The landslide at Kojigoku district reportedly occurred at around 3:55 a.m. on Aug. 13 (Nagasakishimbunsha 2021), which was about 128 h after the start of the storm event. The cumulative rainfall at 4:00 a.m. on Aug. 13 reached 863 mm. While the maximal 1-h rainfall intensity of 81 mm/h was observed between 11:00 p.m. and midnight on Aug. 12, the rainfall intensity around the occurrence of the landslide (i.e., 3:00-4:00 a.m. on Aug. 13) was 28.5 mm/h. Figure 10b shows the time series of computed TP. The value of TP went back to Jul. of 1982 during the period of 7:00-11:00 p.m. on Aug. 12. Then, after 12:00 a.m. on Aug. 13, TP reached Apr. 1, 1974, which corresponds to the beginning of the rainfall record, indicating that the Unzen 2021 storm event was on a scale never experienced before at the time of the landslide occurrence.

Figure 11a shows the hyetograph for the storm event which caused the Okaya disaster (the Okaya 2021 storm event). The Shinano Mainichi Shimbun (2021) reported that the debris flow at the Nakaohkubo Valley occurred at around 5:15 a.m. on Aug. 15, which was about 54 h after the start of the storm event. The cumulative rainfall at 5:00 a.m. on Aug. 15 reached 375.5 mm. While the maximal 1-h rainfall intensity of 44.0 mm/h was observed between 3:00 and 4:00 a.m. on Aug. 15, the rainfall intensity just before the occurrence of the debris flow (i.e., 4:00-5:00 a.m.) was not so large (i.e., 6.5 mm/h). The time series of the unprecedented rainfall index, shown in Fig. 11b, indicates that, at noon on Aug. 14, TP went back to Sep. of 1980.
During the period of 1:00-8:00 p.m., TP went back to Jul. of 2006 or Sep. of 1999. Then, after 9:00 p.m. on Aug. 14, TP went back to Mar. 18, 1978 (i.e., the beginning of the rainfall record). Thus, this figure clearly indicates that, at the occurrence of the debris flow, the Okaya 2021 storm event was on a scale never experienced before.
The Atami disaster
Under the current Japanese system, each prefectural government, in cooperation with each local meteorological observatory, issues alert information against SRDs, the so-called landslide alert information (LAI), based on rainfall analyses using a diagram that shows the correlation between SWI and 60-min cumulative rainfall. In response to the issue of LAI, municipalities are recommended to issue the evacuation alert, which requires all residents to evacuate from possibly affected areas (Japan Meteorological Agency 2022). In the case of the Atami disaster, although LAI was issued at 12:30 p.m. on Jul. 2, the evacuation alert was not issued before the disaster occurred (Fig. 9b) (Imaizumi et al. 2022). The municipality issued the emergency alert, which is positioned as the next step after the evacuation alert, at 11:05 a.m. on Jul. 3, about 30 min after the first report of the debris flow occurrence (NHK news web 2021). Thus, in the case of the Atami disaster, the LAI announcement did not lead to effective evacuation actions by the municipality and residents. The voluntary disaster prevention association organized in the district directly hit by the debris flow had received an award from the prefectural government in 2012 for its efforts to raise awareness of disaster risks and strengthen measures to prevent human casualties. The association has been aware that there are SRD warning zones in the area damaged by the Atami 2021 storm event, and the residents of this district conducted evacuation drills against SRDs in 2016 and 2019 (Imaizumi et al. 2022). Thus, the residents were believed to have a good understanding of the risk of SRDs, yet the 2021 disaster still caused human casualties.
Facing threats of disasters, people tend to minimize the probability of potential threats or their dangerous implications (Omer and Alon 1994). Individuals and organizations tend to believe in the less alarming options whenever they are presented with conflicting or ambiguous information about danger (Perry et al. 1982). This tendency is called the normalcy bias. To counter the normalcy bias and allow for suitable preparation, it is necessary to deliver timely, repeated, and unambiguous warnings and instructions (Omer and Alon 1994).
As suggested by Kosugi (2022), who analyzed TP values for a storm event that occurred in Kumamoto Prefecture, Japan, in 2020, causing 11 deaths and leaving one person missing, the information provided by the TP time series shown in Fig. 9b has a large potential to reduce the normalcy bias by unambiguously showing the abnormality of the current rainfall in comparison with previous storm events which caused historical catastrophes. For instance, Typhoon Ida, which hit this area in Sep. 1958, caused a catastrophe that left more than 1000 people dead or missing in the Izu Peninsula area (Fig. 5). This typhoon killed 4 people and left one missing in Atami City (Shizuoka Prefecture 2016). As shown in Fig. 9b, the TP time series clearly indicates that, more than 3 h before the debris flow occurrences, the Atami 2021 storm event had exceeded the heavy rainfall brought by Typhoon Ida and reached a magnitude never experienced before. Delivering this information to local authorities and residents could have led to more appropriate evacuation actions.
The Unzen disaster
In the case of the Unzen disaster, LAI was issued at 1:17 p.m. on Aug. 12 (Fig. 10b) (Kitamoto 2022), and in response, the municipality issued the evacuation alert at 4:00 p.m. (Nagasakishimbunsha 2021). However, this did not necessarily lead to appropriate evacuation actions by residents; 3 people were killed by the landslide that occurred at around 3:55 a.m. on Aug. 13.
In this region, historical disasters occurred in Aug. 1993, when torrential rains caused by a stagnant front triggered landslides and debris flows, resulting in two fatalities (Nagasaki Local Meteorological Observatory 2022). As shown in Fig. 10b, the TP time series clearly indicates that the Unzen 2021 storm event had exceeded the Aug. 1993 storm event about 9 h before the landslide. Moreover, the figure indicates that, about 4 h before the landslide, the Unzen 2021 storm event had reached a magnitude never experienced before. Indeed, an elderly woman living nearby commented on the Unzen 2021 storm event: "Since the evening of Aug. 12, it has been raining and thundering more heavily than I have ever experienced" (The Yomiuri Shimbun 2021). By using the TP time series shown in Fig. 10b, such unprecedented heavy rainfall can be clearly communicated to all residents based on objective and quantitative analyses of rainfall data. Thus, the unprecedented rainfall index is expected to be effective in facilitating proper evacuation of residents.
The Okaya disaster
In the case of the Okaya disaster, LAI was issued at 1:25 p.m. on Aug. 14 (Fig. 11b) (Hiramatsu et al. 2022). However, the municipality did not issue the evacuation alert until 6:00 a.m. on Aug. 15, about 45 min after the debris flow occurred at the Nakaohkubo Valley, causing 3 victims (The Mainichi Newspapers 2021).
In this region, torrential rains from Jul. 15 to 24, 2006 initiated landslides and debris flows, and 13 people were killed or missing. At that time, debris flows reportedly occurred in the Honzawa Valley and Matoba Valley adjacent to the Nakaohkubo Valley. Another historical disaster in this region was caused by heavy rainfall on Sep. 28-29, 1983 resulting from Typhoon Forrest. Floods, landslides, and debris flows occurred, which resulted in 9 fatalities (Geospatial Information Authority of Japan 2009).
The TP time series shown in Fig. 11b makes it clear that the Okaya 2021 storm event had exceeded both the storm event of Jul. 2006 and Typhoon Forrest in 1983, and had reached a magnitude never experienced before by 9:00 p.m. on Aug. 14, 2021, more than 8 h before the occurrence of the debris flow. Most likely, such information is effective in reducing the normalcy bias, allowing local authorities and residents to take more appropriate evacuation actions.
Storm evaluation criteria
In this study, 1326 different API diagrams were used for computing TP values. That is, the half-lives M1 and M2, which were used to calculate the APIs on the horizontal and vertical axes of each API diagram, respectively, were set in the range of 0.1 to 1500 h, in a total of 51 values spaced so that the change in the logarithmic value is constant. The use of various API diagrams corresponds to analyzing rainfall features using various evaluation criteria effective for predicting SRDs. As shown in Figs. 9 through 11, for every disaster analyzed in this study, rainfall reached a magnitude never experienced in the past at the time of the SRD occurrence. The question, then, is for which evaluation criteria the rainfall reached a magnitude never experienced in the past.
In Fig. 12, each circle shows a combination of M1 and M2 for which the API diagram exceeded the historical maximum just before or around the time of the disaster occurrence. In this figure, the red circles represent API diagrams in which both the API value on the horizontal axis (the horizontal API) and the API value on the vertical axis (the vertical API) exceeded their historical maximums (i.e., API diagrams in which the plot was located in the red region shown in Fig. 13). In particular, the red circles on the 1:1 line represent diagrams in which M1 equals M2, indicating that rainfall exceeded the historical maximum for a single API rather than for a combination of two APIs. In addition, the green circles represent API diagrams in which only the horizontal API exceeded its historical maximum, and the yellow circles represent API diagrams in which only the vertical API exceeded its historical maximum, corresponding to the green and yellow regions in Fig. 13, respectively. Moreover, the gray circles represent API diagrams in which, although neither the horizontal nor the vertical API exceeded its historical maximum, the combination of the two APIs indicated that the rainfall reached a magnitude never experienced in the past (i.e., the gray region shown in Fig. 13).
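This four-way coloring can be expressed compactly. The sketch below (with our naming) classifies the current plot of one API diagram into the regions of Fig. 13, given the two API series over the whole record as numpy arrays:

```python
import numpy as np

def exceedance_category(x1, x2):
    """Category of the current plot on one API diagram, as in Figs. 12-13."""
    cur1, cur2 = x1[-1], x2[-1]
    past1, past2 = x1[:-1], x2[:-1]
    h = cur1 > past1.max()  # horizontal API exceeds its historical maximum
    v = cur2 > past2.max()  # vertical API exceeds its historical maximum
    unprecedented = not np.any((past1 >= cur1) & (past2 >= cur2))
    if h and v:
        return "red"      # both APIs exceed their maxima
    if h:
        return "green"    # only the horizontal API exceeds its maximum
    if v:
        return "yellow"   # only the vertical API exceeds its maximum
    return "gray" if unprecedented else "within historical range"
```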
In the case of the Atami disaster, APIs with half-lives of 4.7 × 10^1 through 2.7 × 10^2 h exceeded their historical maximums just before the occurrence of the debris flow (the red circles on the 1:1 line in Fig. 12a). The API diagrams with one of these APIs on both the horizontal and vertical axes (the red circles in Fig. 12a) marked a magnitude never experienced in the past. Moreover, API diagrams with one of these APIs on either the horizontal or the vertical axis (the green or yellow circles in Fig. 12a, respectively) exceeded the historical maximum. Furthermore, in some of the API diagrams combining horizontal APIs with half-lives of 3.2 × 10^2 through 1.5 × 10^3 h and vertical APIs with half-lives of 3.2 × 10^0 through 3.9 × 10^1 h (the gray circles in Fig. 12a), rainfall reached a magnitude never experienced in the past. These results indicate that the Atami 2021 storm event is characterized by large values of APIs with middle to long half-lives. At the same time, exceedances of historical maximums were marked in the API diagrams combining a horizontal API with a half-life of 3.9 × 10^1 h and vertical APIs with half-lives of 1.0 × 10^-1 through 8.3 × 10^-1 h, as well as in the API diagrams combining a horizontal API with a half-life of 3.2 × 10^2 h and vertical APIs with half-lives of 1.0 × 10^-1 through 1.2 × 10^0 h (the gray circles in Fig. 12a). This was attributable to the largest 1-h rainfall intensity during the Atami 2021 storm event (i.e., 24 mm/h) from 9:00-10:00 a.m. on Jul. 3, just before the occurrence of the debris flow (Fig. 9a).
In the case of the Unzen disaster, APIs with half-lives of 5.7 × 10^0 through 5.7 × 10^1 h exceeded their historical maximums around the time of the landslide occurrence (the red circles on the 1:1 line in Fig. 12b). Moreover, in some of the API diagrams combining horizontal APIs with half-lives of 6.9 × 10^1 through 1.0 × 10^3 h and vertical APIs with half-lives of 8.3 × 10^-1 through 4.7 × 10^0 h (the gray circles in Fig. 12b), rainfall reached a magnitude never experienced in the past. Thus, the Unzen 2021 storm event is characterized by large values of APIs with relatively short half-lives in comparison with the storm event that caused the Atami disaster.
In the case of the Okaya disaster, APIs with half-lives of 1.0 × 10^1 through 1.5 × 10^2 h and 5.7 × 10^2 through 1.5 × 10^3 h exceeded their historical maximums just before the debris flow occurrence (the red circles on the 1:1 line in Fig. 12c). Moreover, in many of the API diagrams combining horizontal APIs with half-lives of 1.8 × 10^2 through 4.7 × 10^2 h and vertical APIs with half-lives of 1.0 × 10^-1 through 8.3 × 10^0 h, and in those combining horizontal APIs with half-lives of 3.2 × 10^2 through 4.7 × 10^2 h and vertical APIs with half-lives of 1.8 × 10^2 through 2.2 × 10^2 h (the gray circles in Fig. 12c), rainfall reached a magnitude never experienced in the past. Comparing Fig. 12c with Figs. 12a, b, it is clear that the Okaya 2021 storm event recorded historical maximums under more evaluation criteria than the Atami 2021 and Unzen 2021 storm events.
Thus, the results shown in Fig. 12 demonstrate that the combinations of M1 and M2 indicating exceedances of the historical maximum differ among the three disasters studied. This suggests that, for evaluating anomalies in rainfall and detecting exceedances over the historical maximum, it is important to analyze rainfall features based on various evaluation criteria by using various API diagrams with different combinations of M1 and M2 values.
Thresholds of TP
In the current system operated in Japan, LAIs are announced when the magnitude of rainfall is expected to exceed the threshold for SRD risks according to the rainfall forecast for the next 2 h. In the three disasters studied here, the disasters occurred after rainfall had continued for more than about 15 h after the LAIs were announced (Figs. 9 through 11). This suggests that the disasters occurred after the magnitude of the rainfall had exceeded the thresholds by a large margin. As a general trend, the thresholds are often set low so that landslides and debris flows will not be missed. As a result, there are many cases of false alarm, where no intensive landslide or debris flow occurs even though LAI is issued; it is reported that, when LAI was issued, the percentage of occurrence of intensive landslides or debris flows remained at 4.7% over the 10-year period from 2010 to 2019 (The Asahi Shimbun Company 2021). This can be one of the reasons why, as in the three disaster cases examined in this study, the evacuation alert was not issued by municipalities or residents did not evacuate even after LAI was issued.
In the case of the Unzen disaster, Fig. 10 indicates that, about 6 h after the LAI was issued, the Unzen 2021 storm event exceeded the historic storm event of Aug. 1993. This means that, for the land where landslides and debris flows occurred during the Aug. 1993 storm event, this was the first time rainfall of this magnitude had been experienced since the land was devastated by the 1993 storm event. Similarly, in the case of the Okaya disaster, Fig. 11 indicates that TP went back to the historic storm event of Jul. 2006 at the time of the LAI announcement. Hence, for the land where debris flows occurred during the Jul. 2006 storm event, this was the first rainfall of this magnitude since the land was devastated by the 2006 storm event. In addition to the LAI announcement, communicating such information provided by the unprecedented rainfall index may help to better clarify the reasons for evacuation and the areas targeted for evacuation. Consequently, using the historical disaster occurrence time as one of the thresholds and increasing the level of alertness as TP exceeds the threshold may be effective in guiding appropriate evacuation behaviors of municipalities and residents.
In the case of the Atami disaster, it is reported that the debris flow was triggered by the failure of the embankment slope constructed in or around 2010 (Imaizumi et al. 2022). Figure 9 shows that TP went back further than 2010 at 1:00 p.m. on Jul. 2, 2021, 30 min after the LAI was issued. This means that, immediately after the LAI was announced, the embankment slope was experiencing torrential rain it had never experienced before. This information provided by TP is expected to strongly encourage municipalities and residents to take evacuation actions. Thus, the time of the embankment slope construction can be one of the thresholds of TP.
As described above, a warning system based on TP can issue warning and evacuation information that takes into account the past history of individual locations. That is, the time of past land alteration at an individual location might be set as the TP threshold, as suggested by Kosugi (2022). This study pointed out the occurrence time of a historical disaster and the construction time of an embankment slope as examples of such thresholds, while Kosugi (2022) noted that the time when trees on a slope fell due to strong winds, or the time when a slope was subjected to strong shaking by an earthquake, could also be a threshold of TP.
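As a minimal illustration of this proposal, the sketch below compares a computed TP against a set of location-specific threshold times; the dates are illustrative examples drawn from the disasters discussed above, not an operational list.

```python
from datetime import datetime

# Illustrative location-specific thresholds (example dates, not operational)
TP_THRESHOLDS = {
    "historical disaster (Aug. 1993 storm)": datetime(1993, 8, 1),
    "embankment construction (c. 2010)": datetime(2010, 1, 1),
}

def exceeded_thresholds(tp):
    """Thresholds passed by the current T_P: a T_P earlier than an event's
    time means rainfall now exceeds everything experienced since that event,
    so the alert level for this location should be raised."""
    return [name for name, t in TP_THRESHOLDS.items() if tp <= t]

print(exceeded_thresholds(datetime(1974, 4, 1)))  # both thresholds exceeded
```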
Conclusions
This study evaluated the effectiveness of the unprecedented rainfall index, TP, proposed by Kosugi (2022) in achieving appropriate evacuation actions by municipalities and residents for reducing casualties caused by SRDs. Based on the analyses of all disasters that caused deaths or missing persons in Japan in 2021, i.e., the Atami, Unzen, and Okaya disasters, we derived the following conclusions.
1. The information provided by the TP time series has a large potential to reduce the normalcy bias in all 3 disasters studied, by unambiguously showing the abnormality of the current rainfall in comparison with previous storm events which caused historical catastrophes.
2. For evaluating anomalies in rainfall and detecting exceedances over the historical maximum, it is important to analyze rainfall features based on various evaluation criteria by using different combinations of various rainfall parameters.
3. The time of past land alteration at an individual location, such as the occurrence time of a historical storm event or the construction time of an embankment slope, can be set as a TP threshold, which might be effective for clarifying the reasons for evacuation and the areas targeted for evacuation.
To obtain TP values, this study used combinations of two parameters for evaluating rainfall features. In order to calculate more accurate TP values, combinations of three or more parameters should be considered in future studies, although more computational resources would be required. With regard to the rainfall parameter for calculating TP values, it is important to consider a variety of parameters other than the API used in this study. Suitable parameters and suitable combinations of parameters likely vary from location to location due to differences in geology, topography, land use, etc. Detecting the most appropriate parameters and combinations for each location is expected to contribute to improving the accuracy of SRD predictions and is another important future research topic. For practical application, the effectiveness of TP and related ideas should be verified by increasing the number of SRD analyses in future studies.
Inelastic Behavior of En 31 Steel Metallic Surface Subjected to Hard Body
In thermal power plant machinery, elements such as coal conveying systems, grinding balls, and slurry carrying systems involve the transfer of loads and displacements. In all these machines, every element is exposed to hard bodies, such as particles from coal and slurry, that cause continuous abrasive wear. Due to this abrasive wear, inelastic deformation and damage of both contacting bodies take place. Since many elements of these machines are made from En category steels, this study focused on En 31 steel. In the present study, a rubber wheel abrader was used to conduct the experiments. Abraders graded at 106 microns, 212 microns and 425 microns were used for conducting the tests, with En 31 steel as the target material. A sand flow rate of 95 grams/minute was maintained. The weight losses were found to be 0.004, 0.012 and 0.021 for the 106 micron abrader; 0.005, 0.012 and 0.020 for the 212 micron abrader; and 0.007, 0.022 and 0.023 for the 425 micron abrader. For all three test series, normal loads of 11.37 N, 35.31 N, and 58.85 N were applied. A scanning electron microscope (SEM) was used for morphological analysis. The abrader grade has only a slight influence on volume loss, and the weight loss of the material does not depend solely on increasing abrader size. The inelastic deformation and wear modes are governed by the mechanical properties of the target material, and the deformation features characterize the wear volume loss.
Introduction
In thermal power plants, machinery such as coal conveying systems, coal handling systems, fluid and slurry handling systems, and ash handling systems contains many elements made from En category steels. En category steels offer excellent properties such as wear resistance, toughness at low temperatures, and a good elastic modulus. Their high strength enables their use in shafts, punches, drills, dies, mining equipment, and ploughing elements in agriculture. The common feature of all these applications is two rubbing surfaces. Due to the continuous operations involved, there is a transfer of displacements and motions, and the transfer of displacements and loads causes the failure of machine elements, leading to permanent machine breakdowns and loss of efficiency. During continuous rubbing, a third body, i.e., dirt or sand particles, is present between the contact surfaces. As continuous relative motion takes place, the third body rubs over the surface and tends to abrade it, ploughing the surface. Material is thereby displaced from the contact surface, so that weight loss takes place and leads to element failure. Hence, to design the elements of such machinery, there is a strong need to understand the mechanisms of wear in all respects. It is also necessary to understand the nature of the wear, i.e., how it behaves with respect to load and wear type, i.e., abrasive, adhesive, and so on [1,12]. Abrasive wear: whenever two contact surfaces interlock due to their surface morphology, ploughing takes place, which leads to abrasive wear. The mechanism is shown in figure 1(a). Adhesive wear: if the contact surfaces are very smooth and sufficient adhesive bonding stress exists between the two surfaces to resist the relative sliding displacement, then adhesive wear takes place. The mechanism is shown in figure 1(b). The understanding of wear is largely theoretical, based on processes like micro-cutting, micro-fatigue and micro-cracking [2]. Many researchers have proposed theoretical models for understanding volume loss and degree of penetration. Hokkirigawa and Kato introduced a parameter called the degree of penetration, Dp, computed from the pin tip radius R, the hardness H of the target material, and the normal load W, which was used to predict the various modes of the wear phenomenon; the defining equation is given in [3]. From the model they developed, they determined that in the three abrasive wear modes, grooves are formed as a result of the abrasive particle, with plastic flow of the material creating a groove having ridges on both sides. They obtained experimental and theoretical estimates of the deformation morphology in ploughing, cutting and wedge formation [3]. Kayaba and Kato [4] developed a model for predicting adhesive wear, proposing various modes of wear. The predicted size of the wear particle is not simply equal to the size of the contact. The complexity of the wear phenomenon was quantified by defining parameters like wear modes and wear rates. Jonas Allebert et al [5] studied the wear of overlay-welded high chromium white iron and quenched and tempered low alloy carbon steels. The test results showed that the wear of the material was dependent on abrasive size. Seth et al [6] attempted to understand the wear behavior of steels coated with thermally sprayed aluminum-titanium alloy coatings.
The test results showed that the wear performance of the aluminum and titanium coatings was comparable within the weight percentage range studied. Beata Bialobrzeska & Piotr Kostencki [7] conducted laboratory and field experiments to study the wear behavior of selected low alloy boron steels. They identified two predominant wear modes, micro-cutting and micro-ploughing. Victor Gomez et al [8] attempted to identify the effect of abrasive particle size on wear. The test results indicated that the mass fraction of two different sized powders had a bearing on wear modes at the micro-scale level, while the wear rate was not found to have a direct relation with mass fraction. Vytenis Jankauskas et al [9] characterized the wear behavior of manually arc welded hard facings with low carbon and stainless steel matrices. Their experimental results showed an improvement in wear resistance when the tungsten carbide content was of the order of 42 to 48%. Hernandez et al [10] studied the role of temperature in the abrasion of boron steel and hot forming tool steels. The wear rate was found to remain constant or improve as the temperature increased up to 400ºC, and to increase with temperature above 400ºC. Bin Hwang et al [11] studied the role of tensile properties in the abrasive wear of steel saw wires. Two body wear was found to be influenced by the tensile strength and micro-hardness of the test materials, whereas the wear loss in three body wear was found to be influenced by elongation or ductility. The aim of the work presented in this paper is to provide a tool for predicting the elastic and inelastic behavior of En 31 steel when exposed to abrasive wear.
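The degree-of-penetration parameter cited above from Hokkirigawa and Kato is not reproduced in this text, so the sketch below is only an illustrative reconstruction: it takes Dp as the ratio of groove depth h to contact half-width a, under the assumption (ours) of a rigid spherical pin whose load is carried at the material hardness over the projected contact circle. The exact formulation in [3] may differ.

```python
import math

def degree_of_penetration(load_n, hardness_pa, tip_radius_m):
    """Illustrative estimate of Dp = h / a for a rigid spherical pin.

    Assumptions (ours, not from the paper): the load is carried at the
    material hardness over the projected contact circle, W = H * pi * a**2,
    and the groove depth follows from the sphere geometry,
    h = R - sqrt(R**2 - a**2).
    """
    a = math.sqrt(load_n / (math.pi * hardness_pa))  # contact half-width [m]
    if a >= tip_radius_m:
        raise ValueError("contact wider than tip radius; model not applicable")
    h = tip_radius_m - math.sqrt(tip_radius_m**2 - a**2)  # penetration depth [m]
    return h / a

# Example: 35.31 N on a surface of ~2 GPa hardness with a 0.5 mm tip radius
print(degree_of_penetration(35.31, 2.0e9, 0.5e-3))  # ~0.08, ploughing regime
```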
Experimentation
Experimentation was done using a dry sand abrader test rig. The dry sand wheel abrader conforms to ASTM G-65 standards. A schematic view of the dry sand abrader is shown in figure 2.
Figure 2. Schematic view of dry sand abrasion test rig.
Three levels of load, varying abrader sizes, constant speed, and constant flow rate were maintained during the entire test. The test parameters followed during the experimentation are tabulated in table 1. The test involves loading the target specimen in the slot provided in the test rig. Accounting for the leverage loss, the normal load was determined for three different levels. The normal load was applied so that the chlorobutyl rubber wheel was in contact with the target specimen. The wheel was allowed to rotate at a speed of 200 revolutions per minute. The standard size sand abrader was sieved and filled into the hopper of the test rig. A sand flow rate of 100 grams/minute was maintained during the test. Each test was conducted for 8 minutes. The target specimen was weighed before and after each test, and the weight loss of the target specimen was recorded as the difference between the initial and final weights.
Results and Discussions
The weight loss of the target material was calculated by weighing the specimen before and after experimentation, and the weight differences of the target test samples for each normal load and abrader size are tabulated in table 2. A weight loss of 0.004, 0.012 and 0.021 was found for the three normal loads with an abrader size of 106 microns. Similarly, a weight loss of 0.005, 0.012 and 0.020 was found for the same normal loads with an abrader size of 212 microns, and a weight loss of 0.007, 0.022 and 0.023 for an abrader size of 425 microns. The variation of the volume loss of the target specimen subjected to the three different sand abraders is plotted in figure 3. The green curve shows the weight loss of the target specimen when the normal load was 59 N, for the three abrader sizes of 106 microns, 212 microns and 425 microns. In order to understand the deformation morphology of the abraded specimens, a scanning electron micrographic study was carried out and is discussed below. Figure 5 shows scanning electron micrographs of the target specimen subjected to a normal load of 11.77 N and the three abrader sizes. The micrographs in figures 5 (a) and 5 (b) each show one groove, but these grooves are not well defined. Scratches were found in both micrographs 5 (a) and 5 (b), but these scratches are also not clearly defined. Many grooves, though not well defined, were found in the micrograph shown in figure 5 (c). Figure 6 shows micrographs scanned by scanning electron microscope (SEM) when the target specimen was subjected to a normal load of 35.31 N and the three different abrader sizes. Two grooves were found in the micrographs shown in figures 6 (a) and 6 (c). No well-defined grooves, but many scratches, were found in the micrograph in figure 6 (b). Figure 7 shows micrographs scanned by SEM when the target specimen was subjected to a normal load of 58.85 N and the three different abrader sizes. In the micrograph shown in figure 7 (b), no well-defined grooves were found, but a number of well-defined scratches were observed. For a better understanding, the variation of weight loss with different normal loads is shown in figures 8 to 13; the wear scars were scanned by scanning electron microscope.
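Since the paper compares weight losses directly, a quick conversion to volume loss (as is customary for dry-sand/rubber-wheel tests) makes the comparison density-independent. The sketch below assumes the reported losses are in grams and takes a nominal En 31 density of about 7.81 g/cm^3; both assumptions are ours.

```python
# Convert the measured mass losses to volume losses, as is customary for
# dry-sand/rubber-wheel comparisons: V [mm^3] = 1000 * dm [g] / rho [g/cm^3].
RHO_EN31 = 7.81  # g/cm^3 -- nominal bearing-steel density, assumed here

mass_loss_g = {  # abrader size [microns] -> losses at the three normal loads
    106: [0.004, 0.012, 0.021],
    212: [0.005, 0.012, 0.020],
    425: [0.007, 0.022, 0.023],
}

for size, losses in mass_loss_g.items():
    volumes = [round(1000 * dm / RHO_EN31, 2) for dm in losses]
    print(f"{size} um abrader: volume losses {volumes} mm^3")
```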
The micrographic study of wear scars for the target specimen subjected to different normal load levels with an abrader size of 106 microns is shown in figure 8. The corresponding study for an abrader size of 212 microns is shown in figure 9. The micrographs shown in figures 9 (a), 9 (b) and 9 (c) do not show any grooves, but many scratches are found.
The micrographic study of wear scars for the target specimen subjected to different normal load levels with an abrader size of 425 microns is shown in figure 10. Micrographic study at the lower magnification of 500 X could not reveal features that would explain the weight loss with variations in abrader size. A further study was therefore carried out at a magnification level of 1500 X to understand the inelastic deformation of the target specimen in detail. The micrographic study of wear scars for the target specimen subjected to a normal load of 11.77 N and the three different abraders is shown in figure 11. Figure 12 shows the scanning electron micrograph study of the wear track on En 31 at a normal load of 35.31 N and different grades of abraders. The micrograph in figure 12 (a) shows a narrow and discontinuous groove running across the length. The micrograph in figure 12 (b) shows more numerous, shallower, non-continuous grooves compared to the features observed in figure 12 (a). The micrograph in figure 12 (c) shows a groove which is wider and deeper, with poorly defined ridges running across the length; at a few places, partially chipped metallic material is observed. Figure 13 shows the scanning electron micrograph study of the wear track on En 31 at a normal load of 58.85 N and different grades of abraders.
Conclusions
The wear was found to be a consequence of deformation. No significant inelastic deformation was found due to the influence of abrader size, whereas significant deformation was produced by the normal load. The wear occurred in different modes, and no single well-defined mode was found. The deformation features characterize the wear volume losses. The grain size of the abrader was found to have little influence on volume loss compared with the wear loss due to the normal load. Further study is required for materials exhibiting different microstructural features, and it is necessary to correlate the study with the developed theoretical model at different levels of load.
The role of Hippo‐YAP signaling in squamous cell carcinomas
Abstract The Hippo‐YAP pathway regulates organ size, tissue homeostasis, and tumorigenesis in mammals. In response to cell density, external mechanical pressure, and/or other stimuli, the Hippo core complex controls the translocation of YAP1/TAZ proteins to the nucleus and thereby regulates cell growth. Abnormal upregulation or nuclear localization of YAP1/TAZ occurs in many human malignancies and promotes their formation, progression, and metastasis. A key example is squamous cell carcinoma (SCC) genesis. Many risk factors and crucial signals associated with SCC development in various tissues accelerate YAP1/TAZ accumulation, and mice possessing constitutively activated YAP1/TAZ show immediate carcinoma in situ (CIS) formation in these tissues. Because CIS onset is so rapid in these mutants, we propose that many SCCs initiate and progress when YAP1 activity is sustained and exceeds a certain oncogenic threshold. In this review, we summarize the latest findings on the roles of YAP1/TAZ in several types of SCCs. We also discuss whether targeting aberrant YAP1/TAZ activation might be a promising strategy for SCC treatment.
| Pathway elements
Hippo-YAP signaling ( Figure 1) is triggered by changes to extracellular matrix (ECM) rigidity, external mechanical forces, high cell density, adherens junction or integrin-SRC signaling, or G-protein-coupled receptor signaling. When LATS/NDR phosphorylates YAP1/TAZ, these effectors bind to 14-3-3 proteins and are retained in the cytoplasm. Phosphorylated YAP1/TAZ also bind to SCF βTRCP, promoting their degradation and preventing the activation of TEAD transcription factors that drive the expression of cell survival/anti-apoptosis genes (such as CTGF, CYR61, AXL, BIRC5). Components of the tight and adherens junctions (such as AMOT, α-Catenin, PTPN14, Scribble) also bind to phospho-YAP1/TAZ to control their cytoplasmic localization and activity. YAP1/TAZ generally promote cell proliferation in a manner negatively regulated by Hippo signaling.
| Interaction with cancer-related genes
Dysregulation of the Hippo-YAP pathway has been implicated in numerous cancer types. 25 The YAP1/TAZ-TEADs complex directly targets genes encoding growth factors and cytokines, including connective tissue growth factor (CTGF), cysteine-rich angiogenic inducer-61 (CYR61), and tyrosine-protein kinase receptor gene (AXL).
Interestingly, however, even though the activated YAP1 protein itself frequently accumulates in tumors, actual mutation of the YAP1 and WWTR1 (TAZ) genes is uncommon in human cancers, 28 including in SCCs (Table 1).
| Crosstalk with SCC-related pathways
Hippo signaling participates in crosstalk with the WNT/β-catenin, NOTCH, SHH, SRC, TGFβ, PI3K/Akt and other pathways crucial for SSE morphogenesis and homeostasis. 29 Several lines of evidence link this interaction with general SCC carcinogenesis: (1) Activated nuclear YAP1 that is recruited to TCF/LEF binding sites along with β-catenin enhances WNT signaling. Phosphorylated cytoplasmic YAP1 inhibits Disheveled (DVL), supporting β-catenin degradation that suppresses WNT signaling. Conversely, YAP1 mRNA is upregulated by WNT signaling.
(5) YAP1 is directly and indirectly activated by SRC signaling. 31,32 Dasatinib-mediated suppression of SRC prevents YAP1 activation. 33 (6) AKT blocks MST1/2 activity, resulting in YAP activation. 29 (7) Finally, a recent comprehensive study of genetic alterations across a range of SCCs also supports the importance of Hippo/YAP signaling in SCC carcinogenesis. 34
| LINKS BETWEEN HIPPO-YAP SIGNALING AND SCC ONSET
In this section, we discuss recent findings linking Hippo-YAP signaling aberrations to specific SCC subtypes.
| Head-and-neck SCCs
The mammalian head-and-neck region comprises the oral cavity, pharynx, and larynx, all of which are covered with SSE. HNSCC is the 7th most common human cancer worldwide, with tongue cancer being the most frequent subtype. About 15% of HNSCC patients are infected with HPV, making this virus a key cause of HNSCC (although the underlying molecular mechanism is not clear). 35 HPV that integrates into the host cell genome generates the viral E6 and E7 proteins known to target p53 and RB, respectively, compromising tumor suppression. 36 HPV+ HNSCC is found principally in the oropharynx. 35 These patients tend to have better prognoses and some are even cured. 36 The remaining 85% of HNSCC patients that are HPV− have poor prognoses because they do not benefit from either intensified chemo/radiotherapy or current molecular targeting drugs. 36 Several lines of evidence tie YAP1 activation to HNSCC genesis: (1) We generated mice with tongue-specific loss of Mob1a/b that led to constitutive activation of endogenous YAP1 in this tissue. 33 These mutants developed tongue CIS with surprising rapidity and exhibited invasive HNSCC within 4 wk of Mob1a/b deletion.
(2) YAP1 activation can be detected in human precancerous dysplasia in the tongue and is linked to decreased patient survival. 33 (3) Lastly, 8.6% of human HNSCC show amplification of the 11q22 chromosomal region containing the YAP1 locus. 38 Excess YAP1 activity is a feature of such malignant SCCs and leads to a poor prognosis. 38
| Cervical SCCs
Cervical SCCs (CvSCCs) are the 4th most frequent cancer in women worldwide. CvSCC most often arises due to infection with high-risk HPV, but the complete molecular mechanism has yet to be elucidated.
Numerous studies implicate YAP1 in human CvSCC genesis: (1) YAP1 activation or LATS1 inactivation is linked to heightened proliferation and invasiveness of CvSCC cell lines. 39
Mutations of CDKN2A (INK4A/ARF) and PIK3CA/PTEN are also common, as is amplification of tyrosine kinase receptor genes.
| Cutaneous SCCs
Little is known about Hippo/YAP signaling in CuSCCs, but accumulating evidence suggests that YAP1 activation is involved in CuSCC genesis: (1) LATS1 and LATS2 are mutated in 18% and 26% of CuSCCs, respectively. 12
(2) YAP protein levels are elevated in human CuSCCs and correlate with disease progression. 46 (3) YAP promotes human CuSCC cell proliferation in vitro. 47 (4) Tg mice expressing constitutively active YAP have thicker epithelium compared with controls and their immature keratinocytes display increased mitogenic activity. 22 CIS forms in the skin of these mutants starting on day 9 after YAP activation and invasive CuSCC appears in 2-4 wk. Also in these
| Lung SCCs
Lung cancer is the 2nd most common cancer in humans and the most lethal. A lung cancer is classified as either a small-cell lung cancer (SCLC) or a non-small-cell lung cancer (NSCLC). NSCLCs occur more frequently than SCLCs, although SCLCs carry the worse prognosis. About 30% of NSCLCs are lung SCCs (LSCCs). LSCC is essentially incurable due to the current lack of effective treatments.
In a mouse model, the trans-differentiation of LADCs into LSCCs was associated with LKB1 inactivation and led to malignant progression as well as drug resistance. 57 YAP activation levels differ between mouse LADCs and LSCCs, with the former demonstrating YAP1 hyperactivation. Indeed, forced YAP activation largely abolishes the LADC-to-LSCC transition, 58 suggesting that YAP functions may differ between these NSCLC subtypes. Work in mouse models and human LADC cell lines has confirmed that YAP hyperactivation promotes malignant progression of LADC. 59 In striking contrast, YAP activation in LSCCs downregulates GPX2 in a ΔNp63-dependent manner. 60 As a result, excessive intracellular reactive oxygen species (ROS) accumulate and kill the LSCC cells. Analyses of human cancer patient-derived xenograft (PDX) models have shown that higher YAP activation correlates positively with suppression of LSCC growth and negatively with lymph node metastasis. 61 These observations were unexpected, and why YAP activation would promote LADC growth but suppress LSCCs is still unknown. Importantly, these findings indicate that caution should be exercised in administering YAP inhibitors to LSCC patients.
| PUTATIVE MECHANISM BY WHICH HIPPO/YAP SIGNALING DRIVES SCC ONSET AND DEVELOPMENT
The above body of work establishes that MOB1 deletion or YAP1 activation induces CIS formation within 5-9 d in the context of HNSCCs, CvSCCs, and CuSCCs. This timeline is unusually rapid, given that most cancers initiate over a long period of multi-step carcinogenesis in which several mutated genes act additively or synergistically.
However, it should be noted that (a) a CIS diagnosis in these SCCs is […]. When these gene mutations combine with carcinogens, YAP1 activity rises even higher. 33,42 In short, there is substantial evidence supporting our hypothesis that, for many SCCs, the original CIS forms when sustained YAP1 activity exceeds an oncogenic threshold.
[Figure 2 legend: Proposed mechanism for SCC genesis using HNSCC as an example. Both HPV+ HNSCCs and HPV− HNSCCs may be caused by an accumulation of YAP1 activity that is driven by either mutation of the indicated genes, HPV infection, or risk factors such as smoking. When the oncogenic threshold of YAP1 activity is surpassed, transformation ensues. Each rounded rectangle represents a factor promoting HNSCC genesis; the height of each rectangle represents relative YAP1 activation intensity. 33]
Because YAP1 inhibition not only prevents SCC onset but also delays its progression, YAP1 may be an appealing target for molecular therapy of these devastating malignancies.
One possible reason for the frequent and early onset of SCCs may lie in the biology of SSE cells: the proliferation of SSE cells, which are tightly connected to one another, is greatly influenced by cell adhesion, ΔNp63 signaling, and EGF-like ligands. Thus, all 4 of the above mechanisms may in fact contribute to the genesis of most SCCs (except LSCC), perhaps explaining why the phenotype is so strong in SSE cells showing YAP1/TAZ activation.
| CONCLUSION
An oncogenic driver gene for CIS formation in SCCs has been difficult to identify. We propose that, because YAP hyperactivation induces immediate CIS onset in mouse models, YAP must be the key molecule driving CIS with no need for any other gene alteration.
We therefore advance a new concept positing that human CIS can simply initiate when sustained YAP1 activity exceeds an oncogenic threshold. SCCs are generally refractory to conventional treatments, and so remain largely incurable. Thus, targeting the Hippo-YAP pathway may be a promising strategy for suppressing the growth of most SCCs (except LSCCs, for the reasons elaborated above). In addition, our mouse models featuring mutations in the Hippo-YAP pathway currently constitute the world's fastest spontaneous cancer onset models. Cancer progression is synchronized in these mutant animals, and the tumors are easily visualized on the mouse exterior (especially in the HNSCC and CuSCC models). These characteristics are ideal for cancer research and the development of novel anti-cancer drugs, and we humbly take pride in sharing these new tools with the international scientific community to devise fresh approaches to treating human SCCs.
ACKNOWLEDGMENTS
We are grateful for funding provided by the Japan Society for
DISCLOSURE
The authors have no conflicts of interest.
Genome-wide analysis of cytochrome P450 genes in Citrus clementina and characterization of a CYP gene encoding flavonoid 3′-hydroxylase
Abstract Cytochrome P450s (CYPs) are the largest family of enzymes in plant and play multifarious roles in development and defense but the available information about the CYP superfamily in citrus is very limited. Here we provide a comprehensive genome-wide analysis of the CYP superfamily in Citrus clementina genome, identifying 301 CYP genes grouped into ten clans and 49 families. The characteristics of both gene structures and motif compositions strongly supported the reliability of the phylogenetic relationship. Duplication analysis indicated that tandem duplication was the major driving force of expansion for this superfamily. Promoter analysis revealed numerous cis-acting elements related to various responsiveness. RNA-seq data elucidated their expression patterns in citrus fruit peel both during development and in response to UV-B. Furthermore, we characterize a UV-B-induced CYP gene (Ciclev10019637m, designated CitF3′H) as a flavonoid 3′-hydroxylase for the first time. CitF3′H catalyzed numerous flavonoids and favored naringenin in yeast assays. Virus-induced silencing of CitF3′H in citrus seedlings significantly reduced the levels of 3′-hydroxylated flavonoids and their derivatives. These results together with the endoplasmic reticulum-localization of CitF3′H in plant suggest that this enzyme is responsible for the biosynthesis of 3′-hydroxylated flavonoids in citrus. Taken together, our findings provide extensive information about the CYP superfamily in citrus and contribute to further functional verification.
Introduction
The cytochrome P450 (CYP) superfamily constitutes the largest family of enzymes in plants, accounting for about 1% of the protein-coding genes in several flowering plants [1]. All plant CYPs share a common heme-thiolate catalytic center and are membrane-bound, usually anchored on the endoplasmic reticulum (ER). The CYP superfamily catalyzes monooxygenation of a carbon atom to form ketones, alcohols, etc., which contributes to further chemical expansion through O-methylation, O-acylation, O-glycosylation, etc. [2]. Plant CYPs were initially classified into two clades: the A-type (plant-specific) and the non-A-type (non-plant-specific) [3]. Subsequently, the concept of clans (higher-order groupings of CYP families) was introduced into the classification system and is now extensively adopted. The initially defined A-type became clan 71, and the non-A-type comprises the other clans [4]; for example, there are ten clans (clans 51, 72, 74, 85, 86, 97, 710, 711, 727, and 746) belonging to the non-A-type in land plants [1]. CYPs are named by the CYP nomenclature system based on homology and phylogenetic relationships [5]; a typical CYP name includes a number indicating the CYP family and a letter after the number designating the subfamily [6].
Citrus is one of the most important fruit crops worldwide, and citrus fruit, especially the flavedo, is rich in a vast array of primary and secondary metabolites, such as terpenes and flavonoids [18]. CYP genes, the largest enzymatic superfamily, are considered key enzymes in the biosynthesis of these metabolites and play critical roles in the development and defense of citrus. However, only a few CYP genes have been functionally characterized in citrus, including one gene from Citrus unshiu (CYP97C27) encoding a carotenoid epsilon-ring hydroxylase [19], two genes (CYP71CD1 and CYP71BQ4) from Citrus sinensis involved in protolimonoid biosynthesis [20], one gene (CYP82D64) from Citrus paradisi and its orthologous gene from Citrus hystrix functioning as xanthotoxin 5-hydroxylases [21], and one gene (CYP93A65) from Fortunella crassifolia encoding a flavone synthase [22]. Previously, the exon-intron organization, classification and phylogenetic relationships of CYP genes were reported in three citrus species [23]. However, these results were confusing because the CYP genes were not assigned proper CYP names and classifications by the CYP nomenclature system when they were published [23]. Although the names and classification of these genes were corrected later in a statement on the Cytochrome P450 Homepage [5], a more comprehensive and rigorous analysis of the CYP genes in citrus is still indispensable for further research.
In this study, we performed a comprehensive analysis of the CYP superfamily based on the latest version of the Citrus clementina genome [24] and identified 301 genes encoding 319 proteins. The present study covers their phylogenetic relationships, conserved motifs, gene structures, gene duplications, and promoter cis-acting elements, as well as their expression profiles in citrus fruit peel during development and in response to UV-B irradiation. Furthermore, we report the identification and characterization of a UV-B-induced CYP, a flavonoid 3′-hydroxylase (designated CitF3'H), in citrus. Substrate specificity in yeast assays, virus-induced gene silencing (VIGS) and subcellular localization assays together confirmed its role in enhancing the accumulation of 3′-hydroxylated flavonoids in citrus. The current study provides a wealth of valuable information for a better understanding of the CYP genes in citrus and lays the foundation for the functional characterization of plant CYP genes.
The Citrus clementina genome contains 301 CYP genes
Based on the latest version of the C. clementina genome (v1.0), a total of 319 protein sequences were identified as CYP superfamily members using a combination of hmmsearch, local BLASTP search and domain verification approaches (Fig. 1; Fig. S1 and Table S1, see online supplementary material). These CYP proteins were eventually found to be encoded by 301 genes due to alternative splicing events. Analysis of physical and chemical properties showed that the characteristics of the 319 citrus CYP proteins varied widely, with protein lengths (number of amino acids) ranging from 303 to 621, molecular weights ranging from 34.07 to 71.09 kDa and theoretical isoelectric points ranging from 4.91 to 9.69. Prediction of subcellular localization showed that most of the citrus CYP proteins (292 of 319) were localized in the endomembrane system, followed by the organelle membrane (22 of 319). Other properties of the CYPs, including the instability index, aliphatic index, and grand average of hydropathicity, are also provided in Table S1 (see online supplementary material).
Phylogenetic relationship of CYPs in citrus
To analyse the phylogenetic relationships of CYPs from C. clementina, an unrooted ML tree was inferred from a trimmed alignment of the 319 citrus CYP protein sequences. These citrus CYPs were further assigned to specific families and clans based on the systematic names designated by the CYP nomenclature system [5] (Fig. 1; Tables S1 and S2, see online supplementary material). As a result, the family and clan assignments of the citrus CYPs matched the phylogenetic clades well, which in turn indicates the reliability of the phylogenetic tree.
A total of ten CYP clans (49 families) were recognized in citrus, and clans could be further divided into two distinct clades (A-type and non-A-type) based on the phylogenetic tree ( Fig. 1) [3]. Clan 71 belonged to the A-type, while the other nine clans all belonged to the non-A-type. Among the non-A-type, clans 72, 86, and 97 were grouped into one cluster; clans 51, 85, 710, 711, and 727 formed another cluster; clan 74 constituted a single-clan cluster. Clans 71, 72, 85, and 86 were multi-family clans and included 20 families (211 genes), seven families (23 genes), 12 families (33 genes), and four families (23 genes), respectively. The remaining six clans (clans 51, 74, 97, 710, 711, and 727) were single-family clans and up to three genes were included in each clan.
Conserved motifs and gene structures of CYPs in citrus
A total of 15 conserved motifs were identified in citrus CYP proteins using MEME software (Fig. 2A and B; Fig. S2 and S3, see online supplementary material). In general, the composition of these motifs showed considerable divergence between the A-type and non-A-type CYPs, while similar patterns were found within the same CYP clan. All 15 motifs were recognized in the A-type CYPs, while fewer motifs (5 to 13) were recognized in the non-A-type CYPs (Fig. 2A). The majority of motifs were conserved in citrus CYPs, including nine motifs located on the C-terminal (motifs 1, 2, 3, 6, 7, 8, 10, 12, and 14) and two motifs (4 and 9) located on the N-terminal. Of the 11 conserved motifs, five (motifs 1, 2, 6, 9, and 14) contained functionally characterized domains (Fig. 2B) [25,26]. Motif 1 contained the core catalytic center, the heme-binding motif (FxxGxRxCxG), in which the cysteine (C) is the axial ligand to the thiolate heme; motifs 2 and 14 contained the K-helix motif (ExxR) and PERF motif (PxRx), respectively. The E and R residues of the K-helix and the R residue of the PERF motif form the E-R-R triad, which is thought to stabilize the highly conserved three-dimensional structure; motif 6 contained the consensus (A/G)Gx(E/D)T(T/S) of the I-helix motif, which is involved in oxygen binding and activation; motif 9 contained a proline-rich region with the consensus (P/I)PGPx(P/G)xP, a region considered to act as a membrane hinge crucial for correct orientation of the CYP enzyme to the membrane. Nevertheless, the remaining four motifs (motifs 5, 11, 13, and 15), which were located on the N-terminal, were not conserved in all citrus CYPs. The absence of these motifs was common in the non-A-type CYPs. One of these motifs has been functionally elucidated: motif 5 contained the C-helix motif (WxxxR), whose W and R residues contribute to the interaction with a propionate side chain of the heme [27].
The organizations of exons, introns, coding sequences (CDS) and untranslated regions (UTR) were also summarized to better understand the structure of citrus CYP transcripts (Fig. 2C; Fig. S4, see online supplementary material). The CDS-UTR composition was highly variable in the non-A-type CYPs compared with that of the A-type CYPs, whereas the gene structure within the same clan or family was similar. Taken together, the patterns of conserved motifs and gene structures varied considerably between the A-type and non-A-type CYPs; however, similar patterns were observed within the same clan or family, which enhances the credibility of the phylogenetic relationships and group classification.
Citrus CYP genes exhibit prevalent gene duplication events
To investigate gene duplication events in citrus CYPs, collinearity and gene duplication analyses within the C. clementina genome were carried out using the MCScanX algorithm. First, the origins of duplicated CYP genes were classified into four duplication events (tandem, proximal, WGD/segmental and dispersed). The majority of CYP genes (41.2%, 124 out of 301) were duplicated from the tandem event, compared with 32.2% (97) from dispersed, 15.9% (48) from proximal and 10.6% (32) from WGD/segmental duplication (Table S3, see online supplementary material). These results showed that tandem duplication seemed to be the major driving force for the expansion of this superfamily in citrus.
Analysis of cis-acting elements in the promoter region of citrus CYPs
The 2000-bp region upstream of the initiation codon (ATG) of each citrus CYP transcript was regarded as the promoter sequence in this study and subjected to the PlantCARE database for the prediction of cis-regulatory elements. The predicted cis-acting elements other than the core elements could be classified into four broad categories based on their responsiveness: development, stress, hormone and light responsiveness (Table S5, see online supplementary material). There was no obvious difference among clans; almost all of the citrus CYPs possessed all four types of cis-acting elements analysed above. Light-responsive elements were the most prevalent in the promoter regions of citrus CYPs (Fig. 4). A total of 35 light-responsive elements were predicted, and the five elements with the highest frequency were as follows: Box 4 (95.6%, 305 out of 319), G-box (85.2%), GT1-motif (63.6%), TCT-motif (57.7%), and GATA-motif (51.7%). Hormone-responsive elements were also detected, including the most frequently occurring (265 out of 319) abscisic acid (ABA)-responsive element (ABRE), the auxin-responsive elements (e.g. TGA-element and AuxRR-core), the methyl jasmonate (MeJA)-responsive element (TGACG-motif), the gibberellin-responsive elements (e.g. P-box and TATC-box) and the salicylic acid-responsive elements (e.g. TCA-element).
Expression profiling of CYP genes in citrus fruit peel during development and in response to UV-B treatment
Transcriptome data for the flavedo of citrus during developmental stages were analysed [28]. Out of the 319 CYP transcripts, 271 (85.0%) were expressed in at least one developmental stage, with FPKM values ranging from 0.01 to 702.2 (Table S1, see online supplementary material). The expressed CYP genes were further clustered based on their expression patterns using Mfuzz and grouped into nine distinct expression clusters. Each cluster contains a set of genes (22 to 46 members) with similar expression patterns (Fig. S8, Table S1, see online supplementary material).
Light (e.g. UV light) plays a critical role in plant growth and defense induction [29]. As shown in Fig. 4, a large number of light-responsive cis-acting elements were observed in citrus CYPs, and several cis-acting elements (e.g. G-box and MRE) have been reported to be involved in UV-B responsiveness [30,31]. To understand how citrus CYPs respond to UV-B treatment, RNA-Seq data from citrus flavedo directly irradiated with UV-B were analysed [28]. The CYPs could be roughly divided into four groups based on their response to UV-B treatment; group A consisted of 74 CYP transcripts that were up-regulated in the flavedo after UV-B irradiation compared with the control group (Fig. S9, see online supplementary material). These up-regulated members might play important roles in the protective response to UV-B irradiation in citrus.
Identification of a CYP gene encoding a putative flavonoid 3′-hydroxylase in citrus
It has been well documented that plant CYPs include key enzymes in the biosynthesis of UV-B protectants, such as flavonoids [9]. For example, several members belonging to CYP families 71, 75, 82, 93, 98, and 706 have been proven to be vital for flavonoid biosynthesis [22,32-35]. In this study, 17 CYP genes belonging to flavonoid-related families were found to be up-regulated in the flavedo of citrus after irradiation by UV-B (Fig. 5A; Fig. S9, see online supplementary material). Among the 17 up-regulated genes, one CYP gene (Ciclev10019637m), designated CYP75B81 by the CYP nomenclature system, showed the highest expression level at the early stage (S1), when citrus flavonoids are rapidly biosynthesized [28] (Fig. 5B). Moreover, this gene was predicted to encode a citrus flavonoid 3′-hydroxylase (termed CitF3'H) because it belongs to the CYP75B subfamily, most members of which hydroxylate the 3′ position of flavonoids in plants [33]. Therefore, this CYP gene was speculated to be involved in flavonoid biosynthesis both during development and in response to UV-B irradiation. Another up-regulated gene, Ciclev10033591m (CYP71AS15), also exhibited relatively high transcript levels in the flavedo of citrus during development (Fig. 5B). However, the function of the CYP71AS subfamily is still unclear, and yeast assays showed no detectable product when two representative flavonoids (naringenin and apigenin) were used as substrates (Fig. S10, see online supplementary material). Hence, the catalytic activity of this enzyme needs further clarification.
Substrate specificity of CitF3'H in a yeast system
To investigate the substrate specificity of CitF3'H, putative flavonoid substrates (flavanones, flavones, flavonols, and dihydroflavonols) were added to the medium of yeast strains harboring CitF3'H-pYES2/NT C, with the empty vector as a control. The reaction mixtures were analysed using HPLC and MS/MS, and the generated products were identified by comparison with the corresponding authentic standards (Fig. 6; Fig. S11, see online supplementary material).
The flavanones naringenin, sakuranetin, liquiritigenin, pinocembrin, isosakuranetin, and naringenin glycosides (naringin and narirutin) were tested. Naringenin, sakuranetin, and liquiritigenin were all converted to their expected 3′-hydroxylated products by the yeast strains expressing CitF3'H compared with the empty vector (Fig. 6A-C). However, the two naringenin glycosides naringin and narirutin (naringenin 7-O-neohesperidoside and naringenin 7-O-rutinoside) could no longer be catalyzed by this enzyme. Pinocembrin and isosakuranetin, which lack a free hydroxyl group at the 4′ position on the B-ring, could not be catalyzed either (Fig. S12, see online supplementary material).
Relative activities toward different flavonoid substrates were determined in order to investigate the substrate preference of CitF3'H. The results showed that CitF3'H exhibited the highest relative activity (100%) toward naringenin, compared with less than 30% toward the other substrates (sakuranetin, liquiritigenin, apigenin, kaempferol, and dihydrokaempferol) (Fig. 6G). Taken together, CitF3'H is a flavonoid 3′-hydroxylase that prefers the flavanone naringenin over other flavonoids in yeast assays.
Silencing of CitF3'H leads to reduced 3′-hydroxylated flavonoids in citrus
A VIGS system was used to silence CitF3'H in citrus seedlings in order to explore its function in flavonoid hydroxylation in planta. The transcript level of CitF3'H in five positive VIGS lines was significantly reduced, by ∼95%, compared with that of control plants (infiltrated with empty vector) (Fig. 7A). Subsequently, the potential catalytic products of CitF3'H in citrus plants were measured, including two 3′-hydroxylated flavonoids (neoeriocitrin and hesperidin) and four 3′-methoxylated flavonoids (sinensetin, isosinensetin, nobiletin, and 5-hydroxy-6,7,8,3′,4′-pentamethoxyflavone (5-HPMF)). The results showed that the total content of these flavonoids was substantially reduced, by ∼60%, in the VIGS lines compared with that of control plants (Fig. 7B).
Subcellular localization of CitF3'H
The CitF3'H-GFP construct and an mCherry-labeled ER-marker construct were co-expressed in tobacco leaves to visualize the subcellular localization of CitF3'H. The non-targeted empty vector (GFP) displayed diffuse localization throughout the tobacco cell, whereas the green signal of CitF3'H-GFP merged well with the red signal of the mCherry-labeled ER marker in cells co-transformed with CitF3'H-GFP and the ER marker (Fig. 8). These results indicated that CitF3'H is localized in the ER, consistent with its predicted subcellular localization in the endomembrane system (Table S1, see online supplementary material) and in line with the membrane localization of most plant CYPs [26].
Discussion
In this study, we carried out a comprehensive investigation of the CYP gene family in the C. clementina genome, including phylogenetic relationships, conserved motifs, gene structures, gene duplication events, cis-acting elements and expression profiles. A total of 301 CYP genes (319 transcripts) were identified (Table S1, see online supplementary material), of which five CYP genes were newly identified compared with the previous research [23]; this discrepancy was probably caused by genomic database updates. The CYP complement of C. clementina is typical of angiosperms, which generally possess about 300 genes and 50 families. For example, 245 CYP genes (47 families) were identified in A. thaliana [12], 326 CYP genes (45 families) in O. sativa [14], 236 CYP genes (47 families) in V. vinifera [15] and 317 CYP genes (48 families) in G. max [17].
The citrus CYP families were further grouped into ten clans, and the four multi-family clans (clans 71, 72, 85, and 86) remained the four largest, contributing 96% of the total number of CYP genes in citrus (Fig. 1). Similar to most angiosperms, which underwent bursts of gene duplication to match various adaptive requirements, intensive gene duplications within the four CYP clans were also observed in citrus (Fig. 3; Table S4, see online supplementary material) [2,26,36]. A total of 20 CYP gene pairs of segmental duplication were observed in citrus. These genes were inferred to have arisen from the ancient triplication WGD (γ event) during evolution, as there were no recent WGDs in citrus apart from the γ event [37]. In addition to the segmental duplication events, more tandem duplication events (82 gene pairs) were observed, suggesting that the expansion of CYP genes in citrus occurred mainly through tandem duplication. These findings are consistent with grapevine, where most CYP genes also arose through tandem duplications [15].
Plant CYPs are divided into two types: the A-type (clan 71) and the non-A-type (the other clans) [4]. The non-A-type CYPs are more ancient than the A-type CYPs and are considered to have had more time to undergo rearrangement and gene duplication, resulting in a more divergent structural organization than the A-type CYPs [38]. Likewise, among the CYP genes in citrus, the non-A-type CYPs tended to be more divergent than the A-type in terms of gene structures and conserved motifs (Fig. 2). Despite the great variation between the two types of CYPs, five motifs appeared to be conserved in both CYP types in citrus: the proline-rich region (motif 9), the I-helix motif (motif 6), the K-helix motif (motif 2), the PERF motif (motif 14), and the heme-binding motif (motif 1). These motifs have been suggested to be vital for the catalytic function of the CYP enzyme [25-27]. However, four motifs (5, 11, 13, and 15) on the N-terminal were absent in most of the non-A-type CYPs (Fig. 2A and B). These four motifs were common in the A-type CYPs and were assumed to play an important role in the A-type CYPs. Taken together, these structural diversifications in citrus CYPs may lead to a wide range of substrate specificities, resulting in varying physiological activities.
Much evidence suggests that plant CYPs participate in various biochemical pathways and play important roles in multiple biological processes, including development and stress responses [2,7,8]. The presence of various cis-acting elements in the promoter regions of citrus CYP genes also suggests that citrus CYPs are capable of responding to many perturbations in planta, especially light irradiation, because light-responsive elements were the most prevalent among the citrus CYPs (Fig. 4). The expression profiles of citrus CYPs were analysed in the flavedo both during development and in response to UV-B irradiation, and 17 up-regulated CYP genes belonged to families previously implicated in flavonoid biosynthesis (CYP71, 75, 82, 93, 98, and 706) in plants [22,32-35]. Therefore, these 17 CYP genes were speculated to be involved in the enhanced accumulation of UV-absorbing flavonoids. Of the 17 CYP genes, one gene (Ciclev10019637m, termed CYP75B81) showed the highest expression level at the early stage (S1) of citrus fruit development, when flavonoids are rapidly biosynthesized (Fig. 5). Therefore, this gene was probably involved in flavonoid biosynthesis both during development and in response to UV-B irradiation. Furthermore, this gene was predicted to be a putative flavonoid 3′-hydroxylase in citrus (CitF3'H) because most members of the CYP75B subfamily catalyze the hydroxylation of flavonoids at the 3′-position [33]. Hesperidin, neoeriocitrin, sinensetin, isosinensetin, nobiletin, and 5-HPMF, the representative flavonoids in citrus, are all derived from 3′-hydroxylated flavonoids (e.g. eriodictyol and luteolin) [39]. Silencing of CitF3'H in citrus seedlings resulted in a significant reduction in the content of these flavonoids, of which hesperidin (a hesperetin glycoside), the most abundant flavonoid in citrus, decreased the most, by ∼69% (Fig. 7B). Heterologous expression in yeast confirmed that CitF3'H could catalyze the 3′-hydroxylation of different types of flavonoids and preferred the flavanone naringenin, yielding its 3′-hydroxylation product (eriodictyol) (Fig. 6). This substrate specificity was consistent with the decrease of 3′-hydroxylated flavonoids and their derivatives after silencing of CitF3'H in citrus seedlings. The ER localization of CitF3'H is also consistent with its catalytic function in planta. Hence, CitF3'H is a canonical flavonoid 3′-hydroxylase in citrus, like most members of the CYP75B subfamily in other plants.
CitF3'H has 99.22% amino acid identity to its ortholog (Cs5g11730.1) from C. sinensis, the latter having a 22-amino-acid deletion in the N-terminal region compared with the former (Fig. S13, see online supplementary material). According to a previous study, Cs5g11730.1 is a drought-induced gene in citrus and can confer drought tolerance in transgenic A. thaliana by enhancing the accumulation of antioxidant flavonoids; however, its catalytic function had not been characterized [40]. Taking this evidence together, CitF3'H and Cs5g11730.1 most probably share the same catalytic function, acting as F3'Hs in citrus, and are likely involved in the biosynthesis of 3′-hydroxylated flavonoids both during development and in response to stresses such as UV light and drought.
It is well known that plant F3'Hs usually belong to the CYP75B subfamily [33]. CitF3'H was the sole gene of the CYP75B subfamily in citrus (Table S1, see online supplementary material), making it the key gene responsible for flavonoid 3′-hydroxylation in citrus fruit peel. However, one gene (CYP98A9) from A. thaliana was found to have acquired an additional F3'H activity compared with other members of the CYP98A subfamily [35]. Therefore, the possibility that the CYP98A subfamily contributes to F3'H activity in citrus cannot be ignored.
In summary, our work provides a thorough genome-wide analysis of the CYP gene superfamily in the C. clementina genome, identifying 301 CYP genes encoding 319 proteins, which were classified into A-type and non-A-type, comprising ten clans grouped into 49 families. We revealed similar exon-intron organizations and motif compositions within the same clan and family, as well as great divergence between the A-type and non-A-type CYPs, which strongly supports the reliability of the phylogenetic relationship. Meanwhile, we demonstrated that frequent duplication events occurred in this CYP superfamily and that tandem duplication might have been the major driving force for its rapid expansion. Moreover, our results revealed a wide range of cis-acting elements in the promoters of CYP genes and elucidated their expression patterns both during development and in response to UV-B. Furthermore, we identified a UV-B-induced CYP gene (Ciclev10019637m, designated CitF3'H) as a flavonoid 3′-hydroxylase for the first time. CitF3'H catalyzed the 3′-hydroxylation of a wide range of flavonoids, preferred naringenin in yeast cells, and was localized in the ER like most CYPs; these results, together with the decline in 3′-hydroxylated flavonoids and their derivatives in citrus seedlings after silencing of CitF3'H, support the conclusion that CitF3'H is responsible for the biosynthesis of 3′-hydroxylated flavonoids in citrus. These findings are useful for comprehensively understanding the CYP superfamily in citrus and will facilitate the functional characterization of CYP genes in planta.
Identification of CYPs in Citrus clementina
The genome of Citrus clementina (v1.0) was downloaded from Phytozome v13 (https://phytozome-next.jgi.doe.gov/info/Cclementina_v1_0). To identify putative CYP genes in C. clementina, the Hidden Markov Model (HMM) profile (p450.hmm) corresponding to the conserved domain of CYPs (PF00067) was downloaded from PFAM 35.0 (http://pfam.xfam.org/) and used as the query for hmmsearch against the C. clementina protein sequences using HMMER 3.2.1 (e-value = 0.1) (http://hmmer.org/). In parallel, a local BLASTP search against the C. clementina protein database was conducted using amino acid sequences of CYPs from A. thaliana, collected from the Cytochrome P450 Homepage [5], as queries (e-value = 1e-5). The obtained sequences with protein lengths ranging from 300 to 650 amino acids were further verified via the NCBI Conserved Domain Database tool [41]. Finally, a total of 319 transcripts (301 genes) were identified as CYP members in C. clementina.
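This two-pronged identification step (HMM profile search plus homology search, followed by a length filter) can be sketched in a few lines of Python. The sketch below is illustrative only: it assumes HMMER3 and BLAST+ are installed, and all file names (p450.hmm, cclementina_proteins.fa, ath_cyps.fa) are placeholders, not the authors' actual paths.

```python
"""Minimal sketch of the CYP candidate identification pipeline."""
import subprocess
from Bio import SeqIO

# 1) hmmsearch against the PF00067 profile (e-value cutoff 0.1, as in the text)
subprocess.run(["hmmsearch", "--tblout", "hmm_hits.tbl", "-E", "0.1",
                "p450.hmm", "cclementina_proteins.fa"], check=True)

# 2) local BLASTP with A. thaliana CYPs as queries (e-value cutoff 1e-5)
subprocess.run(["makeblastdb", "-in", "cclementina_proteins.fa",
                "-dbtype", "prot", "-out", "cclem_db"], check=True)
subprocess.run(["blastp", "-query", "ath_cyps.fa", "-db", "cclem_db",
                "-evalue", "1e-5", "-outfmt", "6 sseqid",
                "-out", "blast_hits.txt"], check=True)

hmm_ids = {line.split()[0] for line in open("hmm_hits.tbl")
           if not line.startswith("#")}
blast_ids = {line.strip() for line in open("blast_hits.txt") if line.strip()}

# 3) union of both searches, then the 300-650 aa length filter; the survivors
#    would go to manual domain verification with the NCBI CDD tool
candidates = hmm_ids | blast_ids
kept = [rec for rec in SeqIO.parse("cclementina_proteins.fa", "fasta")
        if rec.id in candidates and 300 <= len(rec.seq) <= 650]
SeqIO.write(kept, "cyp_candidates.fa", "fasta")
print(f"{len(kept)} candidate CYP proteins retained for CDD verification")
```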
Various physical and chemical parameters of CYPs, including the number of amino acids, molecular weight, theoretical isoelectric points, instability index, aliphatic index, and grand average of hydropathicity, were calculated by ProtParam tool embedded in ExPASy (https://web.expasy.org/protparam/). Additionally, BUSCA web-server was used for predicting the subcellular localization of CYPs (http://busca.biocomp.unibo.it/).
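For readers wishing to reproduce the physicochemical summary programmatically, the same parameters can be approximated with Biopython's ProtParam module, as in the sketch below. The aliphatic index is not built into Biopython, so it is computed here from the Ikai (1980) formula; treat the output as an approximation of what the ExPASy ProtParam server reports.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

def cyp_properties(seq: str) -> dict:
    """Physicochemical parameters for one CYP protein sequence."""
    pa = ProteinAnalysis(seq)
    frac = pa.get_amino_acids_percent()  # residue fractions (0-1)
    # Aliphatic index = X(Ala) + 2.9*X(Val) + 3.9*(X(Ile) + X(Leu)),
    # with X in mole percent (Ikai 1980)
    aliphatic = 100 * (frac.get("A", 0) + 2.9 * frac.get("V", 0)
                       + 3.9 * (frac.get("I", 0) + frac.get("L", 0)))
    return {
        "length": len(seq),
        "mw_kda": pa.molecular_weight() / 1000,
        "pI": pa.isoelectric_point(),
        "instability_index": pa.instability_index(),
        "aliphatic_index": aliphatic,
        "gravy": pa.gravy(),  # grand average of hydropathicity
    }
```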
Phylogenetic analysis, conserved motifs, and gene structures
The protein sequences of citrus CYPs were aligned using MUSCLE [42], and the poorly aligned regions were automatically removed using trimAl [43]. Based on the trimmed alignments, a maximum likelihood (ML) tree was constructed with IQ-TREE [44] and was evaluated with the UltraFast Bootstrap method (5000 bootstrap replicates). Subsequently, the phylogenetic tree was visualized and annotated with iTOL (https://itol.embl.de/).
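The align-trim-infer pipeline can be driven from Python via subprocess, as sketched below. The exact command-line flags are assumptions tied to particular tool versions (MUSCLE v5, trimAl 1.4, IQ-TREE 2) and may differ from what the authors ran; trimAl's -automated1 mode is one common way to remove poorly aligned regions automatically.

```python
import subprocess

# Align, trim, and infer the ML tree (illustrative file names)
subprocess.run(["muscle", "-align", "cyps.fa",
                "-output", "cyps_aln.fa"], check=True)
subprocess.run(["trimal", "-in", "cyps_aln.fa", "-out", "cyps_trim.fa",
                "-automated1"], check=True)       # drop poorly aligned regions
subprocess.run(["iqtree2", "-s", "cyps_trim.fa", "-m", "MFP",
                "-B", "5000"], check=True)        # UltraFast bootstrap, 5000 reps
```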
Conserved motifs of CYPs were identified using MEME (https:// meme-suite.org/meme/tools/meme) with the following parameters: number of motifs to find = 15; min motif width = 6; and max motif width = 50. Gene structures of CYPs, including their exon, intron, CDS, and UTR were obtained from the C. clementina genome annotation file (GFF3 format). Finally, the conserved motifs and gene structures were visualized using TBtools [45], respectively.
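The MEME invocation corresponding to the stated parameters might look like the following; protein mode is assumed here because the inputs are CYP protein sequences.

```python
import subprocess

# MEME with the parameters given in the text (number of motifs = 15,
# min width = 6, max width = 50); output goes to the meme_out directory
subprocess.run(["meme", "cyp_candidates.fa", "-protein",
                "-nmotifs", "15", "-minw", "6", "-maxw", "50",
                "-oc", "meme_out"], check=True)
```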
Collinearity analysis and gene duplication
Collinearity relationships and gene duplication events among citrus CYPs were analysed using the Multiple Collinearity Scan toolkit (MCScanX) with default parameters [46]. The syntenic blocks and duplicated CYP gene pairs (tandem and segmental duplications) were visualized with shinyCircos [47]. The gene density profile generated by TBtools was also viewed with shinyCircos to display the genome-wide gene density distribution. Non-synonymous (Ka) and synonymous (Ks) substitution rates of duplicated CYP gene pairs were calculated using the Simple Ka/Ks Calculator function implemented in TBtools, and the Ka/Ks ratio was used to estimate the selective strength. Additionally, the gene loci of citrus CYPs were mapped to the scaffolds of the C. clementina genome using TBtools [45].
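As context for the Ka/Ks statistic: a ratio below 1 indicates purifying selection, a ratio of about 1 indicates neutral evolution, and a ratio above 1 indicates positive selection. A small helper illustrating this interpretation (the numbers in the example are illustrative, not from the paper):

```python
def selection_mode(ka: float, ks: float) -> str:
    """Interpret the Ka/Ks ratio of a duplicated gene pair."""
    if ks == 0:
        return "undefined (no synonymous substitutions)"
    ratio = ka / ks
    if ratio < 1:
        return f"purifying selection (Ka/Ks = {ratio:.2f})"
    if ratio > 1:
        return f"positive selection (Ka/Ks = {ratio:.2f})"
    return "neutral evolution (Ka/Ks = 1.00)"

print(selection_mode(0.12, 0.85))  # -> purifying selection (Ka/Ks = 0.14)
```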
Cis-acting element analysis in promoters
The 2000-bp region upstream of the initiation codon (ATG) of each citrus CYP transcript was regarded as the promoter sequence. The promoter sequences were extracted using TBtools and subjected to PlantCARE (https://bioinformatics.psb.ugent.be/webtools/ plantcare/html/) for the prediction of cis-acting regulatory sites. The cis-acting elements involved in development, stress, hormone, and light responsiveness were visualized and summarized using TBtools and GraphPad Prism version 7 (GraphPad Software, San Diego, CA, USA).
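Extracting such promoter regions is a simple coordinate operation; the sketch below shows one way to do it with Biopython, assuming 1-based GFF3 coordinates and illustrative file names. TBtools performs the equivalent step internally.

```python
from Bio import SeqIO

# Illustrative genome file; coordinates below follow GFF3 (1-based, inclusive)
genome = {rec.id: rec.seq
          for rec in SeqIO.parse("cclementina_genome.fa", "fasta")}

def promoter(chrom: str, gene_start: int, gene_end: int,
             strand: str, size: int = 2000) -> str:
    """Return the `size`-bp region upstream of the start codon (ATG).

    For '+' strand genes this is the sequence just before gene_start;
    for '-' strand genes it lies beyond gene_end and is reverse-complemented.
    """
    seq = genome[chrom]
    if strand == "+":
        return str(seq[max(0, gene_start - 1 - size):gene_start - 1])
    return str(seq[gene_end:gene_end + size].reverse_complement())
```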
Gene expression analysis using RNA-Seq
In our previous study, the fruit peel of the 'Ougan' cultivar (Citrus reticulata cv. Suavissima) was used as material, and transcriptome changes in the flavedo during developmental stages (S1, S3, S5, and S7) and in response to UV-B treatment were analysed using RNA-Seq [28]. In the present study, expression data for citrus CYP genes were retrieved from these transcriptomic data and processed as FPKM (fragments per kilobase of exon per million fragments mapped) values. Genes were clustered based on their expression levels using Mfuzz (cluster number = 9) [48]. Expression heatmaps were drawn using TBtools [45].
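For reference, FPKM normalizes a transcript's fragment count by both transcript length and sequencing depth. A minimal implementation with a worked example:

```python
def fpkm(fragments: int, exon_length_bp: int, total_mapped_fragments: int) -> float:
    """FPKM = fragments * 1e9 / (exon length in bp * total mapped fragments)."""
    return fragments * 1e9 / (exon_length_bp * total_mapped_fragments)

# e.g. 500 fragments on a 2-kb transcript in a library of 20 million fragments
print(fpkm(500, 2000, 20_000_000))  # -> 12.5
```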
Protein expression and enzyme assays in a yeast system
The full-length coding sequences of CitF3'H (Ciclev10019637m) and Ciclev10033591m, without the termination codon, were cloned into the pYES2/NT C vector using the primers listed in Table S6 (see online supplementary material). Recombinant constructs or the empty vector were transformed into yeast strain INVSc1 (Saccharomyces cerevisiae) via the Quick and Easy Yeast Transformation Mix kit (Takara, Dalian, China). The transgenic yeast cells were initially cultured in 10 mL of synthetically defined medium lacking uracil (SD-Ura) liquid medium (Takara, Dalian, China) supplemented with 2% glucose at 30 °C for 24 h with shaking at 200 rpm. Yeast cells were then harvested by centrifugation at 1500 × g for 5 min before being resuspended in an equal volume (10 mL) of induction medium (SD-Ura liquid medium containing 2% galactose). Flavonoid substrates were added to the cultures (incubated to OD600 = 1.2) at a final concentration of 20 μM. After 24 h of incubation, the cultures were extracted twice with an equal volume of ethyl acetate, and 8 mL of the upper organic phase was dried and redissolved in 200 μL methanol for HPLC or HPLC-MS/MS. The relative activity of CitF3'H was measured by scaling up the procedure described above. Specifically, large-scale induced yeast cultures (200 mL) were prepared for the determination of enzyme activity toward various flavonoid substrates. For each substrate, 5 mL of the induced yeast culture was set as one replicate, and a total of three replicates were used in enzyme assays. For substrate bias evaluation, the enzyme activity was evaluated while keeping the conversion of substrate below 10% of the total substrate. In detail, after 10 h of incubation with the corresponding substrate, the cultures were extracted twice with an equal volume of ethyl acetate, and 6 mL of the organic phase was dried and redissolved in 200 μL methanol for HPLC analysis. The measured velocity was thus close to the true initial velocity and could be used to calculate the enzyme activity.
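The relative-activity calculation can be made explicit as follows. This sketch assumes equal HPLC response factors for substrate and product (a simplification not stated in the text) and normalizes each substrate's conversion to that of naringenin, which is set to 100%:

```python
def conversion(product_area: float, substrate_area: float) -> float:
    """Fraction of substrate converted, estimated from HPLC peak areas
    (equal response factors assumed for substrate and product)."""
    return product_area / (product_area + substrate_area)

def relative_activity(conversions: dict) -> dict:
    """Scale each substrate's conversion to naringenin (= 100%).
    Valid only while conversion stays <10%, i.e. near initial velocity."""
    ref = conversions["naringenin"]
    return {substrate: 100 * c / ref for substrate, c in conversions.items()}

# Illustrative peak-derived conversions, not measured values:
print(relative_activity({"naringenin": 0.08, "apigenin": 0.02}))
```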
Virus-induced gene silencing in citrus
Gene-silenced plants were generated via tobacco rattle virus (TRV)-based VIGS as described previously [49]. Germinating seeds of Ponkan (a citrus cultivar with abundant 3′-hydroxylated flavonoids) were subjected to infiltration. A 326-bp fragment of the CitF3'H CDS was cloned into the TRV2 vector to invoke efficient gene silencing by agroinoculation. Agrobacterium (EHA105) cultures (OD600 = 1.2) carrying TRV1 and TRV2 were centrifuged, resuspended in an equal volume of infiltration buffer (10 mM 2-(N-morpholino)ethanesulfonic acid, 10 mM MgCl2, 150 μM acetosyringone, pH 5.6) and mixed at a 1:1 (vol/vol) ratio. These Agrobacterium suspensions were infiltrated into germinating seeds with sprouts ∼1 cm in length using a SHB-IIIA vacuum chamber (−100 kPa, 1 min) (Shanghai Yukang Science and Education Equipment Co., Ltd, Shanghai, China). Infiltrated sprouts were rinsed with water and sown in Murashige & Tucker (MT) solid medium (PhytoTechnology Laboratories, Shawnee Mission, KS, USA) in darkness for three days, followed by growth in soil in a growth chamber (Zhejiang Qiushi Artificial Environment Co., Ltd, Hangzhou, China) for one month. Plants co-inoculated with TRV1 and the empty TRV2 vector were used as the vector control. Aerial parts of each plant were sampled for further analysis. Primers used in the construction of CitF3'H-TRV2 are listed in Table S6 (see online supplementary material).
RNA isolation and qRT-PCR
For the samples in the VIGS experiment, total RNA isolation and cDNA synthesis were performed as described previously [28]. Real-time quantitative reverse transcription-PCR (qRT-PCR) of CitF3'H was carried out on a Bio-Rad CFX96 instrument (Bio-Rad, Hercules, CA, USA) with a TB Green Premix Ex Taq (Tli RNaseH Plus) kit (Takara, Dalian, China) according to the manufacturer's instructions. The relative expression of CitF3'H was calculated using the 2^(−ΔΔCt) method, with the citrus β-actin gene as the housekeeping gene [50]. Primers used for qRT-PCR are shown in Table S6 (see online supplementary material).
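The 2^(−ΔΔCt) calculation is a one-liner; the example below uses illustrative Ct values chosen so that the result matches the ~95% knockdown reported for the VIGS lines:

```python
def fold_change(ct_target: float, ct_actin: float,
                ct_target_ctrl: float, ct_actin_ctrl: float) -> float:
    """Relative expression by the 2^(-ΔΔCt) method, β-actin as reference."""
    ddct = (ct_target - ct_actin) - (ct_target_ctrl - ct_actin_ctrl)
    return 2.0 ** (-ddct)

# Illustrative Ct values for a CitF3'H VIGS line vs empty-vector control:
print(fold_change(28.3, 18.0, 24.0, 18.0))  # ~0.05, i.e. ~95% silencing
```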
Metabolite analysis
HPLC and MS/MS analyses were performed to analyse metabolites in enzyme assays and plant materials. For the latter, samples were prepared as described previously [28].
HPLC analysis was performed on a Waters 2695 HPLC system (Waters Corp, Milford, MA, USA) equipped with a Sunfire C18 ODS column (4.6 × 250 mm, 5 μm), a quaternary solvent manager and a 2998 PDA detector. Separation was conducted using water (A) and acetonitrile (B) with a gradient starting at 0-5 min, 20% B. MS/MS was conducted on an AB TripleTOF 5600plus System (AB SCIEX, Framingham, MA, USA). MS2 spectra were obtained in positive or negative ion mode (ESI) and exact masses were measured.
Subcellular localization assays
The CDS of CitF3'H without the stop codon was cloned into the 35S-eGFP vector and transferred into the Agrobacterium strain GV3101 (pSoup). Agrobacterium cells harboring CitF3'H-GFP and the mCherry-labeled ER marker (ER-rk CD3-959) [51] were co-infiltrated at a 1:1 (vol/vol) ratio into leaves of 4-week-old tobacco (Nicotiana tabacum) as described previously [52]. The corresponding GFP (empty vector) was used as a positive control. After three days, tobacco leaves containing the corresponding vectors were imaged to observe green fluorescent protein (GFP) and mCherry fluorescence with a Zeiss LSM710 NLO confocal laser scanning microscope. Primers for the construction of CitF3'H-GFP are described in Table S6 (see online supplementary material).
Ibrutinib sensitizes CLL cells to venetoclax by interrupting TLR9-induced CD40 upregulation and protein translation
Chronic lymphocytic leukemia (CLL) cells upregulate Bcl-2 proteins within the lymph node (LN) microenvironment. Signaling via the B-cell receptor, Toll-like receptors and CD40 collectively reduces sensitivity to the BCL-2 inhibitor venetoclax. Time-limited treatment with venetoclax plus the BTK inhibitor ibrutinib results in deep remissions, but how this combination affects LN-related signaling is not yet completely clear. Therefore, samples obtained from the HOVON141/VISION phase 2 clinical trial were used to analyze this. Two cycles of lead-in ibrutinib monotherapy resulted in decreased protein expression of Bcl-2 proteins in circulating CLL cells. Strikingly, at this timepoint CD40-induced venetoclax resistance was strongly attenuated, as was expression of CD40. Since CD40 signaling occurs within the CLL LN, we tested various LN-related signals that could affect CD40 signaling. While BCR stimulation had only a minor effect, TLR9 stimulation via CpG led to significantly increased CD40 expression and, importantly, reverted the effects of ibrutinib treatment on venetoclax sensitivity by inducing overall protein translation. Together, these findings identify a novel effect of ibrutinib: interruption of TLR9-induced CD40 upregulation and translation of pro-survival proteins. This mechanism may potentially further inhibit priming of CLL cells in the LN microenvironment for venetoclax resistance.
INTRODUCTION
Within the lymph node (LN) microenvironment, CLL cells are exposed to interactions with non-malignant immune cells, leading to inhibition of apoptosis and induction of proliferation [1]. In particular, CD40-CD40L interactions between CLL cells and follicular T-helper cells in the LN promote NF-κB and mTOR activation, resulting in CLL survival and drug resistance through upregulation of the anti-apoptotic Bcl-2 family members Bcl-XL, Mcl-1 and Bfl-1 [2-5]. Various studies have supported that CD40 signaling is involved in CLL proliferation and provides a model for tumor microenvironment (TME)-induced drug resistance [2,5-8]. A successful therapeutic strategy in CLL is to induce apoptosis directly by so-called BH3 mimetics. The Bcl-2-specific inhibitor venetoclax is highly cytotoxic for CLL cells and causes a rapid reduction in circulating CLL cells in the majority of patients, but LN responses are less complete [9]. Another successful therapy for CLL is ibrutinib, which targets the B-cell receptor (BCR) signaling pathway [10]. Ibrutinib treatment results in efflux of CLL cells from the proliferative lymphoid tissue into the peripheral blood (PB) [11,12], thereby preventing CLL cells from receiving microenvironmental survival signals and in that way halting disease progression [13,14]. However, neither of these therapies is curative as a single agent, and repeated or continuous treatment is required, increasing the risk of developing resistance to therapy and disease progression [15]. The combination of ibrutinib and venetoclax may have synergistic anti-tumor effects, since ibrutinib forces CLL cells from LNs into the PB, where they become fully dependent on Bcl-2 and thus vulnerable to venetoclax [16]. Phase 2 and 3 outcomes of time-limited ibrutinib plus venetoclax combination trials have demonstrated high response rates when used as first-line therapy and in relapsed/refractory CLL [17-22].
In a previous study, we investigated the effect of ibrutinib on the expression of Bcl-2 family members and drug resistance in a small set of randomly obtained clinical samples. We found that CLL cells that had recently left the LN had higher Bcl-XL and Mcl-1 expression compared with cells immigrating back to the LN. This distinction in expression collapsed upon ibrutinib treatment, yet the pretreatment profile reappeared upon relapse [23]. In order to investigate whether changes in expression levels of Bcl-2 family proteins might correlate with clinical response to venetoclax-containing regimens, we studied PB samples of CLL patients after two cycles of ibrutinib in the context of the HOVON141/VISION clinical trial [21]. In this study, a lead-in of two 28-day cycles of ibrutinib is followed by MRD-guided combination treatment of venetoclax and ibrutinib. We analyzed changes in Bcl-2 protein levels at the end of cycle 2 of ibrutinib monotherapy in relation to MRD status obtained at cycle 9 (after 7 cycles of ibrutinib + venetoclax combination) [18,21,24]. We observed that two cycles of ibrutinib treatment resulted in a collapse of CD40-mediated venetoclax resistance. This correlated well with previous suggestions that ibrutinib might, besides BCR signaling, also affect CD40 signaling, at least in vitro [25,26]. Whether this can also occur in vivo [27], either directly or indirectly, is as yet unknown, which led us to investigate mechanistically how ibrutinib affects CD40 signaling. Since Bcl-2 family proteins are induced in the LN as a result of microenvironment-induced signaling, we hypothesized that ibrutinib treatment may interrupt those in vivo signals due to relocalization of CLL cells to the PB. Consequently, we investigated which signals present in the LN microenvironment may explain how ibrutinib affects TME-mediated resistance to venetoclax.
Patient samples were collected in the context of the HOVON141/VISION trial, as described previously [28]. The treatment regimen consisted of ibrutinib monotherapy (420 mg, days 1-28; cycles 1 and 2), followed by combined venetoclax (ramp-up to 400 mg) and ibrutinib (420 mg; cycles 3-15). Participants with undetectable MRD (<10^-4; fewer than one chronic lymphocytic leukemia cell per 10,000 leukocytes) [24] were randomly assigned (1:2) to ibrutinib maintenance or treatment cessation. Patients who were MRD positive continued to receive ibrutinib monotherapy. A full protocol description has been published previously [29].
Cell culture and detection of apoptosis
Lymphocytes of CLL patients were co-cultured with NIH3T3 fibroblasts stably transfected with human CD40L or negative control as described before [30,31]. 3T40 fibroblasts were STR profiled and tested as mycoplasma-negative. After 24 h, CLL cells were detached and incubated with or without drugs for an additional 1, 2, 4 or 24 h. CLL cell viability was measured by flow cytometry using DiOC6 and TO-PRO-3 viability dyes. For BCR stimulation, either soluble anti-IgM or anti-IgM-coated dynabeads (Invitrogen, Waltham, Massachusetts, USA) were used. All used reagents are listed in Supplementary table 2.
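The flow-cytometric viability read-out can be summarized as the fraction of DiOC6-bright, TO-PRO-3-negative events. The sketch below also includes one common way to correct drug-induced death for spontaneous apoptosis; neither the gate thresholds nor the correction formula are specified in the text, so both are assumptions:

```python
import numpy as np

def percent_viable(dioc6, topro3, dioc6_gate, topro3_gate):
    """% viable CLL cells = DiOC6-bright AND TO-PRO-3-negative events.
    Gate thresholds (assumed here) would be set on control samples."""
    dioc6, topro3 = np.asarray(dioc6), np.asarray(topro3)
    viable = (dioc6 > dioc6_gate) & (topro3 < topro3_gate)
    return 100.0 * viable.mean()

def specific_apoptosis(viable_treated, viable_untreated):
    """One common correction for spontaneous death (an assumption, not
    stated in the text): drug-specific apoptosis in %."""
    return 100.0 * (viable_untreated - viable_treated) / viable_untreated
```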
Protein synthesis assay
Global protein synthesis was measured using the Click-iT Plus OPP Protein Synthesis Assay Kit (ThermoFisher), following the protocol of Yeomans et al. (2016) [33]. O-propargyl-puromycin (OPP; 20 μM) was added to 0.5 × 10^6 cells and incubated for 30 min. Cells were fixed and permeabilized using the Cytofix/Cytoperm Fixation Permeabilization Kit (BD Biosciences, San Jose, CA, USA). Alexa Fluor 488 was conjugated to OPP as described in the manufacturer's instructions, and cells were stained with anti-CD19 (BD Biosciences, #BD555412) and anti-CD5 (Biolegend, #300619). Cells were measured on a FACS Canto II flow cytometer (BD Biosciences) and analyzed using FlowJo v10.8. As a control, cells were treated with cycloheximide for 5 min before OPP addition, and the fluorescence of cycloheximide-treated cells was subtracted from all experimental values.
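The background correction described above (subtracting the cycloheximide control) amounts to a simple difference of fluorescence intensities, e.g.:

```python
def opp_signal(mfi_sample: float, mfi_cycloheximide: float) -> float:
    """Net OPP-AF488 signal: sample MFI minus the cycloheximide-treated
    background, clipped at zero (translation cannot be negative)."""
    return max(mfi_sample - mfi_cycloheximide, 0.0)

# Made-up MFI values for a stimulated sample and its cycloheximide control:
print(opp_signal(5400.0, 800.0))  # -> 4600.0
```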
Statistics
To ensure adequate power of statistical testing, sample sizes were chosen based on the type of material used, including at least two primary LN samples and three primary CLL cells/clinical trial samples. Additional patient samples were included to ensure that variation within each group of data was similar. For every figure, we applied the appropriate statistical test. Normality tests were applied to ensure normal data distribution. The paired sample t-test was used to analyze paired observations. Two-way ANOVA test was used to analyze differences between groups. *p < 0.05; **p < 0.01; ***p < 0.001; ****p < 0.0001.
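As an illustration of the paired testing used throughout, a paired-sample t-test (with a normality check on the paired differences) might look like this in SciPy; the MFI values are made up, not trial data:

```python
import numpy as np
from scipy import stats

# Illustrative paired MFI values: the same patients at baseline and after
# two cycles of ibrutinib
baseline = np.array([2.1, 1.8, 2.6, 1.9, 2.3, 2.0])
post_ibr = np.array([1.2, 1.1, 1.6, 1.0, 1.4, 1.3])

w, p_norm = stats.shapiro(baseline - post_ibr)  # normality of differences
t, p = stats.ttest_rel(baseline, post_ibr)      # paired-sample t-test
print(f"t = {t:.2f}, p = {p:.4g}")
```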
RESULTS
Ibrutinib treatment causes collapse of Bcl-2 family member expression, which does not correlate with mid-treatment MRD response to the combination
Previously we reported an upregulation of pro-survival Bcl-2 family members in the LN of patients with CLL [6,34], which correlated with increased expression of Bcl-XL and Mcl-1 in the LN 'emigrant' versus 'immigrant' fraction of circulating CLL cells, defined by opposing expression levels of CD5 and CXCR4. These differences collapsed upon ibrutinib treatment [23]. Here, we applied an algorithm to reproducibly select LN immigrant and emigrant populations by automated gating of 9 quadrants (Supplementary Fig. 1A). Application of this method in a large cohort of samples from the HOVON141 clinical trial validated differential Bcl-2 member expression in these two circulating leukemia fractions. Furthermore, we now included Bfl-1 expression in the analysis. For Bcl-XL, Mcl-1 and Bfl-1, we found increased levels in CLL cells emigrating from the LN. After two months of ibrutinib treatment, a significant reduction of the pro-survival proteins Bcl-2, Bcl-XL, Mcl-1 and Bfl-1 was observed (p < 0.0001 for Bcl-2, Bcl-XL and Mcl-1; p < 0.001 for Bfl-1) (Fig. 1A-D).
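One way to implement such an automated 9-quadrant gate is to split the CD5 and CXCR4 intensities into per-sample tertiles, yielding a 3 × 3 grid. The sketch below is an assumption about how such a gate could be coded, not the trial's actual algorithm; the emigrant/immigrant corner assignments follow the CD5/CXCR4 convention described in the text:

```python
import pandas as pd

def nine_quadrant_gate(events: pd.DataFrame) -> pd.Series:
    """Assign each CLL event to one of 9 quadrants (3 x 3 grid) by splitting
    CD5 and CXCR4 intensities into per-sample tertiles.

    Illustrative convention: the CD5-bright/CXCR4-dim corner (quadrant 6)
    approximates recent LN emigrants; the CD5-dim/CXCR4-bright corner
    (quadrant 2) approximates cells immigrating back to the LN.
    """
    cd5 = pd.qcut(events["CD5"], 3, labels=[0, 1, 2]).astype(int)
    cxcr4 = pd.qcut(events["CXCR4"], 3, labels=[0, 1, 2]).astype(int)
    return cd5 * 3 + cxcr4  # quadrant index 0..8
```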
To investigate whether the relative changes in protein expression correlated with clinical responses, we divided patients based on positive or undetectable MRD at the end of treatment cycle 9 (after 7 cycles of combination) [24] or based on IGHV mutational status. After analysis of a subset of patients (N = 17; N = 9 for Bfl-1), the changes in expression levels of Bcl-2 family members did not correlate with mid-treatment MRD status or IGHV mutational status. Of note, the baseline levels of these proteins also did not correlate with subsequent MRD or IGHV status (Supplementary Fig. 2), implying that the expression levels of Bcl-2 family members do not serve as early biomarkers of later clinical responses.
Ibrutinib treatment attenuates CD40-mediated venetoclax resistance
We next studied possible consequences of the ibrutinib-mediated shifts in the expression of Bcl-2 members on venetoclax sensitivity. Despite the reduction in Bcl-2 family protein expression upon ibrutinib treatment, in vitro treatment with venetoclax for 24 h showed no differences in venetoclax sensitivity between ex vivo PB samples collected at baseline and after two months of ibrutinib treatment (Fig. 2A). We applied co-cultures of CLL cells with CD40L-expressing 3T3 fibroblasts (3T40L) to mimic TME-induced venetoclax resistance. As shown before, samples from untreated patients demonstrated almost complete venetoclax resistance upon CD40 stimulation [6]. Strikingly, two cycles of ibrutinib treatment strongly attenuated in vitro CD40-induced resistance to venetoclax (to a 100-fold increase in IC50; p < 0.0001; Fig. 2A). This was paralleled by a significantly higher induction of Bcl-XL, Mcl-1 and Bfl-1 expression upon in vitro CD40 activation in baseline samples compared with ibrutinib-treated samples (p < 0.0001) (Fig. 2C-E). Together, these data demonstrate that CD40-mediated venetoclax resistance was reduced after ibrutinib treatment.
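IC50 shifts like the 100-fold change quoted here are typically obtained by fitting a four-parameter logistic curve to viability-versus-concentration data. A generic SciPy sketch with illustrative numbers (not trial data):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, bottom, ic50, slope):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1 + (conc / ic50) ** slope)

conc = np.array([0.001, 0.01, 0.1, 1, 10])   # venetoclax, µM (illustrative)
viab = np.array([98, 85, 45, 12, 5])         # % viable CLL cells at 24 h
params, _ = curve_fit(hill, conc, viab, p0=[100, 0, 0.1, 1])
print(f"IC50 ~ {params[2]:.3f} µM")
```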
Downstream signaling via CD40 is unaffected by two cycles of ibrutinib treatment
We next investigated how ibrutinib affects signaling downstream of CD40. CD40 triggering activates downstream Akt/mTOR, NF-κB and Erk signaling in CLL cells [13,35,36]. We first tested the effects of ibrutinib treatment on NF-κB signaling, as this pathway is responsible for the induction of Bcl-XL and Bfl-1, which contribute strongly to venetoclax resistance [2,23]. Using read-outs for canonical (p65) and non-canonical (p52) NF-κB activity, we could not detect a significant reduction in NF-κB activity after two cycles of ibrutinib (Fig. 3A, Supplementary Fig. 3). The same was true for pS6, a marker of Akt-mTOR activity, both by Western blot and by flow cytometry (Fig. 3A, B). Pro-apoptotic Bim was increased after ibrutinib treatment and reduced after CD40/Erk activation, both as previously described [6,37] (Fig. 3A, Supplementary Fig. 3). Moreover, CD40-induced NF-κB activity, as measured by p65 and p52 DNA-binding activity, seemed unaffected after two months of ibrutinib treatment (Fig. 3C, D). These data strongly suggest that although CD40 protein expression, upregulation of Bcl-2 proteins, and induction of venetoclax resistance were reduced after ibrutinib treatment, CD40 downstream signaling remained intact.
In vitro ibrutinib treatment shows no direct effects on CD40-induced venetoclax resistance or Bcl-2 family expression
To establish whether ibrutinib directly affects CD40 signaling, we performed in vitro experiments in which CLL cells were pretreated with 0.1 µM ibrutinib for 24 h, followed by co-culture with 3T3 or 3T40L fibroblasts for another 24 h. CD40-activated CLL cells that were treated with ibrutinib in vitro did not show a significant reduction in venetoclax resistance compared to the 3T40L control (Fig. 4A). Furthermore, expression levels of the activation marker CD95 were unaffected upon 0.1 µM ibrutinib treatment (Fig. 4B). Finally, the expression levels of Bcl-2, Bcl-XL and Mcl-1 after CD40 activation also showed no significant differences upon in vitro ibrutinib treatment (Fig. 4C-E). A higher concentration of 1 µM ibrutinib was also tested and showed similar results (Supplementary Fig. 4). These data indicate no direct effects of ibrutinib on CD40-mediated venetoclax resistance, suggesting that the effect of in vivo ibrutinib treatment on CD40 signaling in CLL cells occurs indirectly.
CD40 expression is increased in the lymph node emigrants, mediated by TLR9 signaling
Since in vivo ibrutinib treatment prevented CD40-mediated upregulation of Bcl-2 family proteins whereas downstream CD40 signaling was unaffected, we next studied expression levels of CD40. Analysis of several paired LN and PB samples indicated that CD40 expression on CLL cells is upregulated in the LN (Fig. 5A). Therefore, we next investigated which in vivo signals could be involved in the upregulation of CD40 in the LN niche. We tested various in vitro stimuli to mimic pathway activation as observed in LN-derived CLL cells in vivo. Of more than 10 candidates tested, only TLR1/2, TLR3, TLR7 and especially TLR9 stimulation (via PAM3CSK4, Poly(I:C), R837 and CpG, respectively) induced CD40 expression significantly in both IGHV-mutated and unmutated CLL (p < 0.0001). In comparison, BCR stimulation via α-IgM-coated dynabeads plus IL-4, a known strong surrogate for LN signals [38], showed only a minor but still significant increase in CD40 expression in CLL cells (p ≤ 0.05) (Fig. 5B). Analysis of CD40 expression upon stimulation with CpG revealed increased CD40 expression specifically in the LN emigrant population (p < 0.001) (Fig. 5C). CpG stimulation did not skew CLL LN emigrant/immigrant phenotypes themselves (Supplementary Fig. 5).
Fig. 3 Downstream signaling via CD40 is unaffected after ibrutinib treatment. A Western blot of peripheral blood collected from two patients. CLL cells obtained at baseline (BL) and after two months of ibrutinib treatment (IBR) were unstimulated or co-cultured on CD40L-expressing fibroblasts (3T40) for 24 h. Protein lysates were probed for NF-κB proteins (p100, p52 and p-p65), pS6, pro-apoptotic Bim and actin as loading control. More patients are included in Supplementary Fig. 3. B Immunological detection of phospho-S6 expression before and after in vitro CD40 stimulation in patient samples obtained at baseline and after ibrutinib treatment (N = 7). C, D DNA-binding ELISA of NF-κB family members showed that CD40-mediated NF-κB activity of p52 and p65 was not affected after two months of ibrutinib treatment (N = 8). Two-way ANOVA test was used for statistical analyses.
Therefore, these data strongly suggest a TLR9-mediated effect on CD40 expression in CLL cells that recently left the LN.
Ibrutinib affects CD40 expression in a non-transcriptional manner
We next studied the effects of ibrutinib treatment on the expression levels of CD40 and its activation marker CD95 in LN immigrants and emigrants according to the automated gating strategy as described above (Supplementary Fig. 1A). Consistent with what we observed in LN-derived CLL cells, at baseline LN emigrants showed significantly increased CD40 and CD95 protein expression levels compared to LN immigrants (p < 0.0001) (Fig. 6A, B). After ibrutinib treatment, CD40 and CD95 expression levels were significantly reduced in both LN immigrant and emigrant populations, indicating that in vivo ibrutinib treatment leads to impaired CD40 expression and activation (N = 7, p < 0.001). Although CD40 protein expression was reduced after ibrutinib treatment, transcription of CD40 was unaffected compared to baseline (Fig. 6C). Furthermore, transcriptional induction of the Bcl-2 family members Bcl-2 and Bcl-XL was also maintained after ibrutinib treatment (Fig. 6D), suggesting that in vivo ibrutinib treatment affects CD40 signaling in a post-transcriptional manner.
TLR9 stimulation promotes protein translation, thereby reverting the effects of ibrutinib on CD40-mediated venetoclax resistance
Effects of ibrutinib treatment on translation were studied by measuring global protein synthesis. Either TLR9 or CD40 stimulation could induce protein translation to a certain extent, yet the combination of TLR9 and CD40 stimulation induced protein translation even further (Fig. 7A, B). Protein translation was reduced upon ibrutinib treatment, though the combination of TLR9 and CD40 stimulation was able to restore protein translation in ibrutinib-treated samples back to the level of the baseline 3T40 control. Finally, we probed the role of TLR9 stimulation in CD40-induced venetoclax resistance, before and after in vivo ibrutinib treatment. In CD40-stimulated CLL cells of ibrutinib-treated patients, TLR9 stimulation via CpG led to increased venetoclax resistance (p < 0.05), almost to the same extent as observed at baseline (N = 5) (Fig. 7C). These data are consistent with our results on protein translation and indicate that TLR9 stimulation is able to revert ibrutinib-mediated effects on CD40-mediated venetoclax resistance. In summary, these findings demonstrate that TLR9 signaling may play a role in CLL drug resistance in the LN microenvironment by promoting protein translation, thereby increasing susceptibility of CLL cells to CD40-mediated pro-survival signaling.
Fig. 7 A, B Protein synthesis assay using O-propargyl-puromycin (OPP), which is incorporated into newly translated proteins and fluorescently labeled (N = 3). As a control, fluorescence of cycloheximide-treated cells was subtracted from all experimental values. Paired sample t-test was used for statistical analyses. C Baseline and ibrutinib-treated CLL cells were stimulated with CpG (1 µg/ml) and cultured on 3T40L for 24 h. After detachment, cells were treated with venetoclax for an additional 24 h. Viability was measured by flow cytometry using DiOC6/TO-PRO-3 staining (N = 5). Paired sample t-test was used for statistical analyses.
DISCUSSION
Most evidence for the drivers of CLL proliferation and disease progression points to the CLL LN microenvironment, where CLL cells receive various signals activating downstream signaling pathways [1,13]. BCR signaling is a major driver in vivo, and there is also compelling evidence that CD40 and additional signaling is involved in CLL proliferation and drug resistance [2,3,6,7,13,39-41]. Therefore, it is relevant to understand which signals present in the LN microenvironment regulate CD40 expression, and what effect ibrutinib treatment has on CD40 signaling. Our combined data indicate an unrecognized interplay between TLR and CD40 signals in determining potential LN-associated resistance to venetoclax, which is interrupted already during early stages of ibrutinib treatment. This mechanistic insight extends previous transcriptional data on reduced expression of TLR- and CD40-driven gene sets under ibrutinib [27]. We confirmed induction of pro-apoptotic Bim [37] and a collapse of all anti-apoptotic Bcl-2 family members after ibrutinib treatment [23], yet venetoclax sensitivity was not increased ex vivo. Shorter incubations with venetoclax also did not show differences in venetoclax sensitivity after ibrutinib treatment compared to baseline (Supplementary Fig. 6). A possible explanation might be that the difference in the expression of Bcl-XL, Mcl-1 and Bfl-1 between unstimulated baseline and post-ibrutinib emigrant cells is much smaller than the difference in expression of these anti-apoptotic proteins in the same cells following CD40 activation. Moreover, PB samples obtained after 1 week of combination treatment with ibrutinib and venetoclax showed an upregulation of Bcl-2 and Mcl-1, as seen previously for single venetoclax treatment [23], while Bcl-XL expression remained reduced (Supplementary Fig. 7). With respect to potential utility as a biomarker for later clinical responses, we tested whether the expression level changes differed between prognostic groups, or correlated with later MRD responses. This appeared not to be the case, though it cannot be excluded that by studying larger patient numbers, or possibly relapse samples, such predictive aspects might be found. Furthermore, the lack of correlation between expression of these proteins in emigrant cells and clinical responses might be due to higher levels of these proteins in actual LN cells compared to the emigrant CLL subpopulation. Previously, we reported that soluble CD40L was not sufficient to induce venetoclax resistance in vitro [42]. We therefore used a co-culture system of CLL cells with 3T40 fibroblasts as a model for TME-induced venetoclax resistance. Though distinct from the gradually emerging venetoclax resistance seen in relapsing patients in the clinic, the underlying mechanisms may have a common basis. A key finding was the reduction of in vitro CD40-induced venetoclax resistance upon two cycles of ibrutinib treatment compared to baseline. Downstream CD40 signaling and transcription, and also NF-κB activity, were unaffected after two cycles of ibrutinib treatment. A previous study reported decreased NF-κB transcription factor activity, measured by p50, on day 28 of ibrutinib treatment compared to pretreatment in CLL cells [43]. As this was in unstimulated CLL cells, it actually fits well with our data in CD40-stimulated cells, together emphasizing that cells in PB are devoid of NF-κB stimulatory signals, yet retain the capacity to respond to them.
Instead of a direct attenuation of CD40 downstream signaling capacity, we found that ibrutinib treatment affects overall protein translation, including expression levels of the CD40 receptor. This is in agreement with previous studies showing that protein expression of the oncogene MYC is increased in the LN and reduced after ibrutinib treatment [33,44]. In addition, a previous study demonstrated that in the LN microenvironment, expression of miR29, which targets TRAF4, a positive regulator of CD40 signaling, is suppressed through BCR-induced activation of MYC [45]. It was suggested that BCR/MYC effects on miR29 correlated with IGHV mutation status [45]. Since our data did not show such a correlation (Supplementary Fig. 2), this would indicate that the BCR-MYC-miR29-TRAF4 axis is distinct from the link between TLR and CD40 pathways described here.
Our data imply that under ibrutinib treatment, if cells cannot (re-)enter LN sites, a factor is lacking that maintains or induces CD40 expression. We showed that TLR9 stimulation via CpG led to increased CD40 expression in CLL cells, specifically in the LN emigrant population. These findings fit well with the increased TLR pathway activity in LN-derived CLL cells identified by gene array studies [41,46], as well as with in situ proximity ligation assays reporting interactions of pIκBα with TLR9 and MyD88 in LN-derived CLL cells [46]. This is consistent with our finding that TLR9 expression was increased in CLL LN emigrants (Supplementary Fig. 8). Another previous study demonstrated the formation of a multiprotein supercomplex composed of MYD88, TLR9, and the BCR (My-T-BCR) as a mode of oncogenic BCR signaling in various lymphomas, yet this was not evident in CLL LN biopsies [47]. This suggests that CLL may depend on an alternative type of BCR-enhancing signaling, and does not exclude a complex involving pIκBα and TLR9 [46]. Considering that there is no direct proof that CLL cells receive signals through the TLR9 pathway in vivo, a recent study proposed a role for mitochondrial DNA (mtDNA), which harbors hypomethylated CpG motifs similar to bacterial DNA, in the in vivo activation of TLR9 signaling [48]. A potential mechanism to trigger TLR9 in vivo might be via mtDNA released into the cytosol through BAX/BAK-mediated mitochondrial outer membrane pores, followed by extrusion and permeabilisation of the inner mitochondrial membrane [49], providing ligands for triggering cGAS/STING or TLRs. Furthermore, mtDNA might also be transmitted via exosomes or directly via mitochondrial transfer between cells in the TME [50]. Interestingly, a contrasting notion was presented very recently by Martines et al., who demonstrated that in mouse models of CLL/Richter syndrome, TLR/IRAK4-mediated signaling did not play an intrinsic role in the propagation of the malignant cells [51]. Formally, the available data are not mutually exclusive, as TLR involvement in human CLL LN in situ might be more prominent or required than in the aggressive TCL1 and Richter mouse models. In vitro CpG stimulation induces phosphorylation of CD79A, LYN and SYK, implying a potential overlap between TLR and BCR signaling [13]. However, there is a strong correlation between CpG-mediated CLL proliferation and IGHV mutation status, which may suggest that TLR activity is regulated by BCR signaling [52,53]. Other studies demonstrated an important role for TLR signaling in CLL pathogenesis and in maintaining CLL cell viability during ibrutinib therapy, suggesting superior anti-tumor activity of combining ibrutinib with drugs targeting TLR signaling [46,48]. Targeting TLR signaling might affect the CLL cells that remain in the LNs during ibrutinib treatment and in non-responsive patients, thereby inhibiting protein translation and consequently preventing a resistant CLL phenotype induced by microenvironmental signals.
In conclusion, ibrutinib treatment broadly affects Bcl-2 family protein expression and interrupts a triad of signaling pathways involving BCR, CD40 and TLR9. Our combined data indicate a novel aspect of ibrutinib efficacy, specifically its potential capacity to interrupt TLR9-induced CD40 upregulation and protein translation, which normally primes CLL cells in the LN environment for venetoclax resistance. With respect to implications for potential combination scenarios, our findings suggest that simultaneous administration of ibrutinib and venetoclax might be more beneficial than sequential single treatment.
DATA AVAILABILITY
All data are available from the corresponding author on reasonable request.
Spatial Effects of Energy Consumption and Green GDP in Regional Agreements
One of the main factors in environmental degradation and climate change is the consumption of fossil fuels. For this reason, cooperation between countries to overcome environmental challenges is needed more than ever. In this regard, regional economic treaties are a good platform for advancing common policies in the development of renewable energy, because this type of energy has fewer external effects on the environment. Accordingly, the purpose of this paper is to investigate the spatial effects of energy consumption and green production in Shanghai Cooperation Organization member states. For this purpose, a spatial panel data econometric approach has been used. Spatial effects refer to the effects of neighboring countries: for CO2 emissions, they capture how these emissions and other determinants in neighboring countries affect the local country. The results show that although both fossil fuels and renewable energy have a negative effect on green GDP, the impact of fossil fuels is almost four times greater. On the other hand, proximity and membership in the Shanghai Cooperation Organization have had a positive effect on green GDP in member countries, as countries have tried to coordinate their development policies in the field of renewable energy. Therefore, the development and strengthening of regional cooperation can play an effective role in controlling pollution caused by fossil fuel consumption and ultimately improve green production.
Introduction
Continuous use of fossil fuels leads to increased carbon levels in the atmosphere, global warming, and climate change [1]. Because of global warming, the use of renewable energy as an alternative to current energy sources has received much attention in many developing countries [2]. Currently, the renewable energy industry attracts growing attention from governments, companies and scientists worldwide because of concerns about health problems, economic damage, climate change, sustainability issues, and fossil energy consumption. In addition, the renewable energy industry, as an emerging green industry, offers a new mode of economic growth that can partially resolve or curb the negative effects of fossil energy consumption and create sustainable development pathways [3]. The importance of renewable resources lies in their ability to provide security and energy independence and to create additional employment [4]. The use of renewable energy technologies is widespread in developing countries due to accessibility, low cost and reliability [5]. Renewable energies can also protect regional economies against rising future prices of conventional fuels by diversifying the energy basket. Most developing countries have a natural advantage in using these resources due to the abundance of renewable energy sources. This makes renewables economically competitive as a solution to the energy challenges of developing countries, especially as traditional energy prices rise [6]. In addition, the use of different forms of renewable energy, such as solar or wind power, in one country does not prevent other countries from using those types of energy sources [7]. Most pathways to a low-carbon economy would require a rapid deployment of renewable energy and the doubling of energy efficiency, given that the energy sector accounts for two thirds of global emissions [8]. The protection of the environment and concerns over climate change promote the use of renewable energy. Renewable energy technologies can offer cost-competitive options for delivering modern energy services in remote areas, while innovation and economies of scale continue to drive cost savings [9]. On the other hand, renewable energy sources can be managed regionally and do not require advanced energy transmission systems: the different regions of a country can be connected to each other through transmission lines or other means of energy transfer. Based on this, with correct and logical planning, countries can choose the type of energy used in each region according to its characteristics, and solve the problem of energy supply safely, with targeted investment and high efficiency. This type of energy, in addition to its favorable effects, such as reducing the use of fossil fuels and consequently reducing greenhouse gas emissions and environmental pollutants, also has adverse effects, such as changes in regional ecosystems and population displacement caused by hydropower plants, as well as noise, surface and groundwater, thermal, chemical, and biological pollution [10]. Renewable energy was estimated to be the fastest growing source of energy during the years 2003-2012 [11].
It is predicted that the use of the world's renewable energy to generate electricity will increase at an average rate of 6.2% annually between 2007 and 2035, going from 18% of total energy to 23% over that period [12].
According to the Intergovernmental Panel on Climate Change (IPCC), in 2014 carbon dioxide accounted for 76% of total greenhouse gases. Therefore, reducing carbon dioxide emissions plays an important role in sustainable development and the protection of the environment (Omari, 2013). On the other hand, spatial effects are an important issue when investigating the determinants of CO2 emissions or renewable energy development. Spatial effects refer to the effects of neighboring countries: for CO2 emissions, they capture how emissions and other determinants in neighboring countries affect the local country. CO2 emissions can be affected by spatial effects in two ways: first, CO2 emissions produced by other countries flow into the local country; secondly, the energy transition process or the state of economic development of a country affects its neighbors. For example, a successful energy transition in neighboring countries can facilitate the local energy transition, because the local country can adopt similar energy policies or import related equipment and techniques [13]. As a result, spatial effects must be considered in order to achieve more accurate results about the effects of energy transition, CO2 emissions and renewable energy growth. The existence of spatial and neighborhood effects is even more important in countries that are members of unions or regional organizations, since in this situation countries usually adjust their economic policies in harmony with each other. Regional convergence, while helping to improve economic growth, accelerates the process of countries joining the global economy. One successful example of convergence is the Shanghai Cooperation Organization, which was established in 2001 with the goal of security, economic, and cultural integration. Iran joined the Shanghai Cooperation Organization in 2005 as an observer member. The area of the member countries (main and observer) of the Shanghai Cooperation Organization is equivalent to 37 million square kilometers. These members are geographically close to each other and are, in fact, each other's natural business partners. The organization includes 43.62% of the world's population, with 2.9 billion people in the member countries. Table 1 shows the statistical specifications of these countries in 2018.
In this organization, the two major Asian economies, China and India, do not have enough energy and import a large part of it for their growing economies. On the other hand, Iran and Russia are also members of this organization, and they are two of the largest major energy producers, with massive reserves of fossil energy. As a result, the member and observer countries of the Shanghai Cooperation Organization are the largest producers and consumers of energy in the world. Therefore, joint projects in the field of energy will be a priority for this organization in achieving its goal of economic integration [14]. The amount of fossil and renewable energy consumption of the Shanghai Organization member countries in 2019 is shown in Figures 1 and 2. These numbers show the effects of proximity on energy consumption between these countries and that they have relied more on fossil fuels than on renewable energy. In 2013, China became the leading country in the world in terms of renewable energy production, the major share of which is taken from hydropower and wind energy [15]. Therefore, the outlook for the SCO is thought-provoking. It seems that this organization will secure a worthy and effective position for itself through the cooperation of its members in the formation of the international system. The organization has the necessary capabilities to become a multilateral organization [16], and the issue of reducing energy pollution is an axis that must be addressed with regard to the future of energy. In this context, the replacement of fossil fuels is an inevitable prospect.
Energy is the driving force behind most economic activities, and this input therefore plays an effective role in the growth and economic development of countries [17]. But there is no global consensus on the role of renewable energy in economic growth [18,19]. For example, the results of some works show that there is no relationship between renewable energy consumption and economic growth [20][21][22], while other studies indicate a positive relationship between these two variables [23,24] or a negative one [25,26]. On the other hand, many countries hope to achieve rapid development while minimizing the cost of CO2 emissions and seek to balance economic growth with CO2 emissions. These countries have begun the process of energy transition to achieve their goals. The energy transition acts as a bridge linking lower CO2 emissions and sustainable development, because it can not only reduce the intensity of CO2 emissions but also modify the energy structure [27]. Therefore, this study aims to investigate the spatial effects of energy consumption on green GDP in the SCO member countries. It is hoped that the results will be useful for researchers, stakeholders and policy makers in selecting appropriate approaches to reduce greenhouse gas emissions and increase the consumption and production of renewable energy. For this purpose, the relationship between CO2 emissions, economic growth and energy transition has been assessed by considering the spatial effects of energy consumption on green GDP in SCO member countries.
According to the objectives of the research, the literature review is classified into three categories. The first evaluates the relationship between energy consumption and economic growth. For example, energy consumption and economic growth have a long-term relationship with each other, and there is a two-way causal relationship between them [28]. Adjaye's [29] study confirmed this relationship between energy consumption and income in India, Indonesia, Thailand and the Philippines. But the results of Lee's [30] study show that there is a one-way causal relationship between energy consumption and GDP in the short and long term. The results of Al-Iriani [31] also show that energy consumption and GDP are not cointegrated and that there is a one-way causal relationship between GDP and energy consumption. Cheng and Lai [32] found a one-way causal relationship from GDP to energy consumption and a one-way causal relationship from energy to employment for Taiwan. In addition, the results of Stern [33] show that energy is a limiting factor for economic growth. This confirms Yu and Lv's [34] findings in their study of the United States, where energy supply shocks reduce GDP.
The second category comprises studies that have reviewed the relationship between renewable and non-renewable energies and green GDP. For example, Marinko et al. [35] report that increasing energy consumption will increase GDP and green GDP. Hongxian's [36] research also shows that while coal and oil consumption hinder green GDP, natural gas consumption contributes the most to green GDP. The results of Al-mulali [37] show that the consumption of renewable and non-renewable energy has a long-term positive relationship with all economic sectors in all regions, and that non-renewable energy consumption has a more remarkable effect on economic sectors than renewable energy consumption.
The third category consists of studies that have calculated green GDP. For example, Hamilton and Lutz [38] stated that green GDP is better than GDP for measuring sustainability and suggested policies to achieve sustainable development. Figueroa and Calfucura [39] modified the ordinary GDP criterion by adding the depreciation of human-made capital plus the loss of total natural capital, whereas Xu et al. [40], by introducing a new method of accounting for green GDP, incorporated the value of ecosystem services into GDP. Talberth and Bohara [41] tested the effects of economic openness on green GDP; their results indicate a negative nonlinear relationship between economic openness and green GDP growth, alongside a positive nonlinear relationship with the openness of a country's economy in conventional terms.
These studies have shown that there is a one-way causal relationship between energy consumption and green GDP, that green GDP expresses the concepts of environmental sustainability better than GDP, and that renewable energies have less harmful effects on the environment than non-renewable energies. Therefore, the main purpose of this article is to investigate the impact of energy consumption on green GDP in the SCO member states. The novel approach of this paper is to investigate the effect of renewable and fossil energy consumption on green GDP using the spatial econometric method. The research hypothesis is that energy consumption has a negative effect on the growth of green GDP, and that the proximity of countries has a positive effect on green GDP in SCO member states.
Materials and Methods
In this study, the panel data method has been used, because this type of data provides a suitable environment for the development of estimation methods and theoretical results: by combining cross-sectional and time-series data, researchers can study issues that cannot be addressed in a purely cross-sectional or time-series setting [42]. The general form of the panel data regression is as follows:

Y_it = α + β X_it + µ_i + v_it . (1)

In Equation (1), v_it is the disturbance component; it is normally distributed and, for all i and t, independent of X_it. It must first be determined whether there is heterogeneity or individual differences across units. If there is heterogeneity, the panel data method is used; otherwise, the ordinary least squares (OLS) method is used to estimate the model. The µ_i terms express individual or heterogeneous effects across countries, which appear in the form of random effects or fixed effects. The Hausman test is used to choose between fixed and random effects models. Also, the spatial econometric method has been used to measure the effects of proximity. In this approach, spatial autocorrelation or spatial heterogeneity in regression models with cross-sectional or panel data is investigated. Conventional econometrics does not consider "spatial dependence" and "spatial heterogeneity", which violate the Gauss-Markov assumptions, and therefore the spatial econometric approach should be used. The general form of the spatial panel model is as follows:

Y_it = c + ρ W Y_it + β X_it + ε_it . (2)

In Equation (2), Y_it represents the green GDP of country i at time t, c is a constant, X_it is the matrix of explanatory variables, W is the n × n spatial weight matrix (usually the first-order proximity matrix), the parameter ρ is the coefficient of the spatially lagged dependent variable WY, and the parameter β represents the effect of the explanatory variables on the dependent variable. In this paper, the proximity and correlation method is used to determine the proximity matrix. Considering spatial dependence, countries that are neighbors should show a higher degree of dependence than more distant ones. There are various methods for forming an adjacency matrix, including Linear Contiguity, Rook Contiguity, Bishop Contiguity, Double Linear Contiguity, Double Rook Contiguity, and Queen Contiguity [43]. The W matrix can be represented as follows:

w_ij = 1 if countries i and j are adjacent, and w_ij = 0 otherwise. (3)

The W matrix is symmetric, the elements on the main diagonal are zero, and the other elements take the value one or zero depending on whether the countries are adjacent or not. In the following, by standardizing the adjacency matrix (dividing each row by its row sum) and then multiplying it by the vector of the dependent variable, a new variable is obtained that shows the average of the observations from adjacent areas and is called the spatial lag variable, as sketched below. In this study, the adjacency matrix has dimensions of 10 × 10.
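As a concrete illustration of this construction, a minimal Python sketch with a purely hypothetical adjacency pattern standing in for the actual borders of the ten countries:

```python
# Minimal sketch: build a symmetric contiguity matrix W, row-standardize it,
# and compute the spatial lag Wy. The adjacency pairs below are illustrative
# placeholders, not the actual SCO border structure.
import numpy as np

n = 10
W = np.zeros((n, n))
pairs = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5),
         (5, 6), (6, 7), (7, 8), (8, 9)]
for i, j in pairs:
    W[i, j] = W[j, i] = 1.0          # w_ij = 1 for adjacent countries, 0 otherwise

W_std = W / W.sum(axis=1, keepdims=True)   # row standardization

rng = np.random.default_rng(0)
y = rng.normal(size=n)               # e.g., ln(green GDP) of the 10 countries
Wy = W_std @ y                       # spatial lag: average of neighbors' values
print(np.round(Wy, 3))
```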
Three main models are used in spatial econometrics: the Spatial Lag Model (or Spatial Autoregressive Model), the Spatial Error Model, and the Spatial Durbin Model. In the Spatial Lag Model, spatial effects propagate through the dependent variable, while in the Spatial Error Model it is assumed that the error term is the main path of spatial propagation. In the Spatial Durbin Model, the effect of spatial diffusion is considered both through the dependent variable and through the independent variables [44].
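As an illustration of how the Spatial Autoregressive (spatial lag) Model used below can be estimated, a hedged sketch of maximum likelihood via the concentrated log-likelihood, run on simulated data with a simple ring contiguity matrix. A dedicated package (e.g., PySAL's spreg) would normally be used; this sketch only shows the underlying logic:

```python
# Hedged sketch: ML estimation of the spatial lag model y = rho*W y + X beta + eps
# via the concentrated log-likelihood, on simulated data (illustrative only).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(3)
n = 50
# Ring contiguity: each unit neighbors the previous and next unit
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5   # already row-standardized

X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true, rho_true = np.array([1.0, 0.5]), 0.4
y = np.linalg.solve(np.eye(n) - rho_true * W,
                    X @ beta_true + rng.normal(scale=0.3, size=n))

def neg_conc_loglik(rho):
    Ay = y - rho * (W @ y)                         # (I - rho W) y
    beta = np.linalg.lstsq(X, Ay, rcond=None)[0]   # beta-hat given rho
    e = Ay - X @ beta
    _, logdet = np.linalg.slogdet(np.eye(n) - rho * W)
    return 0.5 * n * np.log(e @ e / n) - logdet    # negative concentrated log-lik

res = minimize_scalar(neg_conc_loglik, bounds=(-0.99, 0.99), method="bounded")
print("estimated rho:", round(res.x, 3))           # should be near 0.4
```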
Finally, empirical models are used to test the spatial effects on green GDP in the Shanghai member states, as follows:

LnGGDP_it = α_0 + α_1 LnEn_it + α_4 LnL_it + α_5 LnK_it + α_6 Wax_it + ε_it (4)

LnGGDP_it = α_0 + α_1 LnFfuel_it + α_3 LnRenew_it + α_4 LnL_it + α_5 LnK_it + α_6 Wax_it + ε_it (5)

In Equations (4) and (5), i represents the country and t represents time. GGDP denotes green GDP, En energy consumption, L labor, K capital, Ffuel fossil fuel consumption, and Renew renewable fuel and biofuel consumption; the variable Wax is the spatial lag for each member state of the Shanghai Cooperation Organization. In Equation (4), we consider energy consumption in aggregate and look for the effect of total energy consumption on green GDP. In Equation (5), we split energy into renewable and non-renewable components and examine how the consumption of each affects green GDP. Equations (4) and (5) are estimated logarithmically in two modes, conventional panel and spatial panel.
The statistical sample of this study includes 10 countries, the eight main members and two observers of the Shanghai Cooperation Organization: Russia, China, Kazakhstan, Kyrgyzstan, Tajikistan, Uzbekistan, India, Pakistan, Iran and Mongolia, observed over the period 1990-2019. The information required for this research has been collected from the World Bank [45].
Results
Before estimating the model, the statistical characteristics of the data for the Shanghai Organization member countries in the period 1990-2019, including the mean, minimum, maximum, variance and standard deviation, are shown in Table 2. To obtain the spatial lag variable, we must first obtain the proximity matrix and then convert it to the row-standardized first-order proximity matrix, after which the spatial lag variable can be determined. The proximity matrix for the member and observer countries of the Shanghai Organization is obtained using the linear proximity method; this matrix is then multiplied by the vector of the dependent variable to obtain the spatial lag variable. First, to determine the nature of the panel data model, the F-Limer statistic in Table 3 and the Hausman test in Table 4 are reviewed. Table 3. F-Limer statistic to detect the use of panel data in estimating the equation.
(Table 3 reports the F-Limer statistic, probability, and result for Equations (4) and (5) under normal regression.)
In Table 3, the hypothesis H0 corresponds to pooled data and the hypothesis H1 to panel data. Since the probability value for both equations is less than 0.05, the panel data method is preferred to the pooled data method. Next, Hausman's chi-square test is used to decide between fixed effects and random effects; if the probability value is less than 0.1, then the equation must be estimated using fixed effects, as sketched below.
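A minimal sketch of the Hausman statistic itself, assuming hypothetical fixed-effects and random-effects estimates and covariance matrices (in practice these come from the fitted panel models):

```python
# Hedged sketch of the Hausman chi-square statistic comparing fixed-effects
# (FE) and random-effects (RE) estimates; all values below are placeholders.
import numpy as np
from scipy import stats

b_fe = np.array([0.52, -0.71])          # hypothetical FE coefficients
b_re = np.array([0.48, -0.64])          # hypothetical RE coefficients
V_fe = np.diag([0.010, 0.012])          # hypothetical FE covariance matrix
V_re = np.diag([0.006, 0.007])          # hypothetical RE covariance matrix

d = b_fe - b_re
H = d @ np.linalg.inv(V_fe - V_re) @ d  # Hausman statistic
p = stats.chi2.sf(H, df=len(d))
print(f"H = {H:.2f}, p = {p:.3f}")      # p < 0.1 -> use fixed effects (as in the text)
```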
Also, to ensure the suitability of the Spatial Durbin Model for Equations (4) and (5), two tests were used. As a result, the Spatial Durbin Model is not a suitable model for this research, and the Spatial Autoregressive Model was therefore estimated for the two equations. Estimation results for Equation (4) with conventional regression and spatial regression (Spatial Durbin Model and Spatial Autoregressive Model) are shown in Table 5 (results of research model estimation, Equation (4)). In Model 1-1, the coefficient of energy consumption is positive, which indicates that by increasing energy consumption through increased demand and productivity, green GDP will increase. But since fossil fuels are one of the sources of carbon dioxide emissions and also cause environmental degradation, the increase in green GDP is not significant. The coefficient of the labor force is positive and significant, indicating that labor has a positive impact on green GDP. The effect of capital on green production is positive and significant: more capital increases people's ability to use new techniques and environmentally friendly technologies, which improves green GDP. In Model 1-2, the coefficient of the spatial lag variable is positive and significant; as a result, the proximity of countries to each other has a positive effect on green GDP, and the model should be estimated using spatial econometrics. The spatial effects of economic growth and energy consumption are statistically justifiable for the SCO member states. In particular, energy consumption can differ across countries, but the existence of a neighboring country that is a major energy producer rich in energy resources can have a significant impact on a country's energy consumption. Also, a country with increasing green GDP growth can raise it in neighboring countries by sharing its knowledge and technology.
Estimation results for Equation (5) with normal regression and spatial regression (Spatial Durbin Model and Spatial Autoregressive Model) are shown in Table 6 (results of research model estimation, Equation (5)). According to the results in Table 6, in which green production is the dependent variable, all coefficients are significant. Fossil fuel consumption has a negative and significant effect on green GDP: increasing the consumption of fossil fuels by 100% decreases green production by 71.8%, because the consumption of such energy causes carbon dioxide emissions that affect green GDP. Consumption of renewable fuels and biofuels also shows a negative and significant effect on green GDP: increasing the consumption of this type of fuel by 100% leads to a 19.8% decrease in green GDP. Renewable and biofuels are relatively expensive, and the energy efficiency and CO2 savings from using them are variable; ultimately, they all compete with food production for land, water and other resources. For this reason, the impact of their consumption on green production is negative, and in these estimates the undesirable effects of renewable and biofuels on the environment outweigh their desirable effects. Changes in the region's ecosystem and population displacement caused by hydropower plants, which create noise, thermal, chemical and biological pollution, are some of the undesirable effects of renewable fuels on the environment. It is important to note that although both types of energy have a negative effect on green production, the negative effect of renewable and biofuel consumption is much smaller than that of fossil fuel consumption. As a result, renewable and biofuels, with less environmental degradation, can be substituted for fossil fuels. Today, in most global programs, the use of renewable and biofuels is recognized as one of the appropriate solutions for replacing fossil fuels and preventing air pollution while preserving fossil fuel reserves.
The positive effect of labor on green GDP can be attributed to the fact that labor, among the most important factors in the production process, does not create environmental damage in production: with a 100% increase in labor, green GDP increases by 54.7%. The capital stock coefficient is 0.49 and significant, indicating a positive effect of capital stock on green GDP. The coefficient of the spatial lag is positive and significant, so the model should be estimated using spatial econometrics.
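Because both equations are estimated in logarithms, these coefficients can be read as elasticities; a tiny sketch of this interpretation, using the point estimates quoted above:

```python
# Reading the log-log point estimates quoted above as elasticities:
# a 1% increase in a regressor changes green GDP by approximately beta percent.
coefs = {"fossil fuel": -0.718, "renewable/bio fuel": -0.198,
         "labor": 0.547, "capital": 0.49}
for name, beta in coefs.items():
    print(f"{name}: ~{beta:+.3f}% change in green GDP per 1% increase")
```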
Because the conventional regression method captures only direct effects, neglecting the indirect effects caused by spatial dependence may produce biased results; this affects not only the regression results but also the direct and indirect effects of the determinants. As a result, the proximity of countries to each other has a positive effect on green GDP, and the model should be estimated using spatial econometrics. Furthermore, the spatial spillover effects of renewable energy are statistically significant for the SCO member states. A possible reason is that the SCO countries can influence each other's renewable energy consumption by sharing technologies for the production and consumption of renewable energy.
The main purpose of this study was to investigate the impact of energy consumption on green GDP in the SCO member states based on spatial econometrics. The results of estimating the research models indicate that the consumption of fossil fuels has negative and significant effects, and the spatial lag has a positive and significant effect, on green GDP in the SCO member states. According to the findings, there is a two-way negative relationship between green GDP and energy consumption, which is consistent with [28]. Also, the consumption of renewable and biofuels has had a much smaller negative effect on green GDP than the consumption of fossil fuels. This is consistent with the studies of Marinko et al. [35], Hongxian [36] and Al-mulali [37], and indicates that although the effect of renewable fuel consumption on green GDP is negative, their consumption remains preferable to fossil fuel consumption. Also, Kohansal and ShayanMehr [46] have suggested that it is not necessary to reduce energy consumption to reduce carbon dioxide emissions, because this leads to a decline in GDP; rather, it is better to replace fossil and non-renewable fuels with clean and green fuels in order to achieve the two goals of continued economic growth and reduced carbon dioxide.
The coefficient of the spatial lag is positive and significant, consistent with other studies [34,43,46-48]. In other words, the proximity of countries has a positive effect on the growth of green GDP, showing that green GDP growth in neighboring countries of the Shanghai Cooperation Organization is one of the most important factors in green GDP growth in these countries. The SCO members are the largest producers and consumers of energy in the world. Hence, these countries can rely on their comparative advantage instead of imitating the economic structure of neighboring countries. Therefore, economic growth in neighboring countries can have positive effects on the economic growth of the local country. In addition, SCO members can collaborate in the development of green economies, because their creation benefits all countries by increasing economic growth and reducing CO2.
Conclusions
Considering that international organizations have become important actors in the world economy, with a direct and indirect impact on all global interactions and developments, regionalism and regional cooperation have provided a good platform for strengthening national economies. Different countries seek comprehensive development through regional cooperation and recognize the necessity of joining regional unions; in a way, the success of any country in playing an effective and constructive role in its surrounding region can be considered a prelude to its successful entry into the global arena. The global economy is also undergoing a process that is making different countries closer and more interdependent. Proximity has played an important role in trade and economic cooperation through the transfer of capital, technology, and energy and, consequently, in growth. The issue of reducing energy pollution is an axis that must be considered in the future of energy; in this context, the replacement of fossil fuels is an inevitable prospect. Despite the prominent role that fossil fuels have played in the economic growth and development of countries, they have created widespread problems in the national and international arenas. These problems have become so obvious that the need to turn to renewable energy is no longer hidden from anyone. Accordingly, many countries have turned to new energies that, in addition to supporting economic growth and development, also have characteristics such as sustainability. As a result, investment and planning in this area should be a priority.
According to the results, the hypothesis of spatial dependence in the model is confirmed. In other words, there is a positive spatial relationship between the observations related to the green production flow of Shanghai member countries. It is suggested that in similar studies the spatial dependence be considered to obtain more accurate estimates of the variables. Thus, when governments and international organizations make policy decisions, they must not only consider their own circumstances, but also the influences of neighboring countries, such as economic growth and energy structure. In other words, governments and international organizations must emphasize cooperation on policies to increase the efficiency of renewable energy, reduce carbon and maintain the quality of the environment. In particular, countries can share common costs to develop technologies for the production and consumption of renewable energy, the reduction of carbon, and the exchange of carbon-related technology. As a result, governments and the SCO Council must continue to facilitate the development of a green economy in order to achieve sustainable development.
Since the negative impact of renewable and bioenergy on green production has been smaller than that of fossil fuels, renewable and bioenergy should replace fossil fuels. Accordingly, given the SCO's extensive capacity in the field of renewable energy production, investment in this area should be a priority. These types of energy can reduce the environmental problems of this organization in achieving the goal of sustainable development. As a result, feed-in tariffs, subsidies or renewable energy certificates must be provided to promote the production of renewable energy. In addition, to reduce the cost of producing renewable energy, governments can adopt policies that encourage technological innovation in the renewable energy industry. Furthermore, in a situation where many countries currently face unemployment challenges, investing in new energy can not only reduce dependence on fossil fuels and mitigate many environmental problems, but can also increase public employment.
The cosmological evolution condition of the Planck constant in the varying speed of light models through adiabatic expansion
There have been various varying speed of light (VSL) models with one free parameter, $b$, to characterize the time variation of the speed of light as a function of the scale factor, $c = c_0a^{b/4}$, based on the expanding universe. One needs to induce cosmological evolutions of other physical constants and quantities, with different powers of the scale factor as functions of $b$, to satisfy all known local physics laws, including special relativity, thermodynamics, and electromagnetic force. These models should be based on the Friedmann-Lema\^{i}tre-Robertson-Walker metric satisfying the isotropic and homogeneous three-space known as the cosmological principle. Adiabaticity is a necessary condition for keeping homogeneity and isotropy, because a net energy flux would falsify the isotropy if there were a preferential energy flow direction, and it might also falsify homogeneity if the outward (inward) flow is isotropic. Thus, any VSL model based on the expanding universe should preserve an adiabatic expansion condition to be a viable model. We show that this condition specifies the cosmological evolution of the Planck constant as $\hbar = \hbar_0 a^{-b/4}$.
Introduction
The spatial distribution of galactic clusters shows apparent isotropy and statistical homogeneity on scales exceeding 250 million light years. Astronomers have estimated, from cosmological observations, the cosmic scale at which the transition from a lumpy, inhomogeneous Universe to a smoother, statistically homogeneous Universe takes place; some of these estimations can be found in [1,2] and the references therein. This observed characteristic is effectively described by the Friedmann-Lemaître-Robertson-Walker (FLRW) metric, which provides a suitable framework for understanding the underlying spacetime structure. Adiabatic expansion, in the context of cosmology, denotes a process in which there is no exchange of heat with the surroundings, and any change in the system's internal energy is determined solely by work. Adiabaticity plays a crucial role in maintaining the homogeneity and isotropy mandated by the cosmological principle (CP), which asserts the uniformity of the universe on large scales.
The first law of thermodynamics, known as the law of conservation of energy, can be expressed as dQ = dE + P dV , where dQ represents the heat flow into or out of a given volume. In the case of a perfectly homogeneous and isotropic Universe, the condition dQ = 0 holds true for any volume, indicating the absence of bulk heat flow (i.e., adiabaticity). In such a scenario, a homogeneous and isotropic expansion of the Universe does not contribute to an increase in the overall entropy of the Universe.
It follows that the time evolution of the cosmic microwave background (CMB) temperature for an adiabatic expansion predicts a linear increase with redshift, T(z) = T_0 (1 + z), where T_0 is the present value of the CMB temperature. There are several methods to measure this theoretical prediction at different redshifts. The CMB photons interact with the hot intracluster medium (ICM), producing the thermal Sunyaev-Zeldovich (tSZ) effect [3,4]. This inverse Compton scattering of CMB photons by electrons provides a direct measurement of the CMB temperature at redshift 0 ≤ z ≤ 1. One can also use the fact that CMB radiation excites the rotational lines of molecules and/or atoms in quasar (QSO) absorption line systems at redshifts 1.8 ≤ z ≤ 3.3, but this is model-dependent [5]. One can also use a massive starburst galaxy to measure T at higher redshifts [6]. Any deviation from the adiabatic expansion of the Universe can be expressed by a modification of the redshifting of the CMB temperature as T(z) = T_0 (1 + z)^{1−β} [7].
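As a small numerical illustration of this β-modified scaling (using the measured present-day value T_0 = 2.7255 K):

```python
# Sketch of the beta-modified CMB temperature scaling T(z) = T0*(1+z)**(1-beta).
def cmb_temperature(z, beta=0.0, T0=2.7255):
    """CMB temperature (K) at redshift z; beta = 0 recovers adiabatic expansion."""
    return T0 * (1.0 + z) ** (1.0 - beta)

for beta in (0.0, 0.01):
    print(f"beta = {beta}: T(z=1) = {cmb_temperature(1.0, beta):.4f} K")
```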
Several missions have provided data constraining the value of β, as shown in Table 1. The South Pole Telescope (SPT) obtains measurements of the spectrum of the Sunyaev-Zeldovich effect (SZe) using the ratio of the SZ signal at 95 and 150 GHz to constrain deviations from the expected adiabatic evolution of the CMB temperature [8]. They apply the method to a sample of 158 SPT-selected clusters covering the redshift range 0.05 < z < 1.35. Another SZe spectrum was obtained from Planck temperature maps at frequencies ranging from 70 to 353 GHz for a subset of 104 clusters from the Planck SZ cluster catalog. Using a Monte-Carlo Markov Chain approach and examining the SZ intensity change at different frequencies, the authors derived individual measurements of the CMB temperature for each cluster in the sample [9]. An analysis of 370 clusters from the largest SZ-selected cluster sample to date, collected by the Atacama Cosmology Telescope (ACT), derives new constraints on the deviation of the CMB temperature evolution from the standard model (SM, i.e., c = const) [10]. All these results are consistent with β = 0, which corresponds to adiabatic expansion.
There have been various VSL models [11,12,13,14,15,16,17,18,19]. One can obtain an extended theory satisfying both Lorentz invariance (LI) and the law of energy conservation even when the speed of light varies as a function of cosmic time, c(a). One then needs the cosmological evolutions of the other physical constants to satisfy LI, electromagnetism, and thermodynamics. The so-called minimally extended varying speed of light (meVSL) model satisfies these local physics laws [18,19]. Table 2 compares the cosmological evolutions of physical constants and quantities between the different VSL models [11,12,13,14,17,18,19]. We focus especially on the condition on the cosmological evolution of the Planck constant required to preserve the CP, in order to clarify the viability of VSL models. Any viable VSL model should also guarantee adiabatic expansion to preserve the CP, in addition to satisfying all known local physics laws. Both theory and observation minimally require this condition. Thus, we should exclude any cosmological evolution relations of physical quantities and constants in VSL models that violate the adiabatic expansion condition.
Testing the stability of fundamental couplings in nature offers valuable insights into new physics. Detecting variations in these couplings would be groundbreaking, while even improved null results place competitive constraints on various cosmological and particle physics theories [20,21,22,23,24]. Thus, there are plans and forecasts for studies using advanced facilities like ALMA [25,26], ESPRESSO [27], ELT [28], Euclid [29], CORE [30], etc.
In Sec. 2, we review how the cosmological evolution of the CMB temperature is derived from the adiabatic expansion. In Sec. 3, we investigate whether the adiabatic expansion condition holds in the various VSL models. If this condition is not satisfied, a model is not viable even if it satisfies all known local physics laws, because it violates the CP. We conclude in Sec. 4.
Review of Adiabatic expansion
The first law of thermodynamics is a statement of conservation of energy. In the momentarily comoving reference frame, a fluid element can exchange energy with its surroundings in two ways: by heat conduction (absorbing heat) and by work (doing work). Let $dQ$ be the heat energy gained, $E$ the total energy of the element, and $P\,dV$ the energy lost through work; then one can write
$$dQ = dE + P\,dV = d(\varepsilon V) + P\,dV \,. \quad (1)$$
Processes for which $dQ = 0$ are known as adiabatic processes. The adiabatic expansion of the Universe does not increase the entropy of the Universe. Because photons are the dominant contributors to the entropy, we consider
$$\varepsilon = \sigma_\gamma T^4 \,, \quad (2)$$
where $\sigma_\gamma$ is the so-called black-body constant, related to the Stefan-Boltzmann constant $\sigma_{\rm SB}$ as $\sigma_{\rm SB} = \sigma_\gamma c/4$. One can use Eq. (2) in Eq. (1), with $V = V_0 a^3$ and the radiation pressure $P = \varepsilon/3$, to obtain
$$dQ = 4 \sigma_\gamma T^3 a^3 V_0 \left( dT + T \frac{da}{a} \right) \,, \quad (3)$$
where $a$ is the scale factor. If we adopt the adiabatic expansion condition $dQ = 0$, then we obtain the time evolution of the CMB temperature
$$T = T_0\, a^{-1} = T_0 (1+z) \,, \quad (4)$$
where we use $a_0 = 1$ and $z$ is the cosmological redshift.
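As a cross-check of this derivation (an illustrative sketch, not from the paper), one can integrate the adiabatic condition symbolically and recover $T \propto 1/a$:

```python
import sympy as sp

a = sp.symbols("a", positive=True)
sigma, V0, T0 = sp.symbols("sigma V_0 T_0", positive=True)
T = sp.Function("T")

# Radiation: energy density eps = sigma*T^4, volume V = V0*a^3, pressure P = eps/3
eps = sigma * T(a) ** 4
V = V0 * a ** 3
P = eps / 3

# First law per unit change of a; adiabaticity sets dQ/da = 0, cf. Eq. (3)
dQ_da = sp.diff(eps * V, a) + P * sp.diff(V, a)

# dQ/da is linear in T'(a): solve for T'(a), then integrate with T(a=1) = T0
Tprime = sp.solve(sp.Eq(dQ_da, 0), sp.diff(T(a), a))[0]  # -> -T(a)/a
sol = sp.dsolve(sp.Eq(sp.diff(T(a), a), Tprime), T(a), ics={T(1): T0})
print(sol)  # Eq(T(a), T_0/a), i.e. T = T_0 (1 + z), cf. Eq. (4)
```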
Adiabatic expansion in VSL models
We can repeat the consideration of the previous section while including the time evolution of the physical constants. In that case the black-body constant can depend on cosmic time, and the adiabatic expansion condition, Eq. (3) with $dQ = 0$, generalizes to
$$\frac{dT}{T} = -\frac{da}{a} - \frac{1}{4}\frac{d\sigma_\gamma}{\sigma_\gamma} \,, \quad \textrm{i.e.,} \quad \sigma_\gamma T^4 a^4 = \textrm{const} \,. \quad (5)$$
One can define a different time-dependent black-body constant for each VSL model. We investigate these in this section.
mVSL
A VSL model that proposes a change of the speed of light alone, without allowing variations of the other physical constants, is called the minimal VSL (mVSL) [31,32,33]. Assuming $c = c_0 a^{b/4}$, the black-body constant $\sigma_\gamma = \pi^2 k_{\rm B}^4 / (15 \hbar^3 c^3)$ becomes
$$\sigma_\gamma = \sigma_{\gamma 0}\, a^{-3b/4} \,, \quad (6)$$
where the Boltzmann constant and the Planck constant do not vary. One obtains the cosmological evolution of the CMB temperature by plugging Eq. (6) into Eq. (5):
$$T(z) = T_0 (1+z)^{1 - 3b/16} \,. \quad (7)$$
Thus, the redshifting of the CMB temperature is modified in this model.
meVSL
The cosmological evolutions of the physical constants and quantities of the meVSL model are summarized in Table 2. With these, all known local physics laws, including special relativity, thermodynamics, and electromagnetism, are consistently satisfied. In this model, with $c = c_0 a^{b/4}$ and $\hbar = \hbar_0 a^{-b/4}$, the combination $\hbar^3 c^3 \propto a^{-3b/4}\, a^{3b/4} = 1$ is constant, so the black-body constant is the same as the SM one,
$$\sigma_\gamma = \frac{\pi^2 k_{\rm B}^4}{15 \hbar^3 c^3} = \sigma_{\gamma 0} \,. \quad (8)$$
Thus, the cosmological redshift of the CMB temperature in the meVSL model is also consistent with that of the SM,
$$T(z) = T_0 (1+z) \,. \quad (9)$$
Other VSL
One can also obtain the different time-dependent black-body constants for the other VSL models. For models with $c = c_0 a^{n}$ but a constant Planck constant [12,13,17], one finds
$$\sigma_\gamma = \sigma_{\gamma 0}\, a^{-3n} \,. \quad (10)$$
This causes a deviation of the CMB temperature from the linear increase,
$$T(z) = T_0 (1+z)^{1 - 3n/4} \,. \quad (11)$$
Thus, if one investigates a VSL model without considering the accompanying variation of the Planck constant, one obtains, for an adiabatic expansion, a time evolution of the CMB temperature different from that of the SM.
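For illustration (a sketch, not from the paper), one can evaluate Eqs. (7), (9), and (11) to compare the CMB temperature histories of the three cases for sample parameter values:

```python
import numpy as np

T0 = 2.7255  # present CMB temperature in K

def T_meVSL(z):
    # Eq. (9): identical to the standard adiabatic scaling
    return T0 * (1 + z)

def T_mVSL(z, b):
    # Eq. (7): only c varies, c = c0 * a**(b/4)
    return T0 * (1 + z) ** (1 - 3 * b / 16)

def T_const_hbar(z, n):
    # Eq. (11): c = c0 * a**n with the Planck constant held fixed
    return T0 * (1 + z) ** (1 - 3 * n / 4)

z = np.array([0.5, 1.0, 2.0])
print(T_meVSL(z))              # [4.088, 5.451, 8.177] K: the linear law
print(T_mVSL(z, b=0.1))        # slightly below the adiabatic values
print(T_const_hbar(z, n=0.1))  # a larger deviation for the same exponent
```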
Conclusion
In the context of the expanding Universe, cosmological redshift leads to differences in dimensional quantities, such as the wavelength and temperature of photons, between the observed and emitted states. As a result, considering the constancy or variability of dimensional constants, rather than just dimensionless ones, becomes physically meaningful in the context of cosmic expansion. One can thus investigate viable varying speed of light models based on the RW metric, which should satisfy the isotropy and homogeneity of the three-space known as the cosmological principle. Adiabaticity is a necessary condition for maintaining the cosmological principle, and from it we must specify the cosmological evolution of the Planck constant for a given form of the speed of light. The analysis relies on the assumption of the cosmological principle, which posits the statistical homogeneity and isotropy of the Universe. Recent observations have suggested potential deviations from the expectations of the cosmological principle [34]. In that case, the entire paradigm of the Friedmann-Lemaître-Robertson-Walker metric, together with the validity of the adiabatic expansion, would need to be reconsidered.
Rapid subacute myelopathy following kidney transplantation from HTLV-1 donors: role of immunosuppressors and failure of antiretrovirals
Two kidney transplant recipients from a single donor became infected with HTLV-1 (human T-lymphotropic virus type 1) in Spain. One developed myelopathy 8 months following surgery despite early prescription of antiretroviral therapy. The allograft was removed from the second recipient at month 8 due to rejection and immunosuppressors discontinued. To date, 3 years later, this patient remains infected but asymptomatic. HTLV-1 infection was recognized retrospectively in the donor, a native Spaniard who had sex partners from endemic regions. Our findings call for a reappraisal of screening policies on donor–recipient organ transplantation. Based on the high risk of disease development and the large flux of persons from HTLV-1 endemic regions, pre-transplant HTLV-1 testing should be mandatory in Spain.
Introduction
Human T-lymphotropic virus type 1 (HTLV-1) infection is a neglected disease despite there being 10-15 million people infected worldwide. 1,2 As in HIV infection, chronicity uniformly occurs following acute HTLV-1 acquisition and there are no self-limited infections. However, less than 10% of HTLV-1 carriers develop clinical manifestations over their lifetime, including two life-threatening illnesses, namely a subacute invalidating myelopathy known as tropical spastic paraparesis (TSP) 3 and an acute T-cell leukemia/lymphoma (ATLL). 4 HTLV-1 is efficiently transmitted perinatally (breastfeeding), sexually (more from male to female) and parenterally (transfusions, injection drug use and transplants). 2 The diagnosis is based on the demonstration of specific serum HTLV-1 antibodies. A high HTLV-1 proviral load predicts the risk of disease development 5 and sexual transmission. 6 To date there is neither a prophylactic vaccine nor effective antiviral therapy. 7

Spain is one of the world's leading countries in performing solid organ transplants. The annual figure is steadily rising and currently stands at nearly 5300 transplants per year. To date, more than 110,000 individuals have benefited from organ donations. More than 60% are kidney transplants, with liver being the second most frequent allograft. Heart, lung and pancreas are transplanted less frequently. On the other hand, of the nearly 47 million people currently living in Spain, around 4.5 million are foreigners. In addition, the country is a frequent destination for tourists, with 75 million visitors during 2016, more than 10% of them from HTLV-1 endemic regions. 8

Rapid-onset subacute myelopathy and T lymphomas associated with HTLV-1 infection have both been reported following solid organ transplantation. At least three different scenarios have been described for HTLV-1 acquisition and disease in the transplantation setting: infection from the organ donor, contaminated blood transfusions during surgery, and baseline carriage of HTLV-1 by the recipient. 9,10 It seems that the immunosuppression used to avoid organ rejection (e.g. corticoids, tacrolimus, cyclosporine, mycophenolate, etc.) plays a major role in the frequent and rapid-onset disease development in this population, particularly for HTLV-1 associated myelopathy (HAM). 9,10

Case reports
Patient 1
During 2015, a 54-year-old woman underwent kidney transplantation in Spain. She was a white native Spaniard and denied any risk factors for HTLV-1 infection. To date, HTLV testing of blood and tissues in Spain is only recommended for 'donors that came from or live in highly endemic regions, or have either sex partners or parents from those regions'. 11 However, many transplantation centers perform HTLV screening in all donor-recipients when possible. In this case, the results of the donor's HTLV testing were received after surgery, indicating reactivity for HTLV antibodies in a commercial enzyme immunoassay. No organs other than the two kidneys were transplanted from this donor.
Positivity for HTLV-1 in the first kidney recipient was confirmed thereafter by immunoblot. Soon after this became known, she was informed, and antiretroviral therapy with zidovudine and lamivudine plus raltegravir was introduced within the first week following transplantation. Antiretrovirals were used for at least 18 months. In a retrospective interview of the deceased donor's relatives, it became apparent that he had most likely acquired HTLV-1 from a Brazilian heterosexual partner.
Eight months later the recipient developed difficulty walking, along with unsteady gait, followed by progressive lower extremity weakness, all suggestive of HAM/TSP. 12 Her HTLV-1 proviral load had been high in sequential longitudinal samples collected soon after transplantation, in the range of 320-350 HTLV-1 DNA copies per 10,000 peripheral blood mononuclear cells/ml. At the time of presentation of the initial neurological symptoms, the proviral load in the cerebrospinal fluid was also very high (2340 HTLV-1 DNA copies per 10,000 mononuclear cells/ml).
Patient 2
A 65-year-old male was the second kidney transplant recipient from the same infected donor. Like the first recipient, he was a white native Spaniard who denied any risk factors for HTLV-1 infection. He also became infected with HTLV-1 after transplantation but, to date, 3 years later, has not developed any disease. He received antiretroviral drugs, including zidovudine and lamivudine plus raltegravir, during the first 2 months following transplantation. It should be highlighted that immunosuppressive therapy (mycophenolate and tacrolimus) dosing had to be reduced from the second month due to hematological toxicity. Finally, rejection of the allograft forced its removal 8 months following transplantation. The patient is no longer on immunosuppressant treatment and is currently undergoing hemodialysis. His proviral load has always been low, in the range of 25 HTLV-1 DNA copies per 10,000 peripheral blood mononuclear cells/ml.
Discussion
Similar cases of HAM/TSP shortly after transplantation have been reported in the literature (Table 1). 13-25 Myelopathy was originally reported in a heart transplant recipient in France following HTLV-1 acquisition from contaminated blood transfusions during the surgery. 13 In Spain, two kidney recipients and one liver transplant recipient who received organs from a single deceased HTLV-1 donor developed HAM/TSP within 2 years. 16 More recently, in the USA, two kidney transplant recipients from separate infected donors developed HAM/TSP. 19,20 In Japan, where HTLV-1 is endemic, cases of HAM/TSP have been reported following either kidney or liver transplants. 14,15,17,22-24 More interesting, however, is the diagnosis of ATLL shortly following solid organ transplants, mostly among native Japanese recipients already infected with HTLV-1. 28,29 Similar cases have been reported elsewhere. 26,27 This is somewhat unique, since most post-transplant lymphoproliferative disorders are of B lymphocytes and linked to Epstein-Barr virus infection. Anecdotally, in Germany, transplanted kidneys and a liver resulted in primary cutaneous T-cell lymphoma in at least two patients infected from a single HTLV-1 donor. 30 Table 1 summarizes the major features of the ATLL cases reported so far following solid organ transplantation.

Table 1. Reports of HTLV-1 associated diseases in solid organ transplant recipients (columns: country, year, age in years, gender, organ, interval, reference). HAM/TSP developed in recipients negative for HTLV-1 who received organs from infected donors or, more occasionally, contaminated blood transfusions during surgery. In contrast, most recipients who developed ATLL were HTLV-1 positive before transplantation and immunosuppression.

These findings call into question the current view that anti-HTLV screening of donated organs is not needed or is only recommended when there is suspicion. This opinion is based on the assumption that HTLV-1 associated diseases will develop in only a small proportion of carriers and that progression to disease is slow compared with the average human lifespan, and therefore poses no major threat to public health. 31 We postulate that, in the transplant setting, the very high risk of transmission and the high rate and rapidity of HTLV-1 disease development are the result of immunosuppressive therapy. Interestingly, the predominant clinical manifestation seems to diverge depending on whether the allograft derives from an HTLV-1 infected donor or it is the recipient who actually carries the virus (Figure 1). When the recipient is already infected, transplantation might lead more frequently to ATLL, whereas when HTLV-1 is acquired from the donor, the recipient's greater risk is for HAM/TSP.
HTLV-1 associated myelopathy/tropical spastic paraparesis (HAM/TSP)
In a recent large retrospective study conducted in Japan, including over 180 kidney transplants that involved HTLV-1 in either donors or recipients, 4 out of 10 HTLV-negative recipients who became infected following transplantation from a positive donor developed HAM/TSP within 4 years. In contrast, only 1 of 59 HTLV-1 positive recipients developed complications: a patient who developed both HAM/TSP and ATLL at 8 and 10 years, respectively, after transplantation. 32 More than 35 years after the discovery of HTLV-1, 33 donor/recipient screening for the virus remains sporadic or nonexistent in most countries. 34 In 1993, the American CDC recommended that persons infected with HTLV be counseled 'not to donate blood, semen, body organs or other tissues'. 35 Nevertheless, in 2009 the recommendation for universal HTLV screening of deceased organ donors was dropped in the USA because of the perception of low HTLV-1 prevalence and the low positive predictive value of serologic screening tests. 36 In line with this decision, most international transplant society guidelines currently do not provide any advice on HTLV-1 screening and the use of donated organs.
High rates of HTLV-1 have been found in specific groups in nonendemic regions of North America 37,38 and Europe, 39 and decades of migration/immigration and tourism/traveling have altered the demographics of many Western countries. For example, the ongoing refugee migration from the Middle East and Africa could rapidly change the prevalence of HTLV-1 in many European countries. For a while this has been the case with Latin Americans in Spain, given the large influx of immigration facilitated by strong cultural and ancestral links. 8 Organ procurement organizations and transplant programs should determine local prevalence to guide HTLV-1 screening efforts. 37 Targeted screening of potential high-risk living (and deceased) donors for HTLV-1 has been recommended by some experts. 19 Suggestions have also been put forward for national or international registries of all HTLV-1 affected transplants. 10 Alerted to the dangers of rapid-onset TSP following transplantation of HTLV-1 infected organs, Japan began HTLV-1 screening of all kidney donations in 2014. Similarly, the UK issued new transplantation guidance on HTLV-1 screening of cadaveric solid organs in 2011. Finally, the Global Virus Network has recently called for more systematic HTLV-1 screening before solid organ transplantation everywhere. 31 It has been highlighted that 'whereas not screening donors . . . for HIV infection would be considered unethical, the same is not the case for HTLV-1, another human retrovirus, where risk assessments are made based on the predicted prevalence of cases among donors, the probable risk of transmission, and the subsequent likelihood of disease'. 10 Yet, those risk estimates are based on inadequate national epidemiology, ignoring the changing demographics of many countries, 1,2 and a poor understanding of the risk of transplant-acquired HTLV-1 disease. 9,10 The cost of screening for HTLV-1 is small in comparison with the cost of post-transplant illness and/or death associated with TSP or ATLL following HTLV-1 infection. 40 For all these reasons, the American and European CDCs, along with other health agencies, should urgently update their policy recommendations on HTLV-1 screening in organ transplantation. In the meantime, some transplant centers around the world, including a few in Spain, have already implemented 'rapid' (or 'urgent') HTLV-1 testing of all deceased organ donors. In parallel, diagnostic companies should improve the specificity of HTLV-1 screening tests and design rapid tools (i.e., point-of-care (POC) tests) to minimize unwanted organ discard.
How Is Cultural Intelligence Related to Human Behavior?
Cultural intelligence is an individual's ability to recognize, understand, and adapt to cross-cultural contexts, thereby developing his or her own capacity. Hence, this study explores the relationship between cultural intelligence and interpersonal communication, psychological capital (PsyCap), and organizational citizenship behavior (OCB) among teachers in Indonesia and investigates the possibility of finding relevant new models. A Likert questionnaire was used to collect the research data. The research participants were 450 Indonesian junior high school teachers selected by accidental sampling. Structural equation modeling (SEM) was used for data analysis, supported by descriptive statistics and correlational matrices. The results indicate that cultural intelligence is significantly related to teachers' interpersonal communication, PsyCap, and OCB. Additionally, this study also produces a new model of the relationship between cultural intelligence and a teacher's OCB, mediated by interpersonal communication and PsyCap. Therefore, researchers and practitioners can discuss and adopt this new empirical model to increase cultural intelligence.
Introduction
The benefits of intelligence for life should not be doubted. Various studies have shown that intelligence contributes to individuals' livelihood and has implications for groups and organizations. One of them is cultural intelligence. Scholars have claimed that cultural intelligence is a measure of intercultural competence (Ang and Van Dyne 2008;Matsumoto and Hwang 2013;Leung et al. 2014;Yari et al. 2020), including work-related outcomes, such as job satisfaction, work adjustment, job performance (Schlaegel et al. 2021;Zhao et al. 2020;Akhal and Liu 2019;Baluku et al. 2019;Rockstuhl and Van Dyne 2018;Henderson et al. 2018), and knowledge sharing (Stoermer et al. 2021). The concept of cultural intelligence is based on the idea that acting intelligently within diverse cultures may require more than general intelligence and its subfactors (Ang et al. 2020). Conceptually, cultural intelligence is an individual's capability to function effectively in culturally diverse contexts (Earley and Ang 2003;Ang and Van Dyne 2008). This definition of cultural intelligence-as a capability-emphasizes a person's potential to be effective across a wide range of intercultural contexts (Ang et al. 2015). Yari et al. (2020) define cultural intelligence as the ability to succeed in complex cross-cultural environments through knowledge or cognition, motivation, and behaviors. Cultural intelligence also refers to being skilled and flexible regarding the understanding of a culture, learning more about it from ongoing interactions, and gradually reshaping one's thoughts to be more sympathetic to the culture and behaviors of others, so as to be more skilled and appropriate when interacting with them (Thomas and Inkson 2017). Cultural intelligence also reflects an individual's ability to help in recognizing the specificities of many cultures, and to understand and adapt to cross-cultural contexts (Berraies 2020). Sternberg et al. (2021) describe cultural intelligence as one's ability to adapt when confronted with problems arising in interactions with people of diverse cultures. However, cultural intelligence also draws upon abstract analytical abilities, as one must analyze situations that, unlike many practical problems, are rather removed from one's everyday experience. Cultural intelligence draws on creative abilities since the problems one confronts are more novel than one would confront in typical tests as well as life situations. Cultural intelligence includes the ability of individuals and organizations to adapt (Solomon and Steyn 2017) to an external environment consisting of various cultures. Therefore, cultural intelligence has a strategic position in social relations involving many people from various cultural backgrounds. For example, in learning activities, teachers' cultural intelligence is required to understand the condition of students from various cultural backgrounds and then adapt and position themselves appropriately among students. Understanding, adapting, and positioning oneself is important, since culture has been shown to influence one's perception and cognition (Kastanakis and Voyer 2014). Moreover, human behavior is strongly influenced by a content-related bias favoring culture (Cronk 2017). In fact, people tend to behave preferentially towards people with the same cultural background. Although this is not an error from a specific cultural perspective, it can hinder social relations with people from different cultural backgrounds. 
Overall, this tendency can disrupt cooperation, collaboration, and partnership, and stimulate interpersonal or intergroup conflicts. Hence, an understanding of various cultures is an essential skill for a person to increase his or her capacity, including interpersonal communication, PsyCap, and OCB, in order to adapt when interacting with people from different cultural backgrounds. Furthermore, this skill is beneficial for achieving life goals with a greater chance of success. Based on these issues, this research focuses on efforts to reveal the influence of cultural intelligence on interpersonal communication, PsyCap, and OCB, which are much needed by teachers in their teaching tasks. Furthermore, this study investigates the relationship between interpersonal communication and PsyCap with OCB, to find novel and relevant models regarding the relationship between cultural intelligence and OCB, mediated by interpersonal communication and PsyCap. This is crucial for teachers in Indonesia, who have quite diverse cultural backgrounds. Specifically, Indonesia has 1340 ethnic groups, each of which has its own unique culture. The Javanese are the largest ethnic group making up 41% of the population. Thus, teachers in Indonesia have diverse cultural backgrounds. Some teachers teach in schools with the same cultural base in their respective regions, while others, in different regions, teach in schools with students from different cultures as well as students with the same or different cultural backgrounds. In Indonesia, many teachers study at universities in other regions with different cultural backgrounds. This setting causes teachers to be involved and become part of the acculturation and cultural assimilation process that can impact their perspective on culture as the core of cultural intelligence. Ang et al. (2020) mention the four-factor model of cultural intelligence: metacognitive, cognitive, motivational, and behavioral.
(1) Metacognitive consists of planning, awareness, and checking; (2) cognitive consists of culture-general knowledge and context-specific knowledge; (3) motivational consists of intrinsic interest, extrinsic interest, and self-efficacy to adjust; and (4) behavioral consists of verbal behavior, nonverbal behavior, and speech acts. Meanwhile, Thomas and Inkson (2017) proposed three indicators of cultural intelligence: knowledge, mindfulness, and behavior skills. Knowledge refers to individuals' knowledge about cultures, including what culture is, how cultures are different, and how culture influences behavior skills. Mindfulness is related to being open-minded and using the context of the situation to support understanding. Behavior skills relate to demonstrating appropriate behaviors or social skills in new cultural settings. Researchers use the four-factor model from Ang et al. (2020) widely as an indicator for measuring cultural intelligence constructs. However, this study used three indicators: knowledge, mindfulness, and behavior skills, from Thomas and Inkson (2017), with the rationale that they are more suitable for Indonesian teachers, who have very diverse cultural backgrounds.
Hypothesis (H1). Cultural intelligence has a relationship with the teacher's interpersonal communication.
Cultural Intelligence and PsyCap
Cultural intelligence is also related to PsyCap. Previous studies showed that cultural intelligence significantly affects PsyCap (Chen and Chen 2018;Imran and Shahnawaz 2020;Jiony et al. 2021). This indicates that cultural intelligence, reflected in knowledge, mindfulness, and behavior skills regarding culture, can enhance PsyCap. PsyCap is a basic competency that plays a vital role in employees' handling of awareness and achieving efficiency at work (Cavus and Gokcen 2015). According to Goertzen and Whitaker (2015), PsyCap offers a framework to understand human assets that can be useful in actualizing the human potential. Luthans and Youssef-Morgan (2017) state that PsyCap goes beyond the human capital theory, which answers the question "what do we know?", and the social capital theory, which answers "who do we know?" It also answers the ultimate question that all individuals pose themselves: "who are we?" PsyCap refers to an individual's positive and developmental state characterized by self-efficacy, optimism, hope, and resilience (Luthans and Youssef 2004;Luthans and Youssef-Morgan 2017). Self-efficacy refers to individuals' belief in their capabilities to complete tasks. Optimism is based on the clear appraisal and judgment of current situations and understanding what can be done in such situations. Hope is the individuals' belief in their determination to achieve their goals and find possible pathways to overcome difficulties. Resilience refers to the ability to recover from adversity or setbacks, proactively rising to challenges, and adapting to an ever-changing business environment (Tang et al. 2019). Several studies across various fields and contexts have indicated that PsyCap contributes to an individual's life and career. In the academic field, Carmona-Halty et al. (2019) reported that PsyCap determines academic performance. Recent studies by Imran and Shahnawaz (2020) also revealed that PsyCap influences performance. Moreover, recent studies also claim that PsyCap is significantly related to career outcomes (Baluku et al. 2021) and innovative behavior (Slatten et al. 2020;Wang et al. 2021). These previous studies confirm that PsyCap is significantly important for individuals and organizations, especially teachers in a school organization context. Hence, it is crucial that this topic is researched and extensively discussed, in order to explore and analyze the factors that influence it, including cultural intelligence. The relationship between cultural intelligence and PsyCap can be contextualized as teachers with high knowledge and mindfulness regarding cultures being more self-efficacious, optimistic, and resilient. For example, a teacher with high knowledge of culture(s), including what culture is, how cultures are different, and how culture influences behavior and skills, will have high self-efficacy, which will be reflected in their belief in their capabilities to complete tasks. Likewise, teachers with high mindfulness, marked by being open-minded and using situational contexts to support their understanding, tend to have increased optimism (manifested in the clear appraisal and judgment of current situations and an understanding of what can be done in such situations) and high resilience to recover from adversity or setbacks, and can proactively rise to challenges and adapt to ever-changing environments. Therefore, the following hypothesis can be formulated:

Hypothesis (H2). Cultural intelligence has a relationship with a teacher's PsyCap.
Cultural Intelligence and OCB
Several previous studies have also revealed that cultural intelligence is not only related to interpersonal communication and PsyCap but also to OCB. For instance, Narayanan and Nirmala (2016) and Shafieihassanabadi and Pourrashidi (2019) claimed that cultural intelligence affects OCB. Other studies by Mehdipour et al. (2019) and Kadam et al. (2021) also indicated that cultural intelligence had a positive impact on OCB. However, a contrary study of the relationship between OCB and cultural intelligence conducted by Popescu et al. (2018) indicated that some OCB components influence cultural intelligence, although the intensity of the influence was weak, so OCB cannot be considered a predictor of cultural intelligence. Taken together, these studies show that cultural intelligence is an essential antecedent of OCB; therefore, cultural intelligence dimensions, such as knowledge, mindfulness, and behavior skills regarding culture, can impact OCB. According to Cascio (2016), OCB refers to discretionary behaviors performed outside of one's formal role that can help other employees perform their job or show support and conscientiousness towards an organization. OCB consists of employee behavior that goes beyond the call of duty and exceeds formal job duties, such as cooperation and helpfulness to others that supports the organization's social and psychological context; it is often necessary for organizational survival (Kreitner and Kinicki 2013;McShane and Von Glinow 2020). Meanwhile, Schultz and Schultz (2016) described OCB as putting forth effort, i.e., doing more for one's employer than the minimum job requirements, including taking on additional assignments, voluntarily assisting others at work, keeping up to date with the developments in one's field or profession, following company rules (even when no one is looking), promoting and protecting the organization, and maintaining a positive attitude and tolerating work inconveniences. Organ et al. (2006) mention OCB's indicators: altruism, conscientiousness, sportsmanship, courtesy, and civic virtue. Altruism is related to helping others who may be having difficulties related to organizational or personal tasks. Conscientiousness refers to an effort to exceed the organization's expectations. Sportsmanship is about tolerating conditions that are less than ideal in the organization. Courtesy is about maintaining good relations with others to avoid interpersonal problems. Finally, civic virtue refers to being responsible for organizational life. The relationship between cultural intelligence and OCB can be judged, for example, from the way in which mindfulness as a cultural intelligence indicator, reflected in one's open-mindedness and use of a situational context to support understanding, can potentially stimulate some OCB indicators, such as conscientiousness, courtesy, and civic virtue. Furthermore, behavior skills, the ability to demonstrate appropriate behaviors or social skills in new cultural settings, also contribute to increasing altruism and sportsmanship in an OCB context. Recently, OCB has received greater attention among academics, researchers, and practitioners, since it is vital for individuals, e.g., teachers in a school organization context. Several studies have indicated the power of OCB. Widodo and Gustari (2020), for instance, showed that OCB influences teachers' innovative behavior in an educational context. Other studies have indicated that OCB affects employees' productivity (Barsulai et al. 2019), performance (Hidayah and Harnoto 2018), and organizational performance (Aval et al. 2017). This means that OCB is essential for teachers; therefore, it requires urgent investigation, primarily in relation to cultural intelligence. Based on the arguments and above-mentioned studies, the following hypothesis can be formulated:

Hypothesis (H3). Cultural intelligence has a relationship with a teacher's OCB.
The Relationship between Interpersonal Communication and PsyCap with OCB
Several previous studies-throughout many countries across various fields and contextshave shown that interpersonal communication, apart from being influenced by cultural intelligence, also affects OCB. For example, studies conducted by Ezerman and Sintaasih (2018), Putra (2018), Herfina and Wulandari (2019), Nofia et al. (2019), and Syamsudin and Retnowati (2019) claimed that interpersonal communication is related to OCB. This indicates that adequate interpersonal communication skills reflected in openness, empathy, supportiveness, positiveness, and equality (DeVito 2016) can lead to a high OCB, which is manifested as altruism, conscientiousness, sportsmanship, courtesy, and civic virtue (Organ et al. 2006). For example, teachers with high empathy, reflected by their sense of what other people feel, and experiencing what others are experiencing from their perspective (without losing one's identity), will tend to have high altruism, which is reflected in their willingness to help others facing difficulties related to organizational and personal tasks. Likewise, teachers who have strong supportiveness-that is, who have an uncertain and open-minded attitude and who are willing to listen to opposing points of view, change their position, and assist in creating a supportive environment-can support their sportsmanship. This provides tolerance in less-than-ideal conditions within an organization. Accordingly, interpersonal communication indicators can influence OCB indicators.
In addition, other studies have reported that PsyCap, apart from being influenced by cultural intelligence, can also affect OCB (e.g., Kong et al. 2018;Da et al. 2021). This confirms that an adequate PsyCap, reflected in self-efficacy, optimism, hope, and resilience, can stimulate OCB, consisting of altruism, conscientiousness, sportsmanship, courtesy, and civic virtue (Organ et al. 2006). For example, teachers with high self-efficacy, reflected in a strong belief in their capacity to complete school tasks, tend to possess high conscientiousness, which can exceed a school's expectations. Likewise, teachers with high hope, marked by a strong belief in their determination to achieve their goals and find possible pathways to overcome difficulties, also tend to have high civic virtue in the form of high responsibility for organizational life. Based on these studies and the above-mentioned arguments, the following hypotheses can be formulated:

Hypothesis (H4). Interpersonal communication has a relationship with a teacher's OCB.

Hypothesis (H5). PsyCap has a relationship with a teacher's OCB.
The Current Study
The current study focuses on investigating the relationship between cultural intelligence and human behavior-specifically, interpersonal communication, PsyCap, and OCB. Moreover, it focuses on the relationship between interpersonal communication and PsyCap on OCB and seeks novel and relevant models related to mediating the role of interpersonal communication and PsyCap on the relationship between cultural intelligence and OCB. To achieve this goal, using SEM analysis and research participants among teachers in Indonesia, we seek to confirm results from previous studies, which are used as the basis for building the research hypotheses of this study-namely the influence of cultural intelligence on interpersonal communication (Bostan et al. 2021), PsyCap (Chen and Chen 2018), and OCB (Shafieihassanabadi and Pourrashidi 2019). Then, we uncover the influence of interpersonal communication on OCB (Nofia et al. 2019;Syamsudin and Retnowati 2019) and the influence of PsyCap on OCB (Chamisa et al. 2020;Rodríguez-Cifuentes et al. 2020;Da et al. 2021). Finally, we hope to find the relationship between cultural intelligence and OCB, mediated by interpersonal communication and PsyCap, in order to build novel and relevant models from our research results.
Participants
The research participants (sample) were 450 junior high-school teachers spread across eight provinces in Indonesia, namely Jakarta, Banten, West Java, Central Java, Riau Islands, Lampung, East Kalimantan, and East Nusa Tenggara. The teachers themselves originated from 34 provinces of Indonesia and represented at least eight main ethnic groups (Betawi, Javanese, Sundanese, Bedouin, Kutai, Malay, Lampung, and Alor), forming the basis of cultural intelligence. From each province, four schools (two public and two private) were selected from four different districts; therefore, a total of 32 schools were included in this study. In total, 825 teachers from the 32 schools formed the sample frame (population). Of these, 450 teachers voluntarily filled out the complete questionnaire during the research (Widodo 2019) and thus became the research participants. Their profiles are presented in Table 1. The majority of participants were female (68.67%), aged 26-35 years (35.33%), held a bachelor's degree (90%), were married (80%), and had teaching experience of ≥16 years (31.11%).
Procedure and Materials
This research uses a quantitative approach with a survey method. A questionnaire employing a five-option Likert scale (strongly disagree, disagree, neutral, agree, and strongly agree) was used to collect the data. Since this research took place during the COVID-19 pandemic, which required all participants and researchers to comply with health protocols, especially social distancing, the survey was conducted online using Google Forms, shared via the WhatsApp application (on the teacher group WhatsApp network). Cultural intelligence consists of six items with corrected item-total correlation coefficients between .445 and .798 and an alpha coefficient of .828. Interpersonal communication consists of eight items with corrected item-total correlation coefficients between .458 and .712 and an alpha coefficient of .830. PsyCap consists of 12 items with corrected item-total correlation coefficients between .530 and .869 and an alpha coefficient of .920. OCB consists of 10 items with corrected item-total correlation coefficients between .497 and .765 and an alpha coefficient of .911. All items have corrected item-total correlation coefficients of >.361, and all variables have an alpha coefficient of >.70; therefore, the instrument is valid and reliable (Van Griethuijsen et al. 2015;Hair et al. 2018).
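For reference, the reported reliability statistics can be computed as follows. This is a minimal sketch (not the authors' code), using simulated responses in place of the survey data:

```python
import numpy as np
import pandas as pd

def item_analysis(items: pd.DataFrame):
    """Cronbach's alpha and corrected item-total correlations.

    `items` holds one column per Likert item (rows = respondents).
    """
    k = items.shape[1]
    alpha = (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))
    # Corrected item-total correlation: each item vs. the sum of the other items
    citc = {col: items[col].corr(items.drop(columns=col).sum(axis=1)) for col in items.columns}
    return alpha, citc

# Simulated responses for a hypothetical 6-item scale (300 respondents)
rng = np.random.default_rng(0)
latent = rng.normal(size=300)
data = pd.DataFrame({f"item{i}": latent + rng.normal(size=300) for i in range(1, 7)})

alpha, citc = item_analysis(data)
print(round(alpha, 3))  # acceptable if > .70 (Hair et al. 2018)
print({k: round(v, 3) for k, v in citc.items()})  # acceptable if > .361
```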
Data Analysis
The data analysis was conducted by structural equation modeling (SEM) supported by correlational and descriptive statistics. The significance of the path coefficient of direct correlation was tested using the Student's t-test, while the Sobel test (Z) was used for the path coefficient of indirect correlation (Abu-Bader and Jones 2021). Descriptive and correlational analyses were performed by SPSS version 26, while SEM analysis was performed by LISREL 8.80.
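The Sobel statistic referred to above can be sketched as follows (not the authors' code; the standard errors shown are hypothetical placeholders, since the text reports only path coefficients and t-values):

```python
import math

def sobel(a, se_a, b, se_b):
    """Sobel test for an indirect effect a*b.

    a, se_a: path X -> M and its standard error;
    b, se_b: path M -> Y and its standard error.
    """
    indirect = a * b
    se_indirect = math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    return indirect, indirect / se_indirect  # effect and its Z statistic

# Illustration with the reported paths CI -> PsyCap -> OCB and assumed SEs
indirect, z = sobel(a=0.77, se_a=0.05, b=0.48, se_b=0.09)
print(round(indirect, 2), round(z, 2))  # ~0.37 and its (hypothetical) Z
```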
Result
The measurement model estimates, obtained by confirmatory factor analysis, are presented in Table 3. The factor loading values of all indicators and items are ≥.3 (Costello and Osborne 2005), indicating validity. This means that all indicators and items, as manifest variables, can measure the research variables as latent variables. Meanwhile, reliability was determined based on the construct reliability (CR) and variance extracted (VE) values. The CR values of all variables are greater than .70, and the VE values of all variables are greater than .50, indicating good reliability and acceptable convergence (Hair et al. 2018).
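CR and VE can be computed from the standardized loadings. A minimal sketch (not the authors' code; the loadings below are hypothetical):

```python
def cr_and_ve(loadings):
    """Composite/construct reliability (CR) and variance extracted (VE)
    from standardized factor loadings, assuming uncorrelated error terms."""
    sum_l = sum(loadings)
    sum_l2 = sum(l ** 2 for l in loadings)
    sum_err = sum(1 - l ** 2 for l in loadings)
    cr = sum_l ** 2 / (sum_l ** 2 + sum_err)
    ve = sum_l2 / len(loadings)
    return cr, ve

# Hypothetical standardized loadings for a six-item construct
cr, ve = cr_and_ve([0.72, 0.68, 0.75, 0.81, 0.64, 0.70])
print(round(cr, 3), round(ve, 3))  # reliable if CR > .70 and VE > .50
```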
As seen in Table 4, of the eleven goodness-of-fit (GOF) criteria, eight indices were good and one was marginal, while two (the chi-square and significance probability values) did not meet their cut-offs. According to Hair et al. (2018), the chi-square test is highly sensitive to large sample sizes (>200) and must be accompanied by other testing methods. This study included 450 teachers; therefore, the chi-square and significance probability values were rendered ineffective. Nevertheless, the model was considered fit, since the other nine criteria tested met their requirements.

Table 4. Goodness-of-fit statistics.
(Table 4 lists each goodness-of-fit index with its cut-off value, result, and interpretation.)

The results of the hypothesis tests are visualized in Figures 1 and 2 and summarized in Table 5. All the hypotheses were supported (t-value > t-table at α = .01 and .05). Cultural intelligence has a significant relationship with interpersonal communication (γ = .71, p < .01), PsyCap (γ = .77, p < .01), and OCB (γ = .19, p < .05). Interpersonal communication has a significant relationship with OCB (β = .22, p < .01), and PsyCap has a significant relationship with OCB (β = .48, p < .01). The path coefficients of the direct relationships between cultural intelligence and interpersonal communication (γ = .71) and PsyCap (γ = .77) are larger than that between cultural intelligence and OCB (γ = .19). Meanwhile, interpersonal communication and PsyCap have significant relationships with OCB. This opens up the possibility of an indirect relationship between cultural intelligence and OCB, mediated by interpersonal communication and PsyCap. Therefore, it is important to analyze this indirect relationship.
In Table 6, the indirect relationship between cultural intelligence and OCB, mediated by interpersonal communication and PsyCap, was found to be significant. Cultural intelligence has a significant relationship with OCB mediated by interpersonal communication (β = .16, p < .01) and PsyCap (β = .37, p < .01). The path coefficient of the indirect relationship between cultural intelligence and OCB mediated by PsyCap (.37) is better than the direct relationship between cultural intelligence and OCB (.19). This indicates the vital role of PsyCap in mediating the influence of cultural intelligence on OCB, along with interpersonal communication, which also significantly mediates the influence of cultural intelligence on OCB. Similar results were also obtained when simulating with alternative models by eliminating mediation. The relationship between cultural intelligence, interpersonal communication, and PsyCap with OCB was significant, with indicated path coefficients of .20, .21, and .49, respectively. Furthermore, the influence of cultural intelligence and PsyCap experienced a slight increase (.19 to .20; 48 to 49), while interpersonal communication experienced a slight decrease (.22 to .21). This indicates that cultural intelligence significantly affects OCB with or without mediation. However, with PsyCap mediation, the relationship was more robust (.19 to .37). Meanwhile, with interpersonal communication mediation, the relationship was slightly weaker (.19 to .16), but with a greater degree of significance (.05 to .01). Overall, with mediation, the relationship tended to be stronger, indicating that the new empirical model of the influence of cultural intelligence on OCB with the mediation of interpersonal communication and PsyCap is rational and applicable.
Discussion
This research found that cultural intelligence significantly affects teachers' interpersonal communication, PsyCap, and OCB. This finding confirms that cultural intelligence is a crucial determinant for teachers' interpersonal communication, PsyCap, and OCB. This empirical result shows that teachers with high cultural intelligence tend to have adequate interpersonal communication skills; in other words, cultural intelligence can improve interpersonal communication skills. This empirical result aligns with and confirms previous studies that suggest that cultural intelligence affects interpersonal communication (Bahrami and Narafshan 2018;Henderson et al. 2018;Bostan et al. 2021). In reality, teachers who have knowledge regarding culture, including what culture is, how cultures are different, and how culture influences behavior and skills, are open-minded and use the context of a situation to support their understanding and demonstrate appropriate behaviors or social skills in new cultural settings (Thomas and Inkson 2017). Moreover, they tend to be open, empathetic, supportive and positive and to understand equality (DeVito 2016) in fostering communication with other people (including students) from various cultural backgrounds in the school environment. For teachers, interpersonal communication skills are vital. Interpersonal communication is necessary for building interactions and social relations with school members, especially students-individually and in group (classical) settings. In the modern, predominantly student-centered learning environment, that relies on a participatory and collaborative approach, interpersonal communication is a key factor that determines the continuity of learning activities. Teachers' skills in managing a classroom are dynamic, allowing students to collaborate and participate well and effectively in learning activities; in practice, interpersonal communication skills are necessary. This study shows that these interpersonal communication skills can be achieved through improvements in cultural intelligence.
This study also indicates that cultural intelligence significantly influences PsyCap. This empirical result shows that teachers with high cultural intelligence tend to possess adequate PsyCap; in other words, cultural intelligence can be relied upon to build an improved PsyCap. This finding is consistent with previous studies conducted by Chen and Chen (2018), Mohammadi et al. (2020), and Jiony et al. (2021), in which cultural intelligence had a significant relationship with PsyCap. In practice, teachers who have knowledge, are open-minded, and demonstrate appropriate behaviors towards cultures across various contexts (Thomas and Inkson 2017) tend to possess adequate PsyCap, which manifests as self-efficacy, optimism, hope, and resilience (Luthans and Youssef-Morgan 2017). For teachers, PsyCap components such as self-efficacy, optimism, hope, and resilience are essential. For instance, self-efficacy, an individual's belief in his or her capabilities to complete tasks, is necessary for teachers to build the confidence to carry out teaching tasks well and effectively. In addition, optimism, the clear appraisal and judgment of current situations and an understanding of what can be done in such situations, is also needed by teachers when carrying out teaching tasks. Likewise, hope, an individual's belief in his or her determination to achieve goals and find possible pathways to overcome difficulties, must be cultivated by teachers in order to deliver their teaching tasks successfully. Finally, resilience, the ability to recover from adversity or setbacks, proactively rise to challenges, and adapt to ever-changing organizational environments, is an important asset for teachers, so that they can face various difficulties and respond enthusiastically and boldly to future challenges and opportunities. Once again, this study demonstrates the power of cultural intelligence to build an improved PsyCap.
This study also showed that cultural intelligence has a significant influence on OCB. This empirical result suggests that teachers with high cultural intelligence tend to also have high OCB. In other words, cultural intelligence can increase OCB. This finding is consistent with the studies of Narayanan and Nirmala (2016), Shafieihassanabadi and Pourrashidi (2019), Mehdipour et al. (2019), and Kadam et al. (2021), who claimed that cultural intelligence has a significant relationship with OCB. Regarding the dynamics of teacher activities in schools, teachers who possess knowledge and open-mindedness and can demonstrate appropriate behaviors towards cultures in various contexts (Thomas and Inkson 2017) tend to assimilate with school members from different cultural backgrounds. Therefore, they possess strong altruism, conscientiousness, sportsmanship, courtesy, and civic virtue (Organ et al. 2006). For example, teachers with high knowledge about culture, including what culture is, how cultures are different, and how culture influences behavior and skills, tend to have good conscientiousness, courtesy, and civic virtue. Moreover, if teachers are open-minded and use the context of a situation to support their understanding and demonstrate appropriate behaviors or social skills in new cultural settings, they can easily build altruism and sportsmanship. For teachers, OCB is crucial. OCB indicators, such as altruism, conscientiousness, sportsmanship, courtesy, and civic virtue, are indispensable for teachers to deal with various problems that standardized conventional procedures and methods cannot resolve. For example, students who have difficulty following class lessons require additional assistance from teachers outside the classroom. Students who are not motivated to learn-for various reasons-should also seek assistance from a teacher. Likewise, students who behave defiantly, e.g., bullying, truancy, and brawls, also require the teacher's input to overcome these problems. In summary, teacher OCB is not only limited to learning activities but also involves various activities and problems outside of school learning that require participation and contribution to solving problems, such as helping students who behave defiantly. At the organizational level, teacher OCB is needed by schools to help deal with various school problems that have not been resolved or to support school efforts to achieve improved progress and competitiveness. Previous studies have shown that OCB-at the individual level-can increase productivity (Barsulai et al. 2019) and performance (Hidayah and Harnoto 2018), and-at the organizational level-can help improve organizational performance (Aval et al. 2017). This means that cultural intelligence has not only a direct positive impact on teacher OCB but also indirect implications for school performance.
In addition, this study also reveals that interpersonal communication is significantly related to OCB. This evidence shows that teachers with adequate interpersonal communication skills tend to have strong OCB; in other words, interpersonal communication can improve teachers' OCB. This finding agrees with previous studies, such as that conducted by Ezerman and Sintaasih (2018). In practice, interpersonal communication, reflected in openness, empathy, supportiveness, positiveness, and equality (DeVito 2016), is an important predisposition that allows teachers to demonstrate strong altruism, conscientiousness, sportsmanship, courtesy, and civic virtue (Organ et al. 2006). For example, teachers who uphold the values of openness, empathy, and equality in communicating will tend to show strong altruism, courtesy, and civic virtue. Likewise, teachers with high supportiveness and positiveness tend to show strong conscientiousness and sportsmanship.
This study also indicated that PsyCap is significantly related to OCB, confirming that teachers with adequate PsyCap tend to have strong OCB. This means that PsyCap can be a vital asset for teachers in developing their OCB. These findings confirm scholarly studies in which PsyCap was found to be related to OCB (Yildiz 2019; Chamisa et al. 2020; Rodríguez-Cifuentes et al. 2020; Da et al. 2021). In reality, PsyCap indicators, such as self-efficacy, optimism, hope, and resilience (Luthans and Youssef-Morgan 2017), are essential antecedents for teachers to build altruism, conscientiousness, sportsmanship, courtesy, and civic virtue (Organ et al. 2006). As an example, teachers with high self-efficacy and optimism tend to show strong sportsmanship. Likewise, teachers possessing high hope and resilience also tend to show strong conscientiousness and civic virtue; therefore, it is easy for them to help other parties, such as students and schools, who may require assistance.
In addition, this study also produced new empirical evidence on the role of interpersonal communication and PsyCap in mediating the relationship between cultural intelligence and teachers' OCB. Both interpersonal communication and PsyCap significantly mediated this relationship. However, PsyCap's mediating role was stronger than that of interpersonal communication, meaning that PsyCap is more dominant and crucial in mediating the relationship between cultural intelligence and teachers' OCB. These findings yield a new empirical model of the relationship between cultural intelligence and OCB mediated by interpersonal communication and PsyCap. For comparison, the results remained significant when simulating alternative models that eliminated mediation: the coefficient for the cultural intelligence-PsyCap relationship increased slightly, while that for interpersonal communication decreased slightly. Overall, the relationships tended to be stronger with mediation, indicating that the new empirical model of the relationship between cultural intelligence and OCB mediated by interpersonal communication and PsyCap is rational and applicable. This finding can be used as a topic for discussion among researchers and practitioners, and it can also be adopted as a model for developing cultural intelligence, especially for improving interpersonal communication, PsyCap, and OCB across various locations, sectors, organizations, and contexts.
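To make the mediation logic concrete, the sketch below illustrates one common way an indirect (mediated) effect can be estimated and bootstrap-tested. It is a simplified OLS-based illustration on simulated data, not the SEM procedure used in this study; all variable names, coefficients, and the data themselves are hypothetical.

```python
# Minimal sketch of a bootstrap test for an indirect effect
# (e.g., cultural intelligence -> PsyCap -> OCB). Simulated data only.
import numpy as np

rng = np.random.default_rng(0)
n = 300
cq = rng.normal(size=n)                              # cultural intelligence (standardized)
psycap = 0.5 * cq + rng.normal(size=n)               # mediator
ocb = 0.4 * psycap + 0.2 * cq + rng.normal(size=n)   # outcome

def ols_slope(x, y):
    # slope of y on x (with intercept), via least squares
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

def indirect_effect(cq, psycap, ocb):
    a = ols_slope(cq, psycap)                        # path a: CQ -> mediator
    X = np.column_stack([np.ones(len(cq)), psycap, cq])
    beta, *_ = np.linalg.lstsq(X, ocb, rcond=None)   # path b: mediator -> OCB, controlling for CQ
    return a * beta[1]

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                      # resample with replacement
    boot.append(indirect_effect(cq[idx], psycap[idx], ocb[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect: {indirect_effect(cq, psycap, ocb):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A confidence interval that excludes zero would indicate a significant indirect effect, mirroring the mediation conclusion reported above.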
Overall, the results of this study indicate that the strength of cultural intelligence relates to interpersonal communication, PsyCap, and teacher OCB. Therefore, the cultural intelligence of teachers should be continuously improved using appropriate strategies. First, teachers should independently and consciously increase their cultural intelligence capacity by reading literature relevant to cultural intelligence. Second, principals should encourage teachers to participate in training programs specifically designed to improve teachers' cultural intelligence. These training programs should involve instructors who are competent experts in cultural intelligence. Importantly, the training material should lead to mastery of: (1) knowledge about culture, including what culture is, how cultures differ, and how culture influences behavior and skills; (2) open-mindedness and the use of situational context to support understanding; and (3) appropriate behaviors or social skills in new cultural settings. Moreover, the training methods should suit the material, combining discussions (including focus group discussions), simulations, and role-playing. Lastly, school principals should create and enforce guidelines for teachers to think and behave culturally in the school environment.
Conclusions
Cultural intelligence is an individual's ability to recognize, understand, and adapt to cross-cultural contexts and thereby develop his or her own capacity. Cultural intelligence therefore contributes significantly to the lives of individuals, groups, and organizations. This research found that cultural intelligence significantly affects interpersonal communication, PsyCap, and OCB among teachers in Indonesia. Furthermore, this study also produced a new model of the relationship between cultural intelligence and teachers' OCB mediated by interpersonal communication and PsyCap. This finding is crucial for teachers' well-being, especially in developing teachers' intercultural competence and work-related outcomes, such as job satisfaction, work adjustment, knowledge sharing, and job performance, which can in turn affect team and organizational performance. Therefore, in the future, researchers and practitioners should discuss and adopt this new empirical model to increase cultural intelligence, in order to specifically enhance teachers' interpersonal communication, PsyCap, and OCB in various contexts.
Limitations and Future Research
The findings of this study should be interpreted in light of its limitations. The mediating effects should be interpreted with caution given the cross-sectional study design. Accordingly, future research should investigate the interrelationships between cultural intelligence, interpersonal communication, and psychological capital using a longitudinal or cross-lagged panel design to draw stronger conclusions about the causal order of these variables. In addition, this study could not control for all variables that may have interfered with the relationship between cultural intelligence, PsyCap, interpersonal communication, and OCB, e.g., the Big Five personality traits. Future research may include the Big Five personality traits, both as an antecedent of cultural intelligence and as a moderator of the influence of cultural intelligence on interpersonal communication, PsyCap, and OCB. Moreover, this study did not accommodate all indicators/dimensions of the research variables; further research should utilize indicators/dimensions not used here or comprehensively synthesize all of them. Furthermore, this study did not explore why cultural intelligence affects OCB directly or indirectly through interpersonal communication and PsyCap; further research should address this using mixed methods, combining quantitative and qualitative analyses. Finally, further research should replicate these findings by adding different data sources (participants) for OCB, such as principals and/or students.
Author Contributions: M.A. and W.W. were actively involved in writing the article, from planning, research and data analysis to the preparation of the final manuscript. All authors have read and agreed to the published version of the manuscript.
Competitiveness Analysis of Indonesian Fishery Products in ASEAN and Canadian Markets
This study aimed to analyze the competitiveness of Indonesian fishery products in the ASEAN and Canadian markets. The methods used were Revealed Comparative Advantage (RCA), Export Product Dynamic (EPD), and the X-Model of export product potential. The research showed that Vietnam and Canada had export structures similar to Indonesia's in the ASEAN market, so Indonesia faces strong competition with both countries. Indonesian fishery products showed high competitiveness in the export destination markets, except in the Philippines and Canada. The market position of Indonesian fishery products in the Philippine, Thai, and Canadian markets was in the rising star and lost opportunity categories, and these countries were identified as optimistic and potential markets for Indonesian fishery products. The policy implication is that the government and private sector need to prioritize the export of fishery products to Thailand, the Philippines, and Canada by improving product competitiveness through quality improvement and production cost efficiency.
Introduction
Indonesia is one of the biggest maritime nations in the world. Indonesia's territorial waters are extensive, covering an area of 6.32 million km² with 99,093 km of coastline (Ministry of Marine Affairs and Fisheries, 2015), and hold huge fishery potential. The United Nations Development Program (UNDP) noted that Indonesian waters are a habitat for 76% of the world's coral reefs and 37% of the world's reef fish. The potential of Indonesian fisheries in 2016 was 12.5 million tons per year, with the value of marine resources and services reaching US$ 2.5 trillion per year. Under these conditions, the marine and fishery sector provided a comparative advantage for Indonesia, an economy that depends heavily on sectors utilizing its natural resources (Oktaviani et al., 2008).
Besides meeting domestic demand, Indonesian fishery production was also used to meet demand from other countries through exports. The performance of Indonesian fishery exports during 2011-2015 showed growth in export value: the value of Indonesian fishery exports grew by 2.29% per year, from US$ 3,521.091 million in 2011 to US$ 3,943.935 million in 2015. However, the export volume of fishery products in the same period decreased by 3.41% per year, from 1,159.35 thousand tons to 974.55 thousand tons. The decrease in export volume was due to lower exports of shrimp (23%) and fish (6%) to the primary destination markets such as the US, Japan, and the European Union (Statistics Indonesia, 2015).
One problem encountered was that the competitiveness of Indonesian fishery products remained low compared with competitor countries. Indonesia was recognized as the world's second largest capture fisheries producer after China and the world's third largest aquaculture producer after China and India (Food and Agriculture Organization, 2015). However, Indonesia only just placed within the top 10 fishery product exporters in the world. This condition was caused by the low export value of Indonesian fishery products in the international market. The importance of export product competitiveness was driven by globalization, which created a more dynamic and competitive market (Asmara et al., 2016).
The low competitiveness of export commodities from the fishery sector was due to the dominance of products with low value added, making it difficult to compete with products from other countries (Kusumastanto, 2008). The large share of fresh/chilled fish resulted in low value added, so the comparative advantage and competitiveness of Indonesian fishery products was low (Saptanto, 2011). This phenomenon raised a challenge for Indonesia to maintain or seize international market share. Therefore, a key issue that must be properly addressed for the Indonesian marine and fishery sector to win the global competition is the improvement of competitiveness (Chasanah et al., 2017).
Data from the Ministry of Marine Affairs and Fisheries (2015) showed that the leading export commodities were shrimp, lobster, and tuna, which tended to have low value added. The export destinations of these leading commodities were dominated by the United States, Japan, and European Union markets, i.e., the developed-country group. The facts showed that the contribution of the primary destination countries tended to decline, especially Japan and the EU, as the economic crises that hit those countries significantly affected demand for Indonesian fishery exports. Previous studies measured the competitiveness of Indonesian fishery commodities with RCA and EPD methods; however, studies using the X-Model method to measure the market potential of Indonesia's fishery product exports were very limited. Among the FTA schemes joined by Indonesia, most existing studies focused on FTAs between Indonesia and its trading partner countries. This study analyzed the FTA between Indonesia, within the ASEAN region, and Canada, which has been in the initiation stage since 2016. Therefore, this study was conducted to measure the competitiveness position and market development strategy of Indonesian fishery products in the ASEAN and Canadian markets.
Methods
This study used secondary annual data from 2010 to 2015 covering the six export destination markets in the ASEAN-Canada FTA. All data on the export value of Indonesian fishery products to the ASEAN countries (Malaysia, Philippines, Singapore, Thailand, Vietnam) and Canada were obtained from the United Nations Commodity Trade database, based on 4-digit Harmonized System (HS) codes, 1996 version. The fishery commodity groups analyzed were: a) HS 0301 for live fish; b) HS 0302 for fresh/chilled fish; c) HS 0303 for frozen fish; d) HS 0304 for fillet and fish meat; e) HS 0305 for dried/salted/smoked fish; f) HS 0306 for crustaceans; and g) HS 0307 for molluscs.
This research used the Revealed Comparative Advantage (RCA) method to measure the competitiveness of Indonesian fishery products. The RCA method is used to determine the comparative advantage or competitiveness of a commodity. RCA is an index that measures the export performance of a commodity from a country by evaluating the ratio between the market share of that country's commodity in a given market and the commodity's share of total world exports to that market (Zuhdi & Suharno, 2015). Based on Bender & Li (2002), the RCA index captures the intrinsic advantage of a particular export commodity along with changes in productivity and relative factor endowments. RCA assumes that the pattern of commodity trade reflects differences between countries in relative costs and non-price factors, so that RCA can reveal the comparative advantage of countries in international trade. If the RCA value is greater than 1, the product has a comparative advantage, i.e., strong competitiveness; if the RCA value is smaller than 1, the product has no comparative advantage, i.e., weak competitiveness. The RCA is calculated as follows (Kanaka & Chinadurai, 2012):

RCA = (X_ij / X_t) / (W_ij / W_t)    (1)

where: RCA = competitiveness of the Indonesian fishery product in the export destination country; X_ij = value of Indonesian fishery product exports to the export destination country; X_t = value of total Indonesian exports to the export destination country; W_ij = value of world fishery product exports to the export destination country; W_t = value of total world exports to the export destination country.
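For illustration, a minimal sketch of equation (1) follows. The input values are hypothetical placeholders; the study's actual figures come from UN Comtrade.

```python
# Minimal sketch of the RCA index as defined in equation (1):
# RCA = (X_ij / X_t) / (W_ij / W_t). Numbers are invented for illustration.
def rca(x_ij, x_t, w_ij, w_t):
    """Revealed Comparative Advantage of one commodity in one market.

    x_ij: Indonesian exports of the commodity to the destination country
    x_t : total Indonesian exports to the destination country
    w_ij: world exports of the commodity to the destination country
    w_t : total world exports to the destination country
    """
    return (x_ij / x_t) / (w_ij / w_t)

# Hypothetical example: RCA > 1 means strong competitiveness.
value = rca(x_ij=120.0, x_t=2_500.0, w_ij=900.0, w_t=40_000.0)
print(f"RCA = {value:.2f} -> {'strong' if value > 1 else 'weak'} competitiveness")
```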
Figure 1. Market appeal and business strength in EPD matrix
Source: Esterhuizen (2006)

The estimated competitiveness position of a commodity falls into one of the four quadrants depending on the market appeal and business strength of the commodity. The general formulas of EPD are as follows (Hasibuan et al., 2012):

Business strength (x-axis) = { Σ_t [ (X_ij / W_ij)_t − (X_ij / W_ij)_(t−1) ] / T } × 100%

Market appeal (y-axis) = { Σ_t [ (X_t / W_t)_t − (X_t / W_t)_(t−1) ] / T } × 100%

where: X_ij = value of Indonesian fishery product exports to the export destination country; X_t = value of total Indonesian exports to the export destination country; W_ij = value of world fishery product exports to the export destination country; W_t = value of total world exports to the export destination country; t = year t; T = number of years analyzed. To determine the market development potential in each ASEAN country and Canada, this study used the X-Model of Potential Export Product method. This method clusters products with high development potential in the export destination country by considering competitiveness (RCA) and market position (EPD). The X-Model of potential export product method is shown in Figure 2.
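The quadrant logic can be sketched as follows. The axis formulas follow the reconstruction above and the quadrant labels follow Figure 1; the yearly series are hypothetical, and the convention that business strength is the x-axis and market appeal the y-axis is one common reading of the EPD matrix.

```python
# Minimal sketch of the EPD quadrant classification. Business strength (x)
# is the average yearly change in the commodity's market share, market
# appeal (y) the average yearly change in the share of total exports;
# both in percent, with the quadrant boundary at zero as in Figure 1.
def avg_share_growth(num, den):
    # average year-on-year change of the share num/den, in percent
    shares = [n / d * 100 for n, d in zip(num, den)]
    diffs = [b - a for a, b in zip(shares, shares[1:])]
    return sum(diffs) / len(diffs)

def epd_position(x_ij, w_ij, x_t, w_t):
    x = avg_share_growth(x_ij, w_ij)   # business strength
    y = avg_share_growth(x_t, w_t)     # market appeal
    if x >= 0 and y >= 0: return "Rising Star"
    if x >= 0 and y < 0:  return "Falling Star"
    if x < 0 and y >= 0:  return "Lost Opportunity"
    return "Retreat"

# Hypothetical 2010-2015 series for one commodity in one destination market.
x_ij = [100, 110, 125, 130, 150, 160]
w_ij = [900, 950, 1000, 1020, 1100, 1150]
x_t  = [2000, 2100, 2150, 2300, 2400, 2500]
w_t  = [40000, 41000, 42500, 43000, 45000, 46000]
print(epd_position(x_ij, w_ij, x_t, w_t))  # -> Rising Star
```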
Results and Discussion
Marine resource potential has made Indonesia one of the leading producers of fishery products in the world. Total production amounted to 22.4 million tons in 2016, consisting of 6.83 million tons from capture fisheries and 16.68 million tons from aquaculture, an increase of 0.42% over 2015 (Ministry of Marine Affairs and Fisheries, 2017). Indonesian fishery production has trended upward since 2012, averaging 20.9 million tons over the period; its development can be seen in Figure 3. This production is used to meet domestic consumption, which depends heavily on the available fish supply; data on fish supply indicate how much fish is available to meet domestic consumption needs, as shown in Table 1. Table 1 shows that fish supply and per capita consumption in 2011-2015 grew on average by 10.15% and 6.26%, respectively. Table 1 also shows that the domestic fish stock was still in surplus, so fish consumption needs to increase and exports need to expand to absorb the country's fishery production. The output of capture fisheries and aquaculture is then exported to Indonesia's trading partner countries. Based on commodity groups in the 4-digit Harmonized System (HS), Indonesian fishery products are categorized as live fish (HS 0301), fresh/chilled fish (HS 0302), frozen fish (HS 0303), fillet and fish meat (HS 0304), dried/salted/smoked fish (HS 0305), crustaceans (HS 0306), and molluscs (HS 0307). The export performance of these fishery products from 2012 to 2016 is shown in Table 2. In terms of volume, the performance of Indonesian fishery product exports decreased by 3.39% over 2012-2016, from 789,457 tons to 687,822 tons. The decrease was driven by declining export volumes of frozen fish (HS 0303) and dried/salted/smoked fish (HS 0305), of 16.08% and 12.78%, respectively. As shown in Table 2, the export volumes of both commodities decreased continuously from 2012 to 2016. For frozen fish, the biggest drop occurred in 2015, at 49.34% below 2014; for dried/salted/smoked fish, the biggest drop was 27.69% in 2013.
In terms of each commodity's contribution to the value of fishery product exports, crustaceans (HS 0306) were the biggest contributor with US$ 1.46 billion, or 50.49% of the total value of Indonesian fishery exports in 2016. The second and third largest contributions came from fillet and fish meat (HS 0304) at US$ 427 million (14.73%) and frozen fish (HS 0303) at US$ 389 million (13.41%). In 2016, the export values of frozen fish and crustaceans grew relative to 2015 by 18.15% and 7.97%, respectively, while the export value of fillet and fish meat declined by 0.79%. One opportunity for developing export destination markets for fishery products is cooperation with ASEAN and Canada. Economic relations between ASEAN and Canada have existed since 1977, and the trade volume between the two parties has increased every year. This positive sign encouraged ASEAN and Canada to issue the Joint Declaration Between ASEAN and Canada on Trade and Investment (JDTI) in order to encourage and increase economic cooperation. This cooperation showed significant results: the trade volume between ASEAN and Canada in 2013 reached almost CAD 17 billion, an increase of 7.3% over 2012. Furthermore, the agreement was renewed with an extended term covering 2016-2020. In 2017, Canada and ASEAN planned to form a joint FTA so that the economic activity of both parties could become more integrated.
The trade performance of both parties has continued to grow rapidly over the years. In 2015, bilateral goods trade between ASEAN and Canada reached CAD 21.4 billion, up from CAD 18.2 billion in 2014. The data show that trade volume between the two parties increased by about 10% per year over the last five years. The magnitude of this trade shows that ASEAN contributed 2.02% of Canada's total bilateral goods trade, a bigger share than Germany (1.97%), Canada's sixth biggest trading partner.
Figure 4. Export of Indonesian Fishery Products to ASEAN and Canada
Source: United Nations Commodity Trade, 2017 (processed)

The FTA scheme has also affected the performance of Indonesian fishery product exports to the ASEAN and Canadian markets over the last 10 years. The development of Indonesian fishery product exports to ASEAN and Canada is shown in Figure 4. Exports to ASEAN grew on average by 12% in 2006-2010 and 5% in 2011-2015. The export performance of Indonesian fishery products to Canada declined by an average of 4% per year in 2006-2010 but recovered to growth of 1% per year in 2011-2015. This trade performance shows promising potential from deeper economic integration, both with ASEAN countries and between the two regions.
For Canada, an FTA with ASEAN plays an important role in responding to the United States and its foreign policy; the cooperation is also a response to the economic rise of China and the uncertainty among European Union countries. Accordingly, the Foreign Affairs Ministry of Canada announced in 2016 that a feasibility study would be conducted to identify the opportunities and challenges of forming a free trade agreement between ASEAN and Canada in the future. If the FTA is concluded successfully, it will open large potential for economic cooperation for both ASEAN and Canada, and it will also improve the position of Canadian businesses so that their integration with intra-ASEAN trade becomes more intensive.
The export structure of Indonesian fishery products in the ASEAN market in 2010-2015 was dominated by frozen fish (40%), fresh/chilled fish (20%), and crustaceans (14%). Similar export structures were shown by Indonesia's competitor countries, Vietnam and Canada. Vietnam's leading fishery product was fish fillet, contributing 60%, with the rest contributed by the crustaceans and mollusks groups at 20%. Meanwhile, Canada showed the structure most similar to Indonesia's, with frozen fish making the biggest contribution (40%), followed by crustaceans and fillet and fish meat products at 15% and 13%. These facts are reflected in the calculation of the Export Similarity Index (ESI) between Indonesia and its five trading partners in the ASEAN market in Table 3. The ESI takes values between zero and one (0 ≤ S ≤ 1). An ESI value close to zero indicates that the export structures of the two countries are dissimilar, implying very slight competition and little risk of losing market share to identical products. Conversely, an ESI value close or equal to 1 indicates that the export patterns or structures of the two countries are the same (or nearly identical), so that the two countries face strong rivalry and risk losing market share in the export destination country (Finger & Kreinin, 1979). The countries with high export-structure similarity to Indonesia in fishery products are Canada and Vietnam, for which the ESI value is reported to be greater than 0.50. Canada shows the highest similarity of export structure, with an ESI value of 0.842, while Indonesia's main competitor is Vietnam, with an ESI value of 0.642. This reflects the significant year-by-year increase in export-structure similarity of the main competitor countries during 2010-2015 (Table 4).
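A minimal sketch of the Finger-Kreinin ESI described above follows. The two export vectors are hypothetical HS-4 values to a single destination market, not the study's data.

```python
# Export Similarity Index (Finger & Kreinin, 1979):
# ESI = sum over commodities of min(share_a, share_b), bounded in [0, 1].
def esi(exports_a, exports_b):
    total_a, total_b = sum(exports_a.values()), sum(exports_b.values())
    commodities = set(exports_a) | set(exports_b)
    return sum(min(exports_a.get(k, 0) / total_a,
                   exports_b.get(k, 0) / total_b) for k in commodities)

# Hypothetical HS-4 export values of two countries to one market.
indonesia = {"0301": 10, "0302": 20, "0303": 40, "0306": 30}
vietnam   = {"0303": 15, "0304": 60, "0306": 15, "0307": 10}
print(f"ESI = {esi(indonesia, vietnam):.3f}")  # closer to 1 -> stronger rivalry
```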
In fact, Vietnam's fishery export structure became very similar to Indonesia's toward the end of the period, with an ESI reaching 0.74. The same held for Canada, whose ESI with Indonesia reached 0.942 at the end of the period. This is a warning for Indonesia, which will have to compete with Canada and Vietnam in maintaining its market share for fishery products in the ASEAN market; thus, Indonesia has to work hard to increase the competitiveness of its fishery products to face future rivalry. On the other side, the country with a distinctly different fishery export structure was Singapore, as shown by its low ESI value of 0.085. The export structures of Indonesian and Singaporean fishery products are very different, reflecting Singapore's low absolute advantage in the main export commodities owing to its limited fishery resources. Low levels of export similarity were also shown by Malaysia and Thailand, with ESI values of 0.3888 and 0.308, both tending to decrease during 2010-2015. This indicates that Malaysia and Thailand have export structures for fishery products that differ from Indonesia's; the degree of specialization among the three countries in the ASEAN market increased, and their trade relations were complementary (Wang & Liu, 2015).
Based on United Nations Commodity Trade data for 2010-2015, seven Indonesian fishery commodity groups at the HS 4-digit level had strong competitiveness (RCA > 1), differing across export destination countries. Table 5 shows the estimated competitiveness of Indonesian fishery products in ASEAN and Canada during 2010-2015 using the RCA method. The RCA estimates in Table 5 show that all Indonesian fishery products had strong competitiveness (RCA > 1) in the Vietnamese market; in contrast, most Indonesian fishery products had low competitiveness in the Philippine market. The RCA values also show that Indonesian live fish was strongly competitive in all export destination markets, with RCA values greater than 1; the strongest competitiveness among the six destination countries was in the Philippines, followed by Vietnam and Canada. This result indicates that the market share of Indonesian fishery product exports was bigger than that of other exporting countries (Serin & Civan, 2008).
Improving the competitiveness of fresh/chilled fish and dried/salted/smoked fish in the Philippine and Canadian markets is necessary because the RCA values of both products were less than one there. Frozen fish also showed weak competitiveness in the Philippine and Canadian markets, followed by the Malaysian market. This finding is consistent with Hidayati's (2016) study, which showed that frozen and processed fishery products, especially tuna, tended to have weak competitiveness in the international market. Nevertheless, it is necessary to increase the quantity and quality of these three fishery products for the Singapore, Thailand, and Vietnam markets to avoid declining competitiveness.
Fillet and fish meat products showed strong competitiveness in the ASEAN and Canadian markets, except in the Philippine market, where the RCA value was less than one. The same held for the crustaceans group, except in the Thai market. For the mollusks group, it is necessary to improve production quality for the markets of the Philippines, Singapore, and Thailand; the strongest competitiveness was in the Canadian market, followed by Vietnam and Malaysia.
Based on Table 5, crustaceans and mollusks showed the strongest competitiveness in the Canadian market, as these two groups were Indonesia's main export commodities to Canada. Ornamental fish, belonging to the live fish group, was also listed as a primary export commodity of Indonesian fishery products to Canada. These three groups of fishery products had the strongest competitiveness compared with other Indonesian fishery products. This showed that Canada was another potential market for the Indonesian crustaceans and mollusks groups, which had been dominated by Malaysia as the largest exporter (Ashari et al., 2016).
The analysis also showed that the strength of Indonesian fishery products in ASEAN countries was dominated by fish products, with the highest RCA values in the live and fresh/chilled fish products, while crustaceans had the highest competitiveness in Canada. This indicates that Indonesian fishery products were vulnerable to non-tariff barriers imposed by developed countries such as Canada, because non-tariff barriers often target the shrimp commodity (Saptanto et al., 2017).
The market position of Indonesian fishery products in ASEAN and Canada according to the EPD method can be seen in Table 6. Based on the EPD estimates, Indonesian fishery products occupied the rising star and lost opportunity positions in the Philippines, Thailand, and Canada. The opposite condition occurred in the Malaysian, Singaporean, and Vietnamese markets, where Indonesian fishery products were in the falling star position or were no longer demanded by the markets (retreat). Neither of these positions is desirable, as they indicate that Indonesian fishery products had been declining due to non-dynamic and non-competitive movements in global markets (Kusuma & Firdaus, 2015). Indonesian live fish and dried/salted/smoked fish held the best position (rising star) in the markets of the Philippines, Thailand, and Canada. Live fish products in Malaysia and Singapore were in the falling star position, showing that Indonesia was unable to meet the growing demand in both countries. The Vietnamese market positioned Indonesian live fish and dried/salted/smoked fish products in a retreat condition, due to the declining RCA values of these products from 2010 to 2015. A similar pattern was shown by fillet and fish meat products, which grew dynamically in the Philippine, Thai, and Canadian markets. However, it is necessary to improve the market share of this product in the Malaysian and Vietnamese markets, where Indonesian fillet and fish meat products were in a falling star position. A more unfavorable condition occurred in the Singapore market, where Indonesian fillet and fish meat products were stagnant and non-competitive (retreat).
Indonesian frozen fish had the best competitive position in the Philippine and Thai markets, and the weakest in the Singaporean and Vietnamese markets. In the Malaysian market, Indonesian frozen fish was in a falling star position, indicating that Indonesia was unable to meet Malaysia's growing demand. Nevertheless, the market share of Indonesian frozen fish in the Canadian market was still increasing while in the lost opportunity position. The Indonesian fresh/chilled fish and mollusks groups had competitive positions that tended to be better than other fishery products, with no retreat position in any export destination market; these products remained in demand in the primary destination countries. The best position for fresh/chilled fish products was in the Thai market, while the best position for the mollusks group was in the Philippine market. The market positions of most Indonesian fishery products in the Philippines, Thailand, and Canada were rising star, with the rest in the lost opportunity position. The rising star position is the most favorable, because the commodity is competitive and shows positive trading dynamics; Indonesia gained additional market share in rapidly growing products. The lost opportunity position is the least desirable, reflecting opportunity losses created by declining market share and declining RCA values over the last five years. Based on these RCA and EPD results, the market development status of Indonesian fishery products in each country is presented in Table 7. The X-Model measurements also showed that the Philippines, Thailand, and Canada were positive and potential markets, indicating that all three had significant potential for developing the market for Indonesian fishery products. In Malaysia and Singapore, most Indonesian fishery commodities were in the potential market category, with the rest in the less potential category. The opposite was indicated by the Vietnamese market, where live fish, frozen fish, dried/salted/smoked fish, and crustacean products fell into the less potential market category. This was because the competitiveness of Indonesian fishery products remained high relative to Vietnam in the global market (Natalia & Nurozy, 2012).
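To illustrate how an X-Model-style clustering could combine the two indicators, a sketch follows. The actual X-Model matrix (Figure 2) is not reproduced in this text, so the mapping below is an assumption for illustration only, not the authors' published rule.

```python
# Illustrative combination of RCA (competitiveness) and EPD (market
# position) into market-potential categories. The thresholds and labels
# here are assumed; see Figure 2 for the study's own X-Model matrix.
def market_potential(rca_value, epd_quadrant):
    competitive = rca_value > 1
    dynamic = epd_quadrant in ("Rising Star", "Lost Opportunity")
    if competitive and dynamic:
        return "optimistic market"
    if competitive or dynamic:
        return "potential market"
    return "less potential market"

print(market_potential(2.1, "Rising Star"))       # -> optimistic market
print(market_potential(0.8, "Lost Opportunity"))  # -> potential market
print(market_potential(0.8, "Retreat"))           # -> less potential market
```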
Conclusion
The results show that during 2010-2015, the export structure of Indonesian fishery products was similar to those of Vietnam and Canada, with ESI values approaching 1. This means that competition between Indonesia and these two countries will intensify in the ASEAN market. The research also shows that fishery commodities at the HS 4-digit level (HS 0301 to HS 0305) have RCA indices greater than 1, indicating strong competitiveness in the export destination markets, except in the Philippine market. Fishery products such as crustaceans and mollusks (HS 0306 and HS 0307) have strong competitiveness in the Canadian, Malaysian, and Vietnamese markets. The EPD analysis shows that Indonesian fishery products in the Philippine, Thai, and Canadian markets are in the rising star and lost opportunity positions, while the falling star and retreat positions occur in the Singaporean, Malaysian, and Vietnamese markets. The X-Model analysis of potential export products indicates that Thailand, the Philippines, and Canada are optimistic markets for Indonesian fishery products.
Based on these results, there are several policy implications. First, the Indonesian government and private sector need to improve product quality and lower production costs to boost the competitiveness of fishery products, so that Indonesia is able to exploit the potential markets in ASEAN and Canada. Second, the government and private sector need to prioritize the export of live fish, fresh/chilled fish, frozen fish, fillet and fish meat, and dried/salted/smoked fish to Thailand. Third, the government and private sector need to prioritize the Philippine and Canadian export markets for live fish, fillet and fish meat, and crustacean products.
Antibiogram of Medical Intensive Care Unit at Tertiary Care Hospital Setting of Pakistan
Objective: To determine the frequency of micro-organisms causing sepsis as well as to determine the antibiotic susceptibility and resistance of microorganisms isolated in a medical intensive care unit. Materials and methods: This is a cross-sectional analysis of 802 patients from a medical intensive care unit (ICU) of Shifa International Hospital, Islamabad, Pakistan over a one-year period from August 2015 to August 2016. Specimens were collected from blood, urine, endotracheal secretions, catheter tips, tissue, pus swabs, cerebrospinal fluid, ascites, bronchoalveolar lavage (BAL), and pleural fluid. All bacteria were identified by standard microbiological methods, and antibiotic sensitivity/resistance testing was performed using the disk diffusion technique, according to Clinical and Laboratory Standards Institute (CLSI) guidelines. Data were collected using a critical care unit electronic database, and data analysis was done using the Statistical Package for Social Sciences (SPSS), version 20 (IBM SPSS Statistics, Armonk, NY). Results: Gram-negative bacteria were more frequent than gram-positive bacteria. The most common bacterial isolates were Acinetobacter (15.3%), Escherichia coli (15.3%), Pseudomonas aeruginosa (13%), and Klebsiella pneumoniae (10.2%), whereas Enterococcus (7%) and methicillin-resistant Staphylococcus aureus (MRSA) (6.2%) were the two most common gram-positive bacteria. For Acinetobacter, colistin was the most effective antibiotic (3% resistance). For E. coli, colistin (0%), tigecycline (0%), amikacin (7%), and carbapenems (10%) showed low resistance. Pseudomonas aeruginosa showed low resistance to colistin (7%). For Klebsiella pneumoniae, low resistance was seen for tigecycline (0%) and minocycline (16%). Overall, ICU mortality was 31.3%, including miscellaneous cases. Conclusion: Gram-negative infections, especially by multidrug-resistant organisms, are on the rise in ICUs. Empirical antibiotics should be used according to local unit-specific data. Constant evaluation of current practice on the basis of trends in multidrug resistance and antibiotic consumption patterns is essential.
Introduction
Sepsis and septic shock are the leading causes of mortality and morbidity in critical care units. After obtaining samples for cultures, antibiotics are started empirically in intensive care units (ICUs) to cover common pathogens causing sepsis. Early provision of adequate antibiotics improves survival outcomes among critically ill patients with infection. The early use of antibiotics provides effective control of infections; however, at the same time, the use of broad-spectrum empiric antibiotics is causing an increasing emergence of antibiotic resistance [1]. A rise in multidrug-resistant bacteria is limiting the available therapeutic options for infections in the ICU and further reducing the likelihood that empiric treatment selections will offer adequate coverage for common ICU pathogens [2].
Over the past two decades, there has been a rapid emergence of multidrug-resistant organisms, which is a major problem in terms of infection control [3]. Although only 5% to 10% of all patients are treated in intensive care units (ICUs), these patients account for about 25% of all nosocomial infections; the incidence is five to 10 times higher than in general hospital wards [4], meaning that nosocomial infections are especially prominent in the ICU [5].
There is a wide diversity between institutions in the prevalence of pathogens and in their antimicrobial susceptibility [6]. The hospital antibiogram is a periodic summary of antimicrobial susceptibilities of local bacterial isolates. Antibiograms are often used by clinicians to assess local susceptibility rates as an aid in selecting an empiric antibiotic therapy and in monitoring resistance trends over time within an institution [7]. Therefore, this study aims to determine the types and frequency of microorganisms causing sepsis as well as to determine the antibiotic susceptibility and resistance of microorganisms isolated in medical ICU.
Materials And Methods
After approval from the institutional review board (IRB) at Shifa International Hospital, Islamabad, Pakistan (approval #558-006-2016), a cross-sectional study was conducted in which data were collected from patients admitted to the medical intensive care unit over a one-year period from August 2015 to August 2016. Pediatric patients were excluded. Signed informed consent was obtained from every patient before enrollment in the study. Specimens were collected from blood, urine, endotracheal secretions, catheter tips, tissue, pus/pus swabs, cerebrospinal fluid (CSF), ascites, bronchoalveolar lavage (BAL), and pleural fluid (Table 1). All samples were inoculated and incubated for 24 to 48 hours. Colonies were subjected to Gram staining, and gram-negative and gram-positive organisms were isolated.
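For illustration, an antibiogram of the kind reported here is essentially a percent-resistant summary per organism-antibiotic pair. The sketch below shows one way to tabulate it from line-level culture data; the column names and toy records are hypothetical, since the study used SPSS v20 on an ICU electronic database.

```python
# Minimal sketch: percent resistance per organism-antibiotic pair
# from line-level susceptibility records (1 = resistant, 0 = susceptible).
import pandas as pd

records = pd.DataFrame([
    {"organism": "Acinetobacter", "antibiotic": "colistin", "resistant": 0},
    {"organism": "Acinetobacter", "antibiotic": "colistin", "resistant": 0},
    {"organism": "Acinetobacter", "antibiotic": "imipenem", "resistant": 1},
    {"organism": "E. coli",       "antibiotic": "amikacin", "resistant": 0},
    {"organism": "E. coli",       "antibiotic": "amikacin", "resistant": 1},
])

antibiogram = (records
               .groupby(["organism", "antibiotic"])["resistant"]
               .agg(n="count", pct_resistant=lambda s: 100 * s.mean())
               .reset_index())
print(antibiogram)
```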
FIGURE 3: Spectrum of Antibiotic Resistance to Bacteria
The length of ICU stay had a negative linear relationship with final outcome, whether transfer out of the unit with stable vitals or mortality (r = -0.129; χ²: p = 0.0001) (Table 2).
Discussion
The use of antibiograms to help select empirical antibiotic therapy for suspected infection with likely or known pathogens is a well-established practice. In our study, among the total bacterial isolates, gram-negative bacteria were more prevalent than gram-positive bacteria. This predominance of gram-negative bacteria is in concordance with the findings of a similar study conducted in 2012 (Al-Jawady, et al.) and may be due to their wide prevalence in the hospital environment; in addition, their frequent resistance to antibiotics may play a role in their persistence and spread [9].
The predominant gram-negative isolates in our study were Acinetobacter and E. coli, whereas the study done by Rajan and Rao showed Klebsiella as the most predominant organism [10]. In a similar study, Ziab, et al. [11] reported Pseudomonas aeruginosa as the predominant gram-negative bacillus isolated from the ICU, whereas E. coli was the most common gram-negative bacillus reported in studies done by Al-Jawady, et al. and Morfin-Otero, et al. [9,12]. The most frequent isolate from urine was found to be E. coli in our study, comparable to studies conducted by Rajan, et al., Kritupanta, et al., and Sankarankutty, et al. [10,13,14]. Acinetobacter was the most predominant isolate from the respiratory tract in our study. A similar study conducted by Pradhan, et al. showed Acinetobacter to be the most frequent microorganism in the respiratory tract [15]. Klebsiella was the most reported organism isolated from the respiratory tract in studies done by Rajan, et al. and Patel, et al. [10,16].
Our study shows a very high prevalence of carbapenem resistance among Acinetobacter (100%). A similar study conducted by Mumtaz, et al. [17] reported 79% resistance to imipenem, while Rajan, et al. [10] showed 52% carbapenem resistance among Acinetobacter. In our study, Acinetobacter was highly resistant to third generation cephalosporins (100% ceftazidime), aminoglycosides (97% gentamicin and 95% amikacin), and fluoroquinolones (100% ciprofloxacin and moxifloxacin). The most effective drug was colistin, which showed 3% resistance in our study. Similar results for colistin effectiveness against Acinetobacter were seen in the study by Rajan, et al. [10], while work published by Hasan, et al. [18] showed that tigecycline was the most effective antibiotic against Acinetobacter.
Our study showed 33% tigecycline resistance in Acinetobacter. E. coli, in our study, showed high resistance to third generation cephalosporins (93% ceftazidime and 90% ceftriaxone); similarly, more than 90% of E. coli isolates were found to be resistant to third generation cephalosporins by Mohammadi-Mehr, et al. [19]. Carbapenem resistance was as low as 10% in our study; broadly similar results were reported by Aysen, et al. [20], who found 13.1% E. coli resistance to imipenem. Gunjal, et al. [21] reported that 28.10% of E. coli isolates were resistant to amikacin and 48.20% to gentamicin, whereas we found 7% and 38% resistance to amikacin and gentamicin, respectively. Fosfomycin, colistin, and tigecycline showed no resistance in E. coli strains.
In our study, Pseudomonas showed significant resistance to carbapenems (59% imipenem/meropenem), whereas a study published by Rakhee, et al. [22] showed 20.8% resistance to imipenem and a study published by Rajan, et al. [10] showed 12.9% carbapenem resistance in Pseudomonas. Pseudomonas also showed high resistance to third generation cephalosporins (53% cefoperazone/sulbactam and 39% ceftazidime) and aminoglycosides (48% gentamicin and 41% amikacin) in our study. Radji, et al. showed 60.9% resistance to ceftriaxone and found that amikacin was the most effective antibiotic against Pseudomonas, with 15.6% resistance [23]. We found colistin to be the most effective antibiotic against Pseudomonas, with only 7% resistance.
Limitations
The external validity of this cross-sectional survey cannot be determined. The analysis is based on a single-centre experience, and associated comorbidities were not taken into account; these comorbidities might contribute to Neyman's (prevalence) bias. Confounding factors, including age and gender, might also affect overall ICU mortality.
Conclusions
Multidrug-resistant gram-negative infections are on the rise in ICUs and are one of the contributory factors in the increase in overall ICU morbidity and mortality. Resistance to important antibiotic groups, including quinolones, piperacillin-tazobactam, and carbapenems, has increased substantially over the past few years. It is suggested that empiric antibiotics should be used according to the local ICU unit antibiograms. A constant evaluation of current practices on the basis of trends in multidrug resistance and antibiotic consumption patterns is essential.
Human Umbilical Cord Mesenchymal Stem Cell Exosomes Enhance Angiogenesis Through the Wnt4/β‐Catenin Pathway
Human umbilical cord mesenchymal stem cells (hucMSCs) and their exosomes have been considered as potential therapeutic tools for tissue regeneration; however, the underlying mechanisms are still not well understood. In this study, we isolated and characterized the exosomes from hucMSCs (hucMSC‐Ex) and demonstrated that hucMSC‐Ex promoted the proliferation, migration, and tube formation of endothelial cells in a dose‐dependent manner. Furthermore, we demonstrated that hucMSC‐Ex promoted wound healing and angiogenesis in vivo by using a rat skin burn model. We discovered that hucMSC‐Ex promoted β‐catenin nuclear translocation and induced the increased expression of proliferating cell nuclear antigen, cyclin D3, N‐cadherin, and β‐catenin and the decreased expression of E‐cadherin. The activation of Wnt/β‐catenin is critical in the induction of angiogenesis by hucMSC‐Ex, which could be reversed by β‐catenin inhibitor ICG‐001. Wnt4 was delivered by hucMSC‐Ex, and the knockdown of Wnt4 in hucMSC‐Ex abrogated β‐catenin nuclear translocation in endothelial cells. The in vivo proangiogenic effects were also inhibited by interference of Wnt4 expression in hucMSC‐Ex. Taken together, these results suggest that hucMSC‐Ex‐mediated Wnt4 induces β‐catenin activation in endothelial cells and exerts proangiogenic effects, which could be an important mechanism for cutaneous wound healing.
INTRODUCTION
Mesenchymal stem cells (MSCs) are adult stem cells with unique characteristics including long-term ex vivo proliferation, multilineage differentiation potential, and immunomodulatory properties [1]. MSCs have been suggested as promising candidates for a variety of therapeutic applications, such as immune disorders including graft-versus-host disease and systemic lupus erythematosus, bone and cartilage regeneration, neurological diseases, hepatic injury, acute renal failure, and myocardial infarction [2][3][4][5][6][7]. Paracrine action, rather than transdifferentiation, has been considered the predominant mechanism for the role of MSCs in tissue repair [8,9]. Cell-derived exosomes are emerging as a new mechanism of intercellular communication [10]. It has been proposed that MSC-derived exosomes are effective for reducing myocardial ischemia and reperfusion damage [11], alleviating liver fibrosis [12], and protecting against acute kidney injury [13]. These findings indicate that MSC-derived exosomes may serve as a novel therapeutic tool for tissue repair; however, the mechanisms underlying these actions remain elusive.
Therapeutic angiogenesis has begun to make substantial inroads in the fields of tissue engineering and wound healing [14,15]. Angiogenesis involves a variety of coordinated events including degradation of the extracellular matrix surrounding the parent vessel, migration and proliferation of the endothelial cells and mural cells to assemble the new vessel, lumen formation, and construction of the mural cell layer of the vessel wall with associated pericytes and/or smooth muscle cells [16]. Extracellular vesicles derived from human bone marrow mesenchymal stem cells can promote angiogenesis in tissue regeneration [17,18]. Exosomes enhance angiogenesis by delivering microRNAs, mRNAs, and active proteins [19][20][21][22][23][24]. We previously reported that exosomes from human umbilical cord mesenchymal stem cells (hucMSC-Ex) markedly enhance cutaneous wound healing; however, the effects of hucMSC-Ex on angiogenesis and the underlying mechanisms are not well characterized. Endothelial cells are the primary constituents of new vessels, and many endothelial cell functions are required for angiogenesis. In this study, we investigated the role of hucMSC-Ex in angiogenesis in vitro and in vivo. Our results showed that hucMSC-Ex promoted endothelial cell proliferation, migration, and tube formation. We found that hucMSC-Ex promoted angiogenesis to repair deep second-degree burn injury to skin by delivering Wnt4 and activating Wnt/β-catenin signaling in endothelial cells.
Cell Culture
HucMSCs were isolated and characterized, as described previously [25]. The cells in passages 3 and 4 were used for experiments. Human umbilical vein endothelial cell-derived cell line EA.hy926 and human lung fibroblasts (HFL1) were purchased from the cell bank of the Chinese Academy of Sciences (Beijing, People's Republic of China, http://english.cas.cn) and maintained in high-glucose Dulbecco's modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum (FBS).
Exosome Purification and Characterization
HucMSC-Ex and HFL1-Ex were purified by differential centrifugation, as described previously with modifications [26]. Briefly, hucMSCs and HFL1 were cultured in serum-free DMEM (ExCell Bio, Shanghai, People's Republic of China, http://www.excellbio.com) for 48 hours. When the cells reached 50% confluence, the conditioned medium was collected and centrifuged at 300g for 10 minutes, at 2,000g for 10 minutes, and at 10,000g for 30 minutes to remove cells and cell debris. The clarified supernatant was then concentrated with a 100-kDa molecular weight cutoff (MWCO) hollow fiber membrane (Millipore, Billerica, MA, http://www.emdmillipore.com) at 1,000g for 30 minutes. The concentrated supernatant was loaded onto a 30% sucrose/D₂O cushion (5 ml, density 1.210 g/cm³) and then ultracentrifuged at 100,000g for 3 hours (Optima L-90K; Beckman Coulter, Brea, CA, https://www.beckmancoulter.com). The exosome-enriched fraction was collected from the bottom of the tube and washed 3 times with phosphate-buffered saline (PBS) by centrifugation at 1,500g for 30 minutes with the 100-kDa MWCO membrane. Final exosomes were passed through a 0.22-μm filter and stored at −70°C. The protein content of the concentrated exosomes was determined using a bicinchoninic acid (BCA) protein assay kit (CWBIO, Shanghai, People's Republic of China, http://www.cwbiotech.bioon.com.cn/), and the exosomal markers CD9 and HSP70 were determined using Western blot. Purified exosomes were identified by transmission electron microscopy. A drop of exosomes (20 μl) was pipetted onto a grid coated with formvar and carbon and left standing for 5 minutes at room temperature. The excess fluid was removed with a piece of filter paper, and the sample was negatively stained with 3% (wt/vol) phosphotungstic acid (pH 6.8) for 5 minutes. After air drying under an electric incandescent lamp, the sample was analyzed by transmission electron microscopy (FEI Tecnai 12; FEI, Hillsboro, OR, http://www.fei.com). The diameter of exosomes was determined with the NanoSight LM10 instrument (NanoSight, Amesbury, U.K., http://www.nanosight.com). The samples were serially diluted in PBS to reach a particle concentration suitable for nanoparticle tracking analysis (NTA; 1.0 × 10⁸ to 2.5 × 10⁹ particles per milliliter; NanoSight) and then injected into the LM10 unit (approximately 500 μl) with a 1-ml sterile syringe. Videos were collected and analyzed using the NTA software (version 2.3, build 0006 BETA2; NanoSight). The protein content, used for exosome quantification, was determined with a BCA protein assay kit (CWBIO). The purified exosomes were stored at −70°C until use. The final concentration of exosomes used for treating cells in vitro was 80 or 160 μg/ml, and a total of 200 μg of exosomes was applied to treat each animal.
Exosome Labeling and Internalization
HucMSC-Ex were labeled with the crosslinkable membrane dye CM-DiI (red), according to the manufacturer's protocol (Molecular Probes; Thermo Fisher Scientific, Waltham, MA, http://www.lifetechnologies.com/us/en/home/brands/molecular-probes.html). The labeled exosome suspension was filtered through a 100-kDa MWCO hollow fiber membrane (Millipore), and PBS was used as a negative control. EA.hy926 cells (5 × 10³ per well) were seeded in 96-well plates and incubated with labeled exosomes at 37°C for 4 hours. The cells were then washed with PBS and fixed in 4% paraformaldehyde. The nuclei were counterstained with Hoechst 33342 (1:200). Confocal images were sequentially acquired with ArrayScan VTI (Thermo Fisher Scientific).
Cell-Counting Assay
A total of 3 × 10³ EA.hy926 cells were seeded in each well of a 96-well plate and incubated at 37°C for 12 hours. After synchronization with 2% FBS for 24 hours, cells were cocultured with exosomes (80 and 160 μg/ml) or exosome diluent (PBS) for 24, 48, 72, and 96 hours. Cell numbers were counted at the indicated time points, and all samples were assayed in triplicate.
Cell Migration Assay
The migration of EA.hy926 cells was assayed with Transwell (Corning, Corning, NY, http://www.corning.com) chemotaxis chambers (8-μm-pore filters) and a wound-healing assay. Briefly, after synchronization with 2% FBS for 24 hours, EA.hy926 cells (5 × 10⁴ in 200 μl) were suspended in serum-free medium and added to the upper chamber. Next, 600 μl of medium with 2% FBS containing hucMSC-Ex (80 and 160 μg/ml) or PBS was added to the lower chamber. Cells on the upper side of the membrane were wiped off with swabs after incubation for 12 hours at 37°C. Cells that had migrated through the membrane were fixed with 4% paraformaldehyde and stained with crystal violet. The stained cells were observed under a microscope, and at least six fields of cells were assayed for each group.
For the wound-healing assay, when the EA.hy926 cells had grown to a monolayer, wounds were scratched with a sterile plastic 10-μl micropipette tip, washed, and incubated with or without hucMSC-Ex for 12 hours. Images were taken with a digital camera at 0, 12, and 24 hours and measured manually with ImageJ software (NIH, Bethesda, MD, http://imagej.nih.gov/ij/). The data were reported as the ratio of migration relative to the control.
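The quantification described above reduces to comparing scratch areas over time. The sketch below shows one way this ratio could be computed from ImageJ area measurements; the area values are hypothetical placeholders, not the study's data.

```python
# Minimal sketch: wound closure from scratch-area measurements,
# expressed relative to control as described in the assay above.
def closure_pct(area_0h, area_later):
    # percent of the initial wound area that has closed
    return (area_0h - area_later) / area_0h * 100

ctrl = closure_pct(area_0h=1.00, area_later=0.60)     # e.g., PBS control
treated = closure_pct(area_0h=1.00, area_later=0.35)  # e.g., hucMSC-Ex
print(f"control: {ctrl:.0f}% closed; treated: {treated:.0f}% closed; "
      f"ratio vs control: {treated / ctrl:.2f}")
```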
Endothelial Tube Formation Assay
The endothelial tube-formation assay was performed following the manufacturer's protocol (BD Biosciences, Franklin Lakes, NJ, https://www.bdbiosciences.com). Matrigel (50 μl) was added to each well of a 96-well plate and allowed to polymerize. HucMSC-Ex (80 and 160 μg/ml) or PBS was added to the synchronized EA.hy926 cells (2 × 10⁴) plated on Matrigel. After incubation for 12 hours at 37°C, the cells were viewed under a microscope and photographed. Tube length was measured with ImageJ software.
Rat Skin Wound Model and Treatment
A rat model of skin-deep second-degree burn wound was established, as described previously [28,29]. Subsequently, 200 μg of exosomes (hucMSC-Ex or HFL1-Ex) suspended in 200 μl PBS was injected subcutaneously at 3 sites. The normal group received no treatment. The animals were housed individually. At 1 week and 2 weeks after treatment, the rats were sacrificed, and the wound area was excised for further analysis.
Hematoxylin and Eosin Staining
The wound skin and surrounding skin (4 mm²) were fixed in 4% paraformaldehyde (pH 7.4), gradually dehydrated, embedded in paraffin, cut into 4-μm sections, and stained with hematoxylin and eosin for light microscopy.
Lentiviral Knockdown of Wnt4 in HucMSCs
A Wnt4 short hairpin RNA (shRNA) lentiviral expression vector (Lenti-Wnt4-shRNA) was used for Wnt4 gene silencing, with Lenti-GFP-shRNA as the negative control. The Wnt4 shRNA lentiviral vectors were generated by ligating the annealed shRNA oligonucleotides into the Tet-pLKO-puro vector. The Wnt4 shRNA oligonucleotide sequences are as follows: forward, 5′-CCGGCCCAAGAGATACTGGTTGTATCTCGAGATACAACCAGTATCTCTTGGGTTTTTG-3′; reverse, 5′-AATTCAAAAACCCAAGAGATACTGGTTGTATCTCGAGATACAACCAGTATCTCTTGGG-3′. The sequences of the control shRNA are as follows: forward, 5′-CCGGGCAAGCTGACCCTGAAGTTCATCTCGAGATGAACTTCAGGGTCACGTTGCTTTTTG-3′; reverse, 5′-AATTCAAAAAGCAAGCTGACCCTGAAGTTCATCTCGAGATGAACTTCAGGGTCACGTTGC-3′. The recombinant lentivirus was produced by cotransfecting HEK293T cells with PLKO-GFP-shRNA or PLKO-Wnt4-shRNA, PU1562, and PU1563 plasmids using Lipofectamine 2000 (Invitrogen; Thermo Fisher Scientific). The virus-containing supernatant was harvested at 48 and 72 hours after transfection. HucMSCs were transduced with the prepared lentivirus (Lenti-Wnt4-shRNA or Lenti-GFP-shRNA). Stable cell lines were obtained after selection with 1 μg/ml puromycin (Invitrogen) for 15 days. The expression of shRNA was induced by the addition of 80 μg/ml doxycycline for 2 days. The efficiency of Wnt4 knockdown was evaluated using quantitative reverse transcription polymerase chain reaction and Western blot. The stable cell lines were cultured in serum-free medium for 48 hours; the supernatants were then collected and exosomes isolated for further study.
Cytoplasm and Nuclear Fractionation
Cytoplasmic and nuclear fractionation was performed according to the manufacturer's instructions (HiScript II First Strand cDNA Synthesis Kit, R211-02; Vazyme Biotech, Nanjing, Jiangsu, People's Republic of China, http://www.vazyme.com). Briefly, cells were harvested and washed once with cold phosphate-buffered saline. Cells were then suspended in isolation buffer A mixed with protease inhibitors and rotated at 4°C for 1 minute. After centrifugation at 12,000g at 4°C for 5 minutes, the supernatant containing the cytoplasmic fraction was collected. The remaining cell debris was then suspended in isolation buffer B mixed with protease inhibitors and rotated at 4°C for 1 minute; this was repeated 3 times at 10-minute intervals. The cytoplasmic and nuclear fractions were stored at −80°C for Western blot detection.
Statistical Analysis
All data were expressed as mean ± SD. Statistical analysis was performed using GraphPad Prism 5.0 software (GraphPad Software, La Jolla, CA, http://www.graphpad.com). The means of different treatment groups were compared by one-way analysis of variance. A p value < .05 was considered statistically significant.
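For readers reproducing the comparison outside GraphPad, a minimal one-way ANOVA sketch in Python is shown below; the group values are placeholders, not data from the study.

```python
# One-way ANOVA across treatment groups, mirroring the analysis
# described above (done in GraphPad Prism); sketched here with SciPy.
# The group values are placeholders, not data from the study.
from scipy import stats

pbs    = [1.00, 1.05, 0.97]
ex_80  = [1.35, 1.42, 1.30]   # 80 ug/ml hucMSC-Ex
ex_160 = [1.71, 1.66, 1.78]   # 160 ug/ml hucMSC-Ex

f_stat, p_value = stats.f_oneway(pbs, ex_80, ex_160)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}  "
      f"({'significant' if p_value < 0.05 else 'n.s.'} at p < .05)")
```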
RESULTS

Exosome Characterization and Internalization
HucMSC-Ex were successfully isolated using the differential centrifugation method and purified on a 30% sucrose/D2O cushion. The morphology of purified exosomes was observed by transmission electron microscopy. As shown in Figure 1A and 1B, the exosomes had a characteristic saucer-like shape, limited by a lipid bilayer, with a diameter ranging from 30 to 100 nm. A comparison of the protein profiles of cell lysates and exosomes revealed that the composition of cell lysates differed from that of exosomes (Fig. 1Di). The results of Western blot showed that hucMSC-Ex expressed exosomal markers such as CD9 and HSP70, and these proteins were more abundant in exosome fractions (Fig. 1Dii). To test the internalization of exosomes by endothelial cells, the CM-DiI-labeled exosomes were added into culture medium and incubated with EA.hy926 cells for 4 hours at 37°C. We found that red fluorescence was detected in the EA.hy926 cells, whereas the PBS group had no fluorescence, suggesting the internalization of labeled exosomes by EA.hy926 cells (Fig. 1C).
HucMSC-Ex Promoted the Proliferation and Migration of Endothelial Cells
To determine the effects of hucMSC-Ex on endothelial cell growth, we evaluated the viability of EA.hy926 cells treated with various concentrations of exosomes (80 and 160 μg/ml) for different times (24, 48, 72, and 96 hours). The results of the MTT assay showed that hucMSC-Ex promoted EA.hy926 cell growth in a dose-dependent manner (Fig. 2A), and this finding was further confirmed by the results of the cell-counting assay (Fig. 2B). Endothelial cell migration is critical for angiogenesis [30]. To analyze whether the migratory ability of EA.hy926 cells was affected by hucMSC-Ex, EA.hy926 cells were incubated with different concentrations of hucMSC-Ex (80 and 160 μg/ml) for 12 hours. The results of both the transwell migration assay and the wound-healing assay showed that hucMSC-Ex incubation induced an enhanced migratory capacity in EA.hy926 cells relative to the control in a dose-dependent manner (Fig. 3).
HucMSC-Ex Improved the Tube-Formation Ability of Endothelial Cells In Vitro and Promoted Angiogenesis in a Cutaneous Burn Model In Vivo
The ability of hucMSC-Ex to promote angiogenesis in vitro was evaluated by using the endothelial tube formation assay. As shown in Figure 4A, hucMSC-Ex increased the tube length more efficiently than the control in a dose-dependent manner. To study the effects of hucMSC-Ex on angiogenesis in vivo, we established a skin-deep second-degree burn model in rats. The results of histological evaluation of wounds at 1 week after infusion showed that the number of epidermal and dermal cells significantly increased in hucMSC-Ex-treated wounds (Fig. 4B), whereas wounds treated with PBS and control exosomes (HFL1-Ex) were still in a second-degree burn-injury state (Fig. 4B). The results of immunofluorescence staining for the vascular endothelial cell marker CD31 showed that the hucMSC-Ex group had more CD31-positive cells in the wound area than the PBS and HFL1-Ex groups at both 1 week and 2 weeks after infusion (Fig. 4C). Taken together, these results revealed that hucMSC-Ex improved the function of endothelial cells in vitro and promoted angiogenesis in a cutaneous burn model in vivo.
HucMSC-Ex Promoted Angiogenesis by Activating Wnt/β-Catenin Signaling
Many studies have proved that the Wnt signaling pathway plays important roles in angiogenesis [31][32][33]. We hypothesized that Wnt/β-catenin signaling might be involved in the biological effects of hucMSC-Ex on angiogenesis. We found that hucMSC-Ex improved the stability and nuclear accumulation of β-catenin in a concentration-dependent manner (Fig. 5A, 5B). EA.hy926 cells were treated with different amounts of hucMSC-Ex (80 and 160 μg/ml) for 12 hours, and the whole-cell lysates were collected. The results of Western blot showed that PCNA and cyclin D3 protein expression was increased in a dose-dependent manner (Fig. 5C-5E). To investigate whether alteration of cell junctional components could be responsible for the increase in cell motility, we detected the expression of N-cadherin and E-cadherin in EA.hy926 cells. We found that hucMSC-Ex treatment increased N-cadherin expression and decreased E-cadherin expression in endothelial cells (Fig. 5C). We next determined the importance of β-catenin activation in hucMSC-Ex-mediated angiogenesis in vivo. Coinjection of the β-catenin inhibitor ICG-001 significantly reversed the increase of vascular density in the wound area from hucMSC-Ex (Fig. 5F). In summary, hucMSC-Ex activated Wnt/β-catenin signaling to enhance angiogenesis in the process of wound healing.
HucMSC-Ex-Delivered Wnt4 Was Critical for Angiogenesis In Vitro and In Vivo
Because of the special molecular structure of Wnt, it is believed that Wnt can be delivered by exosomes in tumor metastasis [34] and tissue development [35]. Our recent study also discovered that Wnt4 is highly expressed in hucMSC-Ex and is required for hucMSC-Ex-mediated cutaneous regeneration [29]. In this study, we wanted to explore the role of hucMSC-Ex-delivered Wnt4 in wound angiogenesis. We knocked down Wnt4 in hucMSCs by using shRNA (Fig. 6A). The enhanced nuclear translocation of β-catenin by hucMSC-Ex was abrogated by Wnt4 knockdown in EA.hy926 cells (Fig. 6B). Furthermore, hucMSC-Ex-induced formation of longer tubes was also abolished by Wnt4 knockdown (Fig. 6C). Wnt4 knockdown delayed the expression of the angiogenesis marker CD31 induced by hucMSC-Ex in wound healing (Fig. 6D). These results indicate that Wnt4 plays an important role in hucMSC-Ex-mediated angiogenesis in cutaneous wound healing.
DISCUSSION
MSCs have been reported to be ideal candidates for tissue repair. Our previous studies demonstrated that MSCs derived from human umbilical cords could efficiently repair ischemia- and reperfusion-induced acute kidney injury and carbon tetrachloride-induced hepatic injury [7,8,36]. MSCs have been shown to perform their therapeutic roles by promoting angiogenesis [37][38][39]. It is generally accepted that transplanted MSCs promote angiogenesis mainly through paracrine mechanisms [40]. Cell-derived exosomes have been described as the most important effective ingredients that play an important role in cell-to-cell communication [41,42]. In this study, we successfully isolated exosomes from hucMSCs and characterized hucMSC-Ex. Membrane dye labeling showed that hucMSC-Ex could be incorporated into EA.hy926 endothelial cells. This is consistent with previous reports showing the incorporation of exosomes into renal tubular epithelial cells [16]; however, the underlying mechanisms of internalization remain unclear. Camussi and colleagues suggest that adhesion molecules such as CD29 and α4 integrin might contribute to this process [43,44]. Our flow cytometry detection showed that hucMSC-Ex were also positive for CD29 (data not shown). The effects of hucMSC-Ex on proliferation, migration, and tube formation of endothelial cells together contribute to new vessel formation [21]. Furthermore, our previous studies proved that hucMSC-Ex could alleviate liver fibrosis [15], protect against acute kidney injury [16], and enhance cutaneous wound closure [29]. Based on these findings, we proposed that hucMSC-Ex might achieve therapeutic effects by promoting angiogenesis; however, it remains largely unknown which component in hucMSC-Ex contributes to the proangiogenic activity.
Active mRNAs, microRNAs, and proteins can be transported by exosomes [45,46]. Many factors delivered by exosomes have been found to play an important role in angiogenesis [19][20][21][22][23]43]. Endothelial cells that secrete exosomes containing miR-214 induce angiogenesis in human and mouse endothelial cells [19]. Exosomes derived from hypoxic leukemia cells enhance tube formation in endothelial cells by mediating miR-210 delivery [21]. Exosomal miR-135b from hypoxic multiple myeloma cells enhanced endothelial tube formation under hypoxia via the HIF-FIH signaling pathway [22]. Exosomes can reflect the hypoxic status of glioma cells and mediate hypoxia-dependent activation of vascular cells by delivering matrix metalloproteinases, IL-8, platelet-derived growth factors (PDGFs), caveolin 1, and lysyl oxidase during tumor development [23]. Endothelial progenitor cell-derived microvesicles activate an angiogenic program in endothelial cells by a horizontal transfer of mRNAs, such as mRNAs associated with the PI3K/AKT signaling pathway [43]. All of these findings suggest that exosomes may promote angiogenesis by transporting active molecules to endothelial cells.

[Fig. 5 legend (displaced): The cutaneous-burn rat models were injected with hucMSC-Ex with or without ICG-001 (1 mg per rat). PBS and DMSO were used as controls for hucMSC-Ex and ICG-001, respectively. Representative immunofluorescence images of CD31 expression in the wound area are shown (×200). Abbreviations: DMSO, dimethyl sulfoxide; GAPDH, glyceraldehyde-3-phosphate dehydrogenase; hucMSC-Ex, exosomes from human umbilical cord mesenchymal stem cells; PBS, phosphate-buffered saline; PCNA, proliferating cell nuclear antigen.]
The Wnt signaling pathway is critically involved in angiogenesis through the modulation of endothelial cell proliferation, migration, vascular sprouting, remodeling, and vascular system maturation [36][37][38]. Taverna et al. reported that treatment with exosomes from chronic myelogenous leukemia caused an increase in endothelial cell motility accompanied by a loss of VE-cadherin and β-catenin from the endothelial cell surface [47]. We previously demonstrated that hucMSC-Ex strongly activated Wnt/β-catenin signaling [29]. The Wnt/β-catenin signaling inhibitor ICG-001 reversed the angiogenesis induced by hucMSC-Ex in an animal model. Previous studies suggested that exosomal proteins such as sonic hedgehog [48], HSP20 [49], IL-8, and PDGFs [23] might contribute to the proangiogenic activity. In addition, recent studies have reported that exosomes and extracellular vesicles could carry Wnt on their surface to induce Wnt signaling activity in target cells [34,35,50]. Because of lipid modification, Wnt cannot spread over a distance among cells [51]; consequently, the range of Wnt signaling is narrowed. Panáková et al. suggested that exosome-like particles are the main carriers of Wnt molecules [52]. Exosomes promote the growth of Drosophila wings by delivering Wnt and activating Wnt signaling [34]. Cancer-associated fibroblast-secreted exosomes mediate cancer cell Wnt11 autocrine production to promote tumor metastasis [35]. Menck et al. found that two different types of extracellular vesicles were involved in Wnt5a induction and transportation during the process of macrophage-mediated malignant invasion [50]. Our recent study also indicated that hucMSC-Ex-delivered Wnt4 promotes wound healing by activating Wnt/β-catenin signaling [29]. Consequently, we explored the role of hucMSC-Ex Wnt4 in angiogenesis during tissue regeneration. Our data showed that Wnt4 knockdown delayed the expression of CD31 induced by hucMSC-Ex in wound healing. In conclusion, hucMSC-Ex could promote tube formation of endothelial cells in vitro and angiogenesis in a rat injury model. Our findings showed that hucMSC-Ex enhanced angiogenesis by delivering Wnt4 to activate Wnt/β-catenin in endothelial cells, which provides a mechanism for tissue repair and shows promise for tissue-engineering applications in regenerative therapy.

[Fig. 6 legend (displaced): The rat wound models were treated with PBS, shControl-Ex, or shWnt4-Ex for 1 week. The expression of CD31 in the wound area was determined using immunofluorescence staining (×200). Abbreviations: GAPDH, glyceraldehyde-3-phosphate dehydrogenase; hucMSCs, human umbilical cord mesenchymal stem cells; hucMSC-Ex, exosomes from human umbilical cord mesenchymal stem cells; n.s., not significant; PBS, phosphate-buffered saline; PCNA, proliferating cell nuclear antigen; shControl, control-short hairpin RNA lentiviral expression vector; shControl-Ex, exosomes from control-short hairpin RNA-transfected hucMSCs; shWnt4, Wnt4-short hairpin RNA lentiviral expression vector; shWnt4-Ex, exosomes from Wnt4-short hairpin RNA-transfected hucMSCs.]
CONCLUSION
Our results have clearly demonstrated that hucMSC-Ex enhance angiogenesis in the repair of skin second-degree burn injury and that knockdown of Wnt4 in hucMSC-Ex delays tube formation of endothelial cells in vitro and the expression of CD31 in vivo.
PV Plant Equipment Labels and Layouts Can Be Validated by Analyzing Cloud Motion in Existing Plant Measurements
Abstract-Large-scale photovoltaic plants collect monitoring and operational data at various spatial scales within the plant (e.g., strings, combiners, and inverters). Manual validation of the spatial position of these plant segments relative to the plant design requires on-site observations that may be prohibitively costly or labor intensive. This article presents a methodology for validating plant segment position based on operational data from the plant. By observing the delay between segment responses to cloud motion, predictions of their relative positions within the plant can be made. The method was demonstrated on combiner-level data from a 20-MW, operational photovoltaic plant in the United States. Several instances of apparently mislabeled combiners were identified from the analysis. A partial validation of 20 combiners was conducted by inspecting the plant, with results showing complete agreement between observation and predictions. Predictions derived from this methodology can serve as the basis for further plant inspection and corrective maintenance.
I. INTRODUCTION
Continued impacts of climate change [1] provide the motivation for a global transition to carbon-free sources of electricity generation. Renewable sources of generation, including wind and solar photovoltaics (PV), are expected to represent a major share of a future clean energy portfolio. Least-cost scenario analyses indicate that wind and solar should provide 60%-80% of electricity generation in the United States to meet the goal of transitioning to a carbon-free electricity system by 2035 [2]. Solar alone makes up around 25% of the overall generation in these scenarios. Meeting these targets would require deployment at rates as high as four times current levels [2].
Achieving that level of deployment will require continued growth in installation and operation of large-scale PV facilities.
Operators of large-scale PV plants conduct extensive monitoring of plant operation and performance. These data are used to observe equipment failures, damage, or off-design operation and can be used to guide operations and maintenance activities. In addition to monitoring at the overall plant level, data are typically collected at various segmented levels within the plant, such as strings, combiners, or inverters. This degree of monitoring leads to a significant number of labeled data collection entities. For example, a 20-MW plant may consist of thousands of strings, hundreds of combiners, and dozens of inverters. The potential for mistakes during the construction and commissioning process in labeling these entities reduces the confidence in plant operational data and represents a financial risk to plant operators. Due to the potential occurrence of mislabeled entities, monitoring that indicates malfunction in a section of the plant could lead to workers being deployed in the wrong region of the plant, or even an inability to determine where remediation is needed. Unfortunately, validating the labeling based on inspection or manual testing represents a potentially laborious and time-consuming task that may itself be cost-prohibitive.
As an alternative, this article introduces a method to remotely validate the position of plant segments by observing cloud motion over the plant, derived from time-resolved generation data from the individual segments. To our knowledge, this concept was first proposed in a conference article by Ranalli [3], upon which this work is built.

The rest of this article is organized as follows. The principle of the method will be described in Section III, while a concrete application on combiner-level data from a utility-scale PV plant will be described in Section IV, including validation results in Section IV-G. Finally, Section V concludes this article.
II. BACKGROUND
Broadly speaking, describing the relationship between generation in different segmented areas within a plant falls into the topic of spatiotemporal variability of the irradiance resource [4], [5]. Many previous studies have characterized spatiotemporal variability and its effect on irradiance forecasting. Studies have commonly described variability in terms of the correlation between spatially disparate measurements [6], [7]. In particular, results have shown that spatial correlations in the signal result from advected cloud cover [8]. As a result, it is well known that correlation decreases with increasing separation distance [4] due to the reduced likelihood of identical cloud patterns overhead with greater spatial separation. Further highlighting the relationship between the spatial distribution of irradiance and cloud cover, many studies have observed anisotropy in the correlation in the along- and across-wind directions relative to cloud motion [9], [10], and studies have been conducted to attempt to model these effects [11], [12].

When considering generation from a PV facility, the spatiotemporal variability in the irradiance is aggregated, leading to an effective smoothing of the variable irradiance across the plant's spatial extent. Studies have attempted to describe this aggregation through its relationship to correlation between sensors within the plant [13]. Studies have also utilized the principle of advected cloud features over the plant as a method of forecasting aggregate plant generation [14], [15]. One previous model explicitly described the relationship between distributed areas responding to frozen cloud motion using transfer functions to represent the plant [16].
III. THEORETICAL PRINCIPLE OF THE METHOD
The method proposed in this article utilizes delay between spatially distributed plant segments subject to observed cloud motion to compute a prediction of segment positions relative to those clouds. A simplified example will be used to describe the mathematics behind the method this study proposes. Consider two sensors, P1 and P2, believed to be separated by a distance d. Assuming advection of frozen clouds across the two points at a velocity V, the response of the sensors will exhibit a relative delay, τ, in their irradiance signals [16]. The delay-based separation distance, d′, between these points can be computed using the simple equation

d′ = |V| τ.  (1)

Ideally, d and d′ would be equal, but in practice, differences between the two distances d and d′ indicate a mismatch between the expected positions of the sensors and the positions inferred from the cloud motion. Such a discrepancy could be caused either by uncertainty in the measurement of τ or by an incorrect expectation about the actual separation distance between the points (i.e., their true locations are unknown).
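As a quick numerical illustration of (1), the sketch below computes the delay-based separation for a hypothetical sensor pair; all numbers are made up for the example.

```python
# Worked example of (1): delay-based separation d' = |V| * tau.
# The velocity, delay, and expected distance are hypothetical.
import numpy as np

V = np.array([8.0, 3.0])   # cloud motion vector, m/s
tau = 6.7                  # measured delay between P1 and P2, s
d_expected = 60.0          # separation from the plant drawings, m

d_prime = np.linalg.norm(V) * tau
print(f"d' = {d_prime:.1f} m vs. expected d = {d_expected:.1f} m; "
      f"mismatch = {d_prime - d_expected:+.1f} m")
```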
A. In Two Dimensions
Extending the example to two dimensions, we can consider two discrete time periods (A and B) which exhibit cloud motion along two distinct vectors, VA and VB. For a target sensor believed to be located at P1, consider two other sensors as reference points, P2A and P2B, each aligned relative to P1 along their respective cloud motion vectors (CMVs), VA and VB. The measured time series delays, τA and τB, between P1 and each of the reference points P2 result in predicted distances, d′A and d′B, according to (1). These distances result in implied positions, P′1A and P′1B, along the respective cloud motion axes. Stated another way, P′1A and P′1B represent the apparent position of P1 based on the signal time delay relative to each of the reference points, P2. A depiction of this geometric interpretation can be seen in Fig. 1.

As before, mismatch between the locations P′1A and P′1B and the supposed position of P1 indicates a discrepancy between the expected position of P1 and that inferred from the cloud motion. We will term these mismatches the error vectors eA and eB, pointing from P1 to the implied positions along each axis:

eA = P′1A − P1,  eB = P′1B − P1.  (2)

The predicted position of the sensor, P′1, derived from the time delays, can be calculated as the intersection of the lines perpendicular to each cloud motion vector, passing through P′1A and P′1B. This is graphically depicted in Fig. 2. Defining a Cartesian coordinate system with P1 as the origin, the offset of position P′1 from P1 can be calculated from the x and y components of vectors eA and eB as follows (a derivation may be found in the Appendix):

P′1x = (eBy |eA|² − eAy |eB|²) / (eAx eBy − eAy eBx),  (3)

P′1y = (eAx |eB|² − eBx |eA|²) / (eAx eBy − eAy eBx).  (4)

Inspection of these equations highlights the importance of using cloud motion vectors that are nearly perpendicular to produce well-conditioned results. As VA and VB approach parallel (or antiparallel) orientation, the denominator of (3) and (4) tends toward zero, magnifying any uncertainty in the measurement of τ, which serves as the basis for the vectors e.
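A minimal sketch of (3) and (4) follows, solving the two dot-product conditions derived in the Appendix with Cramer's rule; the error-vector values are arbitrary illustrations.

```python
# Sketch of (3) and (4): intersect the two lines perpendicular to
# the CMV axes to recover the predicted position P1'. e_A and e_B
# are the per-axis error vectors; the values below are illustrative.
import numpy as np

def predict_position(e_A, e_B):
    """Solve P1'.e_A = |e_A|^2 and P1'.e_B = |e_B|^2 via Cramer's rule."""
    det = e_A[0] * e_B[1] - e_A[1] * e_B[0]
    if abs(det) < 1e-9:  # CMVs (anti)parallel -> ill-conditioned
        raise ValueError("CMV axes nearly parallel; pick another pair")
    a2, b2 = np.dot(e_A, e_A), np.dot(e_B, e_B)
    x = (e_B[1] * a2 - e_A[1] * b2) / det   # eq. (3)
    y = (e_A[0] * b2 - e_B[0] * a2) / det   # eq. (4)
    return np.array([x, y])

# Example: offsets of P1' from P1 (meters) for two error vectors.
print(predict_position(np.array([12.0, 4.0]), np.array([-3.0, 9.0])))
```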
B. Averaging Multiple Reference Points
Expanding to a hypothetical plant, we use P1 to represent a single target plant segment. Each other individual segment (i.e., string, combiner, inverter, etc.) represents a choice for the reference points, P2A and P2B, removing the restriction that they lie exactly along the CMV relative to P1. These numerous references provide an opportunity to compute many candidate positions P′1A and P′1B and to average to reduce the effects of noise and uncertainty in computation of the τ values. A graphical example of this process in the VA direction is shown in Fig. 3. Each reference point, P2A,i, yields a predicted position P′1A,i projected parallel to the cloud motion vector, VA. The average predicted position, P′1A, is then computed by averaging each of these distances to yield an average projection from P1 along the cloud motion vector. Mathematically, this is equivalent to averaging the individual vectors, eA,i, according to

eA = (1/N) Σi eA,i.  (5)

Any resultant discrepancy between the inferred P′1 and the position P1 (which is based on plant design drawings) represents the difference between the expected position of the plant segment and the apparent position of the segment as inferred from the measured signal time delays. As previously stated, any such discrepancy suggests a potential mislabeling and further investigation is warranted.
IV. CASE STUDY AND DISCUSSION
In order to test the principles of the proposed method and to develop a concrete implementation thereof, a case study was performed based upon data from an operational PV plant, with the goal of cross-checking the expected segment locations.

The plant studied was an approximately 20-MW PV plant located in the United States. The plant was analyzed at the level of its 221 combiners, with individual capacities ranging from approximately 75 to 120 kW each. Data from the plant were extracted for each combiner for the entire year of 2022. Data consisted of generation current time series, in amperes, at the combiner level. The data source's sampling interval was irregular to save on storage space, with data recorded only when a sufficiently large value change occurred. The fastest sampling occurred at intervals of 10 s, so irregularly sampled data were interpolated to even sampling at a 10-s rate for analysis.
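A short sketch of this preprocessing step is shown below, assuming the change-triggered records have been exported to a CSV file; the file and column names are hypothetical, and pandas' time-based interpolation stands in for whatever interpolation the plant data pipeline actually used.

```python
# Regularize change-triggered combiner current records to a uniform
# 10-s grid, as described above. File and column names are hypothetical.
import pandas as pd

raw = pd.read_csv("combiner_currents.csv",
                  parse_dates=["timestamp"], index_col="timestamp")

# Build a uniform 10-s grid spanning the record, interpolate the
# irregular samples in time, then keep only the grid points.
grid = pd.date_range(raw.index.min().ceil("10s"),
                     raw.index.max().floor("10s"), freq="10s")
regular = (raw.reindex(raw.index.union(grid))
              .interpolate(method="time")
              .reindex(grid))
```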
The following steps were used to apply the proposed method to data from the plant, and a graphical flowchart representing the overall process is shown in Fig. 4.
1) Digitize the plant layout to obtain expected locations of combiners.
2) Select a target combiner, P1, leaving the remaining combiners as potential references, P2,i.
3) Identify time periods with suitable CMVs.
4) Compute average predicted positions, P′1A and P′1B, of the target along a pair of CMV axes.
5) Compute the predicted target position, P′1, for the given CMV pair.
6) Repeat 4) and 5) for each possible CMV pair from 3).
7) Repeat 2)-5) for each desired target combiner.
A. Digitizing the Plant and Selecting Targets
Combiner footprints were identified from the plant design drawings. In this case, combiner footprints were all rectangular in shape and aligned along the cardinal direction axes. To represent these footprints as a single "average" spatial position, the geometric centroid of each combiner footprint was computed [16]. The layout of the plant used for the case study is shown in Fig. 5. Each individual combiner was analyzed sequentially to produce predictions for the entire plant.
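For rectangular, axis-aligned footprints like these, the centroid is just the mean of the corner coordinates; a minimal sketch with hypothetical footprint corners follows.

```python
# Represent each combiner footprint by its geometric centroid, as
# described above. For rectangles, the centroid is the mean of the
# corners. Names and coordinates (meters, plant-local) are hypothetical.
import numpy as np

footprints = {
    "CB-001": [(0, 0), (20, 0), (20, 8), (0, 8)],     # rectangle corners
    "CB-002": [(0, 37), (20, 37), (20, 45), (0, 45)],
}

centroids = {name: tuple(np.mean(pts, axis=0))
             for name, pts in footprints.items()}
print(centroids)  # {'CB-001': (10.0, 4.0), 'CB-002': (10.0, 41.0)}
```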
B. Calculating Cloud Motion Vectors
A variety of methods exist for determining cloud motion vectors (CMVs). Some techniques allow the identification of cloud motion using all-sky imagers [17], [18], satellite data [19], [20], or numerical weather prediction [13]. Other techniques obtain the CMV by observing spatiotemporal correlations between members of a distributed measurement network [21], [22]. In the present case study, we applied the method of Jamaly and Kleissl [22] on the effective measurement network made up of the individual combiners. Simply described, Jamaly and Kleissl's method computes the delay between every pair of combiners. This method relies on computing the lag associated with the maximum value of cross correlation between the individual signals. The CMV orientation is determined as the direction that minimizes the variance in the corresponding velocity readings for all possible sensor pairings, and the CMV magnitude is computed as the median velocity at this direction. As discussed in Section III-A, it is necessary to analyze a minimum of two, nonparallel CMVs in order to predict the 2-D location of a target point. Practically speaking, these should each represent time periods during which the CMV is stationary and not varying in time. Additionally, as noted, the angle between CMVs must not be close to 0° or 180° to avoid singularities in the calculation.
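The sketch below condenses the direction-search idea summarized above into a few lines: for each candidate direction, the pairwise delays and along-direction separations imply speeds, and the chosen direction minimizes their variance. It assumes the pairwise separations and delays are precomputed and omits the quality-control steps of the published method [22].

```python
# Condensed sketch of the CMV search summarized above. Inputs are
# arrays of pair separations (dx, dy in m) and delays (tau in s),
# assumed precomputed; this simplifies the published method [22].
import numpy as np

def estimate_cmv(dx, dy, tau, n_dirs=360):
    best = None
    for theta in np.linspace(0, 2 * np.pi, n_dirs, endpoint=False):
        along = dx * np.cos(theta) + dy * np.sin(theta)
        ok = np.abs(tau) > 1e-6            # skip near-zero delays
        speeds = along[ok] / tau[ok]       # implied speed per pair
        var = np.var(speeds)
        if best is None or var < best[0]:  # direction of minimum variance
            best = (var, theta, np.median(speeds))
    _, theta, speed = best                 # magnitude = median speed
    return speed * np.array([np.cos(theta), np.sin(theta)])
```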
For the purposes of this case study, CMVs were identified manually by visually comparing the output of the Jamaly and Kleissl CMV algorithm with animations of the time-resolved intensity of the plant generation field. An example of the visualization using two snapshots in time is shown in Fig. 6. Suitable CMVs were identified for a total of 14 time periods, with durations ranging from 30 min to 1 h, between the months of June and November. CMVs were found with directions in all four quadrants of the Cartesian plane. When defining "perpendicular" CMV pairs for the analysis, pairings were limited to those with relative angular separations between 45° and 135°. This resulted in 35 possible CMV pairings that were used for the analysis.
C. Computing the Position Along Each CMV
Predicting a combiner position according to (1) and (2) requires calculation of the delay, τ, between the target combiner and reference combiners. Several well-known signal processing techniques could be employed for this purpose. Many studies have considered the lag at which peak cross-correlation occurs between the signals as a way to measure their relative lag, and as mentioned, that method is used by the CMV calculation [22].

For the purposes of this study, the delay was calculated from the phase of a transfer function between each pair of two points. The transfer functions were calculated by first normalizing individual signals to their mean, then applying a Hamming window; the delay was obtained by fitting the group delay, i.e., the slope of the transfer function phase with frequency. One benefit of obtaining the delay from the transfer function is that it enables noninteger values of the delay relative to the period of the data sampling rate to be computed. Additionally, and of specific importance to this study, delay can also be computed for cases whose value is less than the sampling time step. An example transfer function between two points, along with the curve fit for delay, is shown in Fig. 7. While the time series data used here had a sampling period of 10 s, in principle, the method could be used for data with a slower sampling rate (e.g., 30 s or 1 min data). For example, the magnitude of the delay indicated in Fig. 7 has a value of τ = 6.7 s, which is less than the 10 s resolution of the time series data and suggests that delays can be observed that are smaller than the sampling period. However, the ability to use lower frequency data is likely to face practical limitations in the ability to accurately identify the relative delay between combiner pairs. As spatial separation and time are related by the actual speed of cloud motion, lower frequency data may favor the use of slower cloud speeds, but focusing on lower cloud speeds also increases the likelihood of the frozen clouds assumption being violated by clouds evolving during their transit of the plant. Future research may wish to consider the degree to which the methodology is compatible with lower frequency time series data.
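A sketch of this delay estimate is shown below, using SciPy's cross-spectral density in place of a full transfer-function estimate and fitting the low-frequency phase slope; the window length, band limit, and sign convention are illustrative assumptions.

```python
# Estimate a pair delay from the phase of the cross spectrum, in the
# spirit of the transfer-function approach described above: fit the
# low-frequency phase slope (group delay). Window length and band
# limit are illustrative choices, not values from the paper.
import numpy as np
from scipy.signal import csd

def pair_delay(x, y, fs=0.1, fmax=0.02):
    """Delay of y relative to x in seconds (positive: y lags x).
    fs = 0.1 Hz corresponds to the 10-s sampling used here."""
    x = x / np.mean(x) - 1.0          # normalize to the mean
    y = y / np.mean(y) - 1.0
    f, Pxy = csd(x, y, fs=fs, window="hamming", nperseg=256)
    band = (f > 0) & (f <= fmax)      # keep the coherent low band
    phase = np.unwrap(np.angle(Pxy[band]))
    slope = np.polyfit(f[band], phase, 1)[0]   # d(phase)/df, rad/Hz
    return -slope / (2 * np.pi)                # tau in seconds
```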
According to (1) and (2), delays were calculated between the input target combiner and the reference combiner as an output, considering all other combiners in the plant as potential references. A graphical visualization of the delays obtained for the plant during a single CMV window is shown in Fig. 8. As seen, delay values show a relatively smooth gradient in the direction of the cloud motion, corresponding to the leading or lagging of the individual combiner signals relative to the reference and their geometric separation. As observed in other studies [7], [9], [13], [23], the correlation between points decreases at greater distance from the input, particularly perpendicular to the cloud motion. This manifests as a reduction in the coherence of the transfer function at greater separation distances.

The delays were transformed into positions along the CMV axis using the equations described previously. As in Section III-B, delays from multiple reference points were averaged together to produce a single value for the predicted position along a given CMV axis. Several methods for selecting a subset of combiners to average were tested, including the use of all references, or downselecting to the closest neighbor references or those with the strongest correlation to the target point. Good results were obtained when computing the predicted positions along the CMV axis using the eight reference points with the highest average value of the coherence within a low-frequency window (f < 0.02 Hz). This ensured that only the points with the strongest relationship to the target point influenced the position prediction. This filtering was particularly helpful for managing the potential loss of correlation perpendicular to the cloud motion [9], [10]. The same process was repeated for each suitable CMV to obtain the predicted positions along each axis.
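A compact sketch of this reference-selection and averaging step follows; the coherence window and segment length are the illustrative values noted above, and inputs are assumed precomputed.

```python
# Select the eight references with the highest mean low-frequency
# coherence (f < 0.02 Hz) and average their implied error vectors,
# as described above. Inputs are assumed precomputed; nperseg is an
# illustrative choice.
import numpy as np
from scipy.signal import coherence

def top_references(target, references, fs=0.1, fmax=0.02, k=8):
    scores = []
    for ref in references:
        f, Cxy = coherence(target, ref, fs=fs, nperseg=256)
        scores.append(np.mean(Cxy[(f > 0) & (f <= fmax)]))
    return np.argsort(scores)[-k:]     # indices of the k best references

def averaged_axis_vector(e_vectors, idx):
    # Average the selected per-reference vectors e_A,i -- eq. (5).
    return np.mean([e_vectors[i] for i in idx], axis=0)
```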
D. Obtaining a Final Position Prediction
Given a unique CMV pair, the predictions along each CMV axis were combined into a single spatial prediction for the position of the target combiner using (3) and (4). An example of the prediction for a single combiner is shown in Fig. 9. The prediction shown in the figure is closely located to the expected combiner position, indicating that the relative delays observed are consistent with the site plan's location for that combiner, and that the error vectors eA and eB are small. The figure highlights the individual combiners that were used in computing the averaged position (those exhibiting the strongest coherence for each CMV axis, as described in the preceding section). The indicated combiners with the highest coherence tend to fall along the cloud motion vector relative to the reference, which is consistent with expected anisotropic correlation behavior observed in the literature [7], [9]. The actual and predicted distance along the CMV vectors for each of these reference combiners is shown in Fig. 10, from which the values for eA and eB could be obtained.
E. Repeating for Each Target Combiner
While Figs. 9 and 10 showed the details of the prediction for a single combiner, validation of the entire plant's layout requires that predictions be made for each combiner within the plant. The method was applied sequentially, considering each combiner as a target, with results shown in Fig. 11. Generally, the predicted location of each combiner falls approximately within its own footprint, indicating general agreement between the positions inferred from cloud motion and the site design documents. A few areas exhibited especially poor agreement, including the southern central and southwestern regions within the plant.

As described previously, poor agreement between the design plan and these inferred positions could indicate either uncertainty in the computation of the time delay, or potential mislabeling. To attempt to differentiate these effects, calculations were repeated for each of the possible CMV pairings with relative angles between 45° and 135°, as described in Section IV-B. The final position prediction, P′1, was found by averaging the x and y components of all individual P′1,j predictions, where the j subscript represents a single valid CMV pair. This provides an indication of the repeatability of the predictions, independent of the individual CMVs. In addition, the confidence of the predictions was estimated by computing the average separation between the individual P′1,j and P′1 for each given target point. These results are shown in Fig. 12.
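The aggregation across CMV pairs reduces to a mean position plus a mean radial scatter; a minimal sketch with made-up per-pair predictions follows.

```python
# Aggregate the per-CMV-pair predictions P1'_j into a final position
# and a repeatability estimate, as described above. `predictions` is
# a hypothetical (n_pairs, 2) array of x-y offsets in meters.
import numpy as np

predictions = np.array([[4.1, -2.0], [5.5, -1.2], [3.8, -2.9]])

final = predictions.mean(axis=0)                    # mean of the P1'_j
scatter = np.mean(np.linalg.norm(predictions - final, axis=1))
print(f"final offset = {final}, mean scatter = {scatter:.1f} m")
```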
F. Identifying Potentially Mislabeled Combiners
Comparing the results in Figs. 11 and 12(a) shows that results for a single CMV pair are relatively similar to those averaged across multiple CMV pairs. The colorization of scatter across the CMV pairs presented in Fig. 12(b) shows that most combiners experienced uncertainties less than 40 m (combiner row spacing is around 37 m). More details of the uncertainty are presented in Fig. 13, which shows a histogram and cumulative distribution function (CDF) of the combiner-by-combiner uncertainty. When compared to the typical combiner row spacing, Fig. 13 indicates that 85% of combiner position predictions are repeatable to within the size of a combiner footprint, which suggests that the localization of the combiner predictions is reasonable. In conjunction with the visual from Fig. 12(a), it is possible to conclude that most expected combiner positions from the plant design documents were validated by the approach.

The several sections of the plant for which combiner predictions were not consistent with the site layout bear some further scrutiny and discussion. Two centrally located combiners in the north-most row appear to have their positions swapped with relatively high confidence [labeled rectangle A in Fig. 12(a)]. Manual inspection of the transfer functions for that combiner pair showed that when clouds moved from west to east, the phase delay between the two combiners exhibited an opposite trend from neighbors, which is consistent with the implication that their positions have been swapped.

The apparent inconsistencies in the southern portion of the plant were also investigated for potential mislabeling. Though few of the combiners have predicted positions near their expected locations, the average scatter across predictions is relatively low, indicating that the predicted result is repeatable independent of the CMV direction. Notably, all combiner positions predicted with high confidence to be incorrectly located still fell within the footprint of the connected inverter, indicating that labeling errors tended to occur within the shared inverter connection. This is interesting, as it is consistent with observations of the plant operations team, which suggest that errors spanning multiple inverters are unlikely based on the wiring configuration of the plant.

To determine whether these predictions of incorrectly labeled combiners could be useful for identifying the combiners' true locations, the predictions in Fig. 12(a) were used to attempt a manual descrambling of the combiner labels in search of a pattern by which mislabeling may have occurred. A set of updated label predictions for combiners within the inverter area labeled C in Fig. 12(a) are given in Table I. Updated label predictions were also computed for the southern central region of the plant. The predicted descrambled labels were then used in a reprocessing of the position prediction calculation, leading to Fig. 14. As is evident, the combiners that previously showed significant mismatch between expected and predicted position appear to have been corrected, i.e., mapped to a condition in which the delay-inferred and expected positions agree.

It is interesting to note that when comparing Figs. 12(a) and 14(a), predicted positions changed for combiners whose labels had been unchanged in the descrambling process. This can be observed by comparing the position of combiner 5 between the two parts of Fig. 15. This observation highlights the somewhat circular nature of the approach developed in this study. That is, the expected combiner positions are used to calculate delays, delays are used to calculate predicted combiner positions, and predicted positions are compared to the expected positions to determine label accuracy. Any inconsistency in the original expected positions therefore impacts the quality of the delay calculations. In the worst-case event that labels were randomly scrambled throughout the entire plant, it is unlikely that delay calculations would be able to yield sensible position predictions for the combiners. Stated another way, since delay is measured relative to a reference, in the absence of any confident reference position, no relative distance predictions are possible. In such an event, we hypothesize that other references (e.g., irradiance monitoring stations or aggregate-level production data, such as inverter-level power) may be able to serve as the reference from which delays are calculated. In the present study, this is supported by the fact that all predicted location corrections used to produce Fig. 14 occurred by relabeling combiners that shared a common inverter.
G. Validating Predictions Against Operational Plant
As stated in the preceding section, application of the method allowed prediction of descrambled positions of plant combiners. The apparently mislabeled inverter in the southwest of the plant (rectangle C in Figs. 12 and 14) was targeted for validation of the predictions against the actual plant. Maps of that inverter initially and after the predicted descrambling are shown in Fig. 15. In order to validate the predictions, combiners within that inverter footprint were individually disconnected in sequence for a period of at least 20 s prior to reconnecting, resulting in clearly visible zero readings for each combiner on the plant's data monitoring system. The "true" combiner labels were obtained by correlating the sequence of zero readings with the sequence in which the combiners were disconnected. The results of this validation, compared with the predictions made by the model, are shown in Table I. As the table indicates, perfect agreement occurred between predictions and observations. This agreement is also shown graphically in Fig. 15(b).

Additional manual inspections were conducted on a centrally located inverter (9 combiners) whose locations were predicted to agree with the site design documents, and on the two north-central combiners whose positions were predicted to be swapped. These plant locations are indicated by rectangles A and B in Figs. 12 and 14. As in the case of area C, the predictions made by the model agreed with observations for these additional combiners. Thus, the model exhibited a perfect validation for predictions on all 20 combiners that were manually cross-checked. The process of systematically traversing and individually disconnecting the combiners required several hours of technician time investment, not including test plan preparation time and time required for travel to the plant site. This represents the potential of this analytical approach to avoid a significant labor cost.
V. CONCLUSION
This article introduced a novel methodology for predicting the positions of individual segments within a photovoltaic plant by analysis of cloud motion over the plant from operational data. The principle of the methodology, introduced in Section III, could be applied in multiple ways depending on the data source, but one concrete implementation was demonstrated on combiner-level data for an actual plant in Section IV. The method is based on the concept of calculating the relative delay between the plant segments as clouds advect over the plant. A period of fixed cloud motion allows calculation of the linear separation of two segments parallel to the cloud motion. By utilizing two periods with approximately perpendicular cloud motion vectors, a 2-D geometric projection of these positions can be performed, resulting in a prediction of the position for the target segment.

The concrete demonstration of the method utilized combiner-level data from an operational utility-scale plant. In order to implement the method on this data, we employed methods from the literature for identifying the cloud motion vector from the distributed combiner measurements and computed delay between the combiners via the phase of their transfer function. It was possible to predict the position of each combiner within the plant, demonstrating the basic functionality of the method. We found that the confidence in the results improved by averaging predicted positions over multiple cloud motion vector pairs, which also allowed for an estimate of the repeatability of the predicted position. While most predicted combiner positions matched relatively closely with their expected design plan positions, several apparent instances of combiner mislabeling were identified. We demonstrated that the methodology allowed a prediction to be made for how the combiner labels could be descrambled in these instances, resulting in a prediction of a "corrected" map of labels for the plant. A partial validation based on inspection of the plant showed that the method's predictions yielded perfect agreement with the true plant layout.

The present study effectively demonstrates the application of this proposed method. The approach may be of interest to operators of photovoltaic plants who have concerns about the accuracy of the identification of plant segments (e.g., at string, combiner, or inverter levels) and wish to validate these labels without laborious and time-consuming hands-on approaches at the physical site. An open-source software implementation of the method as used in the demonstration is made available, as described in the code availability section following the appendix.

There are some areas of further research that we hypothesize could improve the application of the method. First, as discussed in Section IV-C, some plants may only have access to data with sampling rates slower than the 10-s data used here. Analysis of the practical accuracy of the method when applied on lower frequency data may make it useful for a wider variety of plants. Additionally, the method demonstrated here relied on some manual interpretation to complete the analysis; e.g., manual calculations were used to identify suitable cloud motion vector time periods and to identify the potentially mislabeled sections of the plant. Developing approaches to automate these workflows may be a fruitful area for future research to improve scalability and allow the method to be quickly applied for validating an entire operational fleet.
APPENDIX DERIVATION OF (3) AND (4)
Referring to Fig. 2, define P1 as the origin and consider the position vector pointing to the final predicted position, P′1. Recognizing that both vectors eA and eB are projections of P′1 onto the cloud motion vector axes, we can observe that the dot product between P′1 and eA is the square of the magnitude of eA

P′1 · eA = |eA|².  (7)

Writing likewise for eB yields two equations with the only two unknowns as the components P′1x and P′1y. Writing the two dot product equations in matrix form gives

(eAx eAy; eBx eBy) (P′1x; P′1y) = (|eA|²; |eB|²).

Applying Cramer's rule to solve for P′1x and P′1y yields the forms shown in (3) and (4).
Fig. 1. Example of predicting the position of P1 based upon two periods with different cloud motion. Points P2A and P2B are projected along vectors VA and VB by their respective delays, τA and τB, relative to the signal of P1. The resulting predicted positions of P1 along the A and B axes are termed P′1A and P′1B. In the depicted case, the measured delay between P2A and P1 is less than expected, while the opposite is true for P2B.

Fig. 2. Geometric relationship for obtaining the predicted position, P′1, relative to the supposed position, P1, based upon predictions P′1A and P′1B along the CMV axes and the associated error vectors eA and eB. Positions P′1A and P′1B are consistent with those shown in Fig. 1.

Fig. 3. Visual representation of a group of potential reference points, P2A,i, each of which has a unique P′1A,i (green circles) based on the delay between P2A,i and P1. Averaging those predicted positions along the direction of the CMV, VA, yields an averaged position P′1A (at the distance of the green dashed line), which can be used to compute eA for subsequent calculations.

Fig. 4. Flowchart representing the basic steps in the process of applying the methodology.

Fig. 5. Layout of the case study plant. Rectangles indicate combiner footprints. Dots indicate centroids used to represent the effective combiner position.

Fig. 6. Example visualizing cloud motion from the time-resolved intensity of the plant. Figures show successive time instances of combiner generation from the data series. The arrow indicates the predicted cloud motion vector direction. A cloud front is visibly advancing across the plant in a northeasterly direction between the two frames. (a) Initial time. (b) Later time.

Fig. 7. Example transfer function between an arbitrary combiner pair. Phase plot shows the fit of the group delay, as indicated by a dotted line. Magnitude of delay is τ = 6.7 s.

Fig. 8. Delay measured between each combiner and a central reference. Red dot indicates the reference combiner. The arrow indicates the CMV direction.

Fig. 9. Zoomed view on the northwest region of the plant showing the predicted position for a single target combiner. Arrows show the two CMVs used. Colored markers show the location of nearby combiners that exhibited the strongest coherence for each CMV. These were averaged to obtain the position along the CMV as described in Section IV-C.

Fig. 10. Separation distances parallel to the CMV axis between the target and each reference. The two axes compare the expected distance as measured from the plant site plans and that computed from the delay. Each point indicates the distance for one of the combiners with strongest coherence for the given CMV axis, as shown in Fig. 9. In perfect agreement, points would all fall along the x = y dashed line. Thus, the vertical separation between each point and the dashed line represents the error vectors eA,i or eB,i.

Fig. 11. Predictions for every combiner in the plant using a single CMV pair, as indicated by the arrows in the top left. Black dots show the position indicated by the site plan, while red lines and circles show the distance to the predicted position for the attached combiner.

Fig. 12. Aggregated results across all possible CMV pairs. (a) shows the average predicted position for each combiner, while (b) shows the average scatter of the predictions for each individual combiner across all CMVs. Dashed rectangles indicate areas that were used subsequently for manual verification (see Section IV-G). (a) Positions. (b) Distances.

Fig. 13. Histogram (bars) and CDF (line) showing the mean scatter associated with each combiner in Fig. 14. Red dashed line indicates the row spacing of 37 m. CDF indicates that 85% of combiners experience less scatter than the typical row spacing.

Fig. 14. Aggregated results across all possible CMVs after swapping combiners based on the results of Fig. 12. (a) shows the average predicted position for each combiner, while (b) shows the average scatter of the predictions across all CMVs for each combiner. Dashed rectangles indicate areas that were used in manual verification (see Section IV-G). (a) Positions. (b) Distances.

Fig. 15. Map of the southwest inverter in rectangle C, where rectangles indicate combiner footprints with numbers indicating the plant drawing labels. Red lines and circles show the distance to the predicted position, and numbers in the circles indicate the predicted location of that combiner. (a) Arrangement before descrambling: note that predicted locations are not necessarily based on the predicted position falling inside of a combiner's footprint; they are assigned to the nearest combiner such that all combiners get a predicted location within an inverter. (b) Arrangement after reprocessing data following descrambling; predicted positions move due to the dependence of predictions on initial positions.
Using coarse‐scale species distribution data to predict extinction risk in plants
Less than 6% of the world's described plant species have been assessed on the IUCN Red List, leaving many species invisible to conservation prioritization. Large-scale Red List assessment of plant species is a challenge, as most species' ranges have only been resolved to a coarse scale. As geographic distribution is a key assessment criterion on the IUCN Red List, we evaluate the use of coarse-scale distribution data in predictive models to assess the global scale and drivers of extinction risk in an economically important plant group, the bulbous monocotyledons.
INTRODUCTION
Plant diversity across the globe is at risk from anthropogenic changes such as habitat loss, degradation and overexploitation (Corlett, 2016), yet the extent and drivers of species extinction, as well as the strategies to counteract them, remain poorly known. These challenges are compounded by the sheer size of the plant kingdom, with 350,699 accepted species names on The Plant List to date (The Plant List, 2013). The size of the group, compared to comprehensively assessed vertebrate groups like birds (10,425 species) and mammals (5513 species), is one of the reasons plants are underrepresented on the International Union for the Conservation of Nature (IUCN) Red List of Threatened Species TM . The IUCN Red List contains 21,898 plant assessments (IUCN, 2016) which equates to 6% of described plants, although as much as a third of these assessments are considered out of date as they are either more than 10 years old or use an earlier version of the criteria ( IUCN Standards And Petitions Subcommittee, 2014). Of those plants assessed, 53% are listed as threatened (Critically Endangered, Endangered, Vulnerable) (IUCN, 2016), but risk is likely to be overestimated as assessment efforts have been preferentially directed towards the most threatened species and areas (Rodrigues et al., 2006).
To better understand the global threat status of plants, the IUCN Sampled Red List Index (SRLI) for plants assessed the status of a random sample of plant species to give a representative view of plant extinction risk globally (Brummitt et al., 2015). The results indicated a much lower proportion of species are threatened (21%) compared to species published on the current version of the Red List (53%), which nevertheless implies that as many as 74,000-84,000 plants could be at risk. Although Red List assessments in isolation are not an appropriate device for conservation prioritization (Possingham et al., 2002), they are an important component for a range of prioritization and funding schemes such as Mohamed Bin Zayed Conservation Trust (www.speciesconservation.org), Save our Species (SOS; www.sospecies.org) and the Critical Ecosystem Partnership Fund (CEPF; www.cepf.net). The absence of a strategic prioritization system for plant Red List assessments means that many species could be missing out on conservation funding (Bland et al., 2015b).
Given the large number of plants remaining to be assessed on the IUCN Red List, devising approaches to facilitate their assessment or to include them in conservation prioritization without formal assessments is of the highest priority (Callmander et al., 2005;Royal Botanic Gardens, Kew, 2010). We therefore must refine our understanding of what factors contribute to high extinction risk and which plant species are likely to be at highest risk (Duffy et al., 2009;Brummitt et al., 2015). Predictive modelling of extinction risk is a widely used tool to quantify threat levels across taxa and prioritize conservation research (Purvis et al., 2000;Cardillo et al., 2008;Di Marco et al., 2014;Jetz & Freckleton, 2015). Machine learning models such as random forests are popular due to their ability to find patterns in large and complex datasets (Bland et al., 2015a), and there is clear potential to apply these models to plants (Duffy et al., 2009). Existing studies of extinction risk in plants focus on national or regional scales, for example Amazonian plants (Feeley & Silman, 2009). Global extinction risk studies are more challenging as they rely on the availability of systematic, global data for the large majority of species under study. In particular, lack of high-resolution, high-quality occurrence data for most plant species presents limitations in scaling up the previous models (Bland et al., 2015b).
The World Checklist of Selected Plant Families (WCSP) is a coarse-scale dataset of species presence according to a hierarchical geographic coding system derived from the Taxonomic Databases Working Group (TDWG; Brummitt et al., 2001). Data are complete for a large number of plant families and comprehensive for broad taxonomic groups such as the monocotyledons (WCSP, 2014). The coding at level one is continental, level two is regional and level three is broadly equivalent to small countries and islands (Brummitt et al., 2001). This coarse-scale geographic coding of plant distributions has not yet been explored as a predictive tool for extinction risk assessment. In this paper, we quantify the effect of high-quality range data on variable importance by comparing models using both coarse-scale and fine-scale species distribution data.
Modelling extinction risk relies on identifying appropriate variables which correlate with extinction risk in species of known conservation status. Correlates broadly fall into intrinsic (life history traits, e.g. habit and dispersal mode, and ecological traits, e.g. range size) and extrinsic variables (environmental and anthropogenic) and the most informative models combine both types of data (e.g. Davies et al., 2011). For vertebrates, studies often focus on intrinsic traits such as body mass, fecundity and niche characteristics (Purvis et al., 2005;Di Marco et al., 2014). Intrinsic correlates of extinction risk are less researched for plants at higher taxonomic levels, including pollination syndrome, height, habit, sexual system and dispersal mode (e.g. Laliberté, 2016), but there have been some recent advancements in understanding plant functional traits (e.g. Bullock et al., 2002;Pywell et al., 2003;Cornwell et al., 2014;and Díaz et al., 2016). Extrinsic variables are known to be good predictors of extinction risk (Lee & Jetz, 2010;Murray et al., 2014;Di Marco & Santini, 2015). The SRLI for plants found human impacts to be the greatest cause of threat to plants, particularly the conversion of natural habitats to agricultural land (Brummitt et al., 2015), and global data on human threats are increasingly available for use in models (e.g. CIESIN & CIAT, 2005;Hansen et al., 2013).
We bridge this gap by focusing on bulbous monocotyledons, a relatively well-known plant group. Bulbous monocot is an informal term that refers to all Monocotyledons in the orders Liliales and Asparagales with a geophytic life-form and petaloid flowers (excluding Orchidaceae) (A. Trias-Blasi, Royal Botanic Gardens, Kew, London, pers. comm.). There are approximately 7000 bulbous monocot species in eight different plant families (Amaryllidaceae, Asparagaceae (subfamilies Scilloideae and Brodiaeoideae), Colchicaceae, Iridaceae, Ixioliriaceae, Liliaceae, Melanthiaceae and Tecophilaeaceae) (WCSP, 2014). Many bulbous monocot taxa are economically important due to their horticultural, medicinal and nutritional value (Marshall, 1993). Bulbous monocots, such as snowdrops, have been greatly affected by illegal collecting activities linked to international trade, which has led to their inclusion on the checklist of the Convention on Trade in Endangered Species of Wild Fauna and Flora (CITES) (Davis, 1999;Yüzbaşıoğlu, 2008;Newton et al., 2014). Small range size, extractive activities and habitat loss from agricultural development, grazing, urban expansion, road building and tourism in littoral/montane sites also threaten species survival across global ranges and are likely to be good predictors of extinction risk (Özhatay et al., 2013;IUCN, 2014). To date, the extinction risk of only c. 2% (148 species) of bulbous monocots has been assessed against the IUCN Red List criteria (IUCN, 2015), so the majority of species are categorized as Not Evaluated (NE). As bulbous monocots have complete geographic range data on the WCSP (WCSP, 2014), they are an excellent case study to test the applicability of extinction risk models, not only for monocotyledons but also for plants in general.
Using bulbous monocots as a study group, we build models to discriminate threatened and non-threatened species based on species-level data. Using species assessments from the IUCN Red List and SRLI as a training set, we build models to predict the threat status of bulbous monocots that are yet to be assessed. Our aims are to: (1) test the utility of coarse-scale distribution data in extinction risk models compared to fine-scale data; (2) identify correlates of extinction risk in bulbous monocots; and (3) predict levels of extinction risk in non-assessed species. If models perform well using coarse-scale distribution data, they will provide a much needed tool to understand the drivers of threat in bulbous monocots and to prioritize conservation efforts at a global scale. It could also provide opportunities for application to other plant groups and be upscaled to the whole plant kingdom.
METHODS
Predicting species extinction risk and prioritizing conservation actions can be achieved in five steps: data collection, model validation, predictive modelling, prioritization and review of the process (Fig. 1).
Distribution data
We obtained a list of all bulbous monocot species and associated Taxonomic Databases Working Group (TDWG) level 3 distributions from the World Checklist of Selected Plant Families (WCSP, 2014). We partitioned the list into two groups: assessed and non-assessed, based on IUCN Red List and Sampled Red List Index assessments. Species assessed as Critically Endangered (CR), Endangered (EN), Vulnerable (VU), Near Threatened (NT) and Least Concern (LC) formed the assessed group, comprising 148 species, and Data Deficient (DD) and Not Evaluated (NE) species formed the non-assessed group, comprising 6439 species (Table 1). We excluded species classified as Extinct (EX) or Extinct in the Wild (EW), five species with missing distribution data, and families with either no assessed species (Ixioliriaceae and Tecophilaeaceae) or fewer than eight assessed species (Melanthiaceae).
Figure 1 Framework of steps to predict species extinction risk and prioritize conservation action. This analysis starts with the first three steps: data collection, model validation and modelling predicted extinction risk. To ensure effective conservation on the ground, we recommend the results of these steps inform conservation prioritization and that the process is reviewed and refined as data sources and model techniques are improved.
Correlates of extinction risk
We selected predictor variables based on expected correlates of extinction risk and data availability. We used seven variables grouped into taxonomy, geographic distribution, human impacts and conservation action (Table 2). We used the count of TDWG level 3 regions (representing small countries or islands) within a species range as an indicator of range size, with a score of one representing endemic species. We also included an indicator of isolation, measured as percentage of islands across a species distribution. Data on life history traits are sparse and incomplete across plant groups. For the majority of plant groups, only life-form data (habit) are consistently available, which are redundant for this analysis as all bulbous monocots are by definition geophytes. To indicate differences in life history traits that could mediate extinction risk we used taxonomic family to account for shared evolutionary history. We use taxonomy as a surrogate for phylogeny due to data availability. Although there are some phylogenetic studies of bulbous monocot families (Seberg et al., 2012;Chen et al., 2013;García et al., 2014), it is currently difficult to incorporate phylogenetic relatedness in machine learning when there is no consolidated phylogeny at the group level. In addition, whilst phylogeny can be informative, it is currently not possible to include phylogeny in a computationally appropriate way in machine learning models, and data on life history traits is preferable for both predictive power and interpretability (Bland et al., 2015a). Within each TDWG level 3 region, we queried GIS layers of human impacts and conservation action to calculate average values across all the regions within each species range. Human impact variables included the following: Human Footprint Index (WCS, 2005), human population density (CIESIN & CIAT, 2005) and global forest loss (Hansen et al., 2013). We used forest loss as a surrogate for habitat loss, as it has been shown to have adverse effects on a diverse range of habitats, for example grassland and savannas (Boakes et al., 2010). We measured conservation action as the percentage area formally protected by intersecting TDWG level 3 regions with protected area polygons (IUCN & UNEP-WCMC, 2014). We then calculated average percentage area across each species distribution in an equal area cylindrical projection.
As TDWG level 3 regions are not equal in terms of area coverage, each region was normalized by the area of the region before averaging across the species range. We base predictions on current levels of threat and do not attempt to predict future risk levels under different scenarios, so climate change variables were not included.
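As a concrete illustration of this step, the following minimal pandas sketch averages two extrinsic variables across each species' TDWG level 3 regions, weighting each region by its area. The region values and column names are invented for illustration, and the area-weighted mean is one plausible reading of the normalization described above.

```python
import pandas as pd

# One row per TDWG level 3 region, with its area and queried GIS values
# (values are illustrative, not from the paper's data sources).
regions = pd.DataFrame({
    "tdwg3": ["TUR", "GRC", "SPA"],
    "area_km2": [780_000.0, 132_000.0, 505_000.0],
    "human_footprint": [28.0, 31.0, 25.0],
    "pop_density": [95.0, 82.0, 92.0],
})

# Species-to-region membership, as listed in the WCSP checklist.
ranges = pd.DataFrame({
    "species": ["sp1", "sp1", "sp2"],
    "tdwg3": ["TUR", "GRC", "SPA"],
})

merged = ranges.merge(regions, on="tdwg3")
# Weight each region by its share of the species' total range area.
merged["w"] = merged.groupby("species")["area_km2"].transform(lambda s: s / s.sum())

predictors = (merged.assign(human_footprint=merged["human_footprint"] * merged["w"],
                            pop_density=merged["pop_density"] * merged["w"])
                    .groupby("species")[["human_footprint", "pop_density"]].sum())
predictors["n_regions"] = merged.groupby("species").size()  # range-size surrogate
print(predictors)
```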
Extinction risk models and spatial analysis
We normalized predictor variables and split the data to create a training set of assessed species and a prediction set of non-assessed species. Multiple methods are available to model and predict extinction risk (Bland et al., 2015a;Luiz et al., 2016). Phylogenetic least squares is a popular method to explain extinction risk (Purvis et al., 2000;Cardillo et al., 2008), but is unsuitable for groups with no consolidated global phylogeny and shows limited predictive power (Bland, 2015a). Because of the absence of consolidated global phylogenies for bulbous monocots (but see species available on Open Tree of Life) and our focus on predicting the status of non-assessed species, we use the machine learning model 'random forests', an ensemble of decision trees that repeatedly split predictors into increasingly homogenous groups. Random forests show high predictive power in extinction risk analyses compared to other machine learning tools (Bland, 2015a). These models were trained with assessed species and associated predictor variables to model the probability of threat in non-assessed species (Kuhn, 2008;R Core Development Team, 2014). Within the training set, we categorized threat status as 'threatened' (Critically Endangered, Endangered, Vulnerable) and 'non-threatened' (Near Threatened and Least Concern) due to difficulty in discriminating five imbalanced categories (e.g. 83 Least Concern and 7 Critically Endangered species; see Appendix S1 in Supporting Information, Table S1.2) (Hand, 2012;Luiz et al., 2016). To test for variation in model results due to sampling bias, we first ran models separately for IUCN Red List and SRLI species. To maximize sample size, we then grouped all species to test likely correlates of extinction risk.
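A minimal sketch of this modelling step is given below. The paper's analysis was run with the caret framework in R (Kuhn, 2008); this Python/scikit-learn version with stand-in random data is only meant to show the shape of the computation, and all column names and values are placeholders.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
cols = ["n_regions", "pct_islands", "human_footprint",
        "pop_density", "forest_loss", "pct_protected"]

# Stand-in predictor tables; in the paper these come from WCSP and GIS queries.
assessed = pd.DataFrame(rng.random((148, len(cols))), columns=cols)
assessed["threatened"] = rng.integers(0, 2, 148)   # 1 = CR/EN/VU, 0 = NT/LC
non_assessed = pd.DataFrame(rng.random((6439, len(cols))), columns=cols)

rf = RandomForestClassifier(n_estimators=500, random_state=1)  # ensemble of 500 trees
rf.fit(assessed[cols], assessed["threatened"])

# Probability of threat for the non-assessed species.
p_threat = rf.predict_proba(non_assessed[cols])[:, 1]
print("predicted threatened fraction:", (p_threat > 0.5).mean())
```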
To reliably predict the status of non-assessed and assessed species, it is imperative to assess model performance, that is, the ability of the model to accurately predict the extinction risk of species of known conservation status. We set model parameters to run 500 trees to prevent overfitting and assessed predictive accuracy using 10-fold cross-validation repeated five times on the training set. We used area under the receiver operating characteristic curve (AUC) as the most appropriate measure of model performance, as it assigns equal weight to both sensitivity and specificity whilst accounting for imbalanced threat categories in our training set (see Appendix S1, Table S1.2). We selected the optimal probability threshold by maximizing the Youden Index (sensitivity + specificity − 1; Perkins & Schisterman, 2006). We then undertook variable selection to understand if drivers identified by the SRLI project are also correlates of extinction risk in bulbous monocots, which were underrepresented in the original assessment. We measured variable importance with the mean decrease in Gini Index; a large decrease indicates high levels of statistical dispersion (Breiman et al., 1984).
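Continuing the sketch above, the validation step can be expressed as repeated 10-fold cross-validated AUC plus a Youden-optimal probability threshold from out-of-fold predictions. A single 10-fold pass is used for the threshold here, since out-of-fold probabilities require each species to be held out exactly once.

```python
import numpy as np
from sklearn.model_selection import (RepeatedStratifiedKFold, StratifiedKFold,
                                     cross_val_score, cross_val_predict)
from sklearn.metrics import roc_curve

# AUC under 10-fold cross-validation repeated five times.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=1)
auc = cross_val_score(rf, assessed[cols], assessed["threatened"],
                      cv=cv, scoring="roc_auc")
print("mean cross-validated AUC:", auc.mean())

# Youden-optimal threshold (maximizes sensitivity + specificity - 1).
oof = cross_val_predict(rf, assessed[cols], assessed["threatened"],
                        cv=StratifiedKFold(10, shuffle=True, random_state=1),
                        method="predict_proba")[:, 1]
fpr, tpr, thr = roc_curve(assessed["threatened"], oof)
best_threshold = thr[np.argmax(tpr - fpr)]   # tpr - fpr equals Youden's J
print("Youden-optimal threshold:", best_threshold)
```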
Some variables are likely to correlate with extinction risk at subnational and local levels. Therefore, we also tested models based on fine-scale range size data, using a subset of assessed species for which these data were available. We used IUCN polygon range maps for IUCN Red List species. Because range maps are not available for SRLI species, we constructed maps based on occurrence points. We buffered each occurrence point with a 10-km radius and created a minimum convex polygon around those points. The 10-km radius was chosen to represent the area over which a population is expected to experience threats. We calculated range area in km² and extracted mean values of variables across species ranges. These models quantify the effect of high-quality range data on variable importance, but cannot be used to predict extinction risk in non-assessed species for which range size data are not available.
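The geometric part of the fine-scale range construction can be sketched with shapely as below; the coordinates are invented and assumed to already be in a metre-based equal-area projection, so only the steps themselves (10-km buffers, minimum convex polygon, area in km²) are shown.

```python
from shapely.geometry import MultiPoint

# Occurrence points for one species, in metres (equal-area projection assumed).
points = MultiPoint([(0, 0), (12_000, 5_000), (3_000, 20_000)])

buffered = points.buffer(10_000)       # 10-km radius around each point
range_poly = buffered.convex_hull      # minimum convex polygon around the buffers
print("range area:", range_poly.area / 1e6, "km^2")
```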
We predicted the status of 6439 species with the best model and mapped the spatial distribution of predicted threatened species to identify potential hotspots of threat. These maps were compared with the distribution of threatened bulbous monocot species that are currently on the IUCN Red List to identify priority areas where predicted threat levels are high, but few or no species have been assessed. Such gaps can be used to prioritize future conservation assessments.
RESULTS
Assessing the ability of our model to predict extinction risk in species of known conservation status

The model results demonstrate good predictive performance with a high area under the receiver operating characteristic curve (AUC = 0.98) indicating a good fit of the model to the data (Pearce & Ferrier, 2000) and high classification accuracy (91% of species were correctly classified during cross-validation). The model accurately classified 88% of threatened species (sensitivity) and 93% of non-threatened species (specificity; Table 3).
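These rates follow directly from the cross-validated confusion matrix. The counts below are illustrative choices consistent with the quoted 88%/93%/91% figures, not numbers taken from the paper's Table 3.

```python
# Hypothetical confusion-matrix counts for the training species.
tp, fn = 51, 7   # threatened species: correctly / incorrectly classified
tn, fp = 83, 6   # non-threatened species: correctly / incorrectly classified

sensitivity = tp / (tp + fn)                   # ~0.88
specificity = tn / (tn + fp)                   # ~0.93
accuracy = (tp + tn) / (tp + fn + tn + fp)     # ~0.91
print(sensitivity, specificity, accuracy)
```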
There was good discrimination between the threatened and non-threatened classes with particularly strong discrimination between Critically Endangered species and non-threatened species (Near Threatened and Least Concern) (Fig. 2). All Critically Endangered species were correctly predicted, indicating strong predictive capacity for species with high extinction risk. Near Threatened species, which are on the threshold between threatened and non-threatened, had the lowest proportion correctly classified. Identification of individual misclassified species revealed that some species (almost all of which are endemic to a single TDWG level 3 unit) were consistently misclassified across models. We found that classification accuracy for endemic species was lower than the average for the model (86% compared to 91%). Of particular note is the higher misclassification rate of species endemic to Cape Provinces (5 of 26 endemics were misclassified, 81% accuracy), including the two outliers in Fig. 2 (Crinum variabile, misclassified as threatened, and Romulea aquatica, misclassified as not threatened).
Identifying correlates of high extinction risk
Analysis of variable importance revealed that human impact variables (Human Footprint Index, human population density and forest loss) and the conservation action variable (percentage area under formal protection) were strong correlates of extinction risk (Fig. 3). The surrogate for range size calculated from TDWG regions was also important with the second greatest contribution to the predictive ability of the model. Plant family and isolation were not strong correlates of threat but still contributed to overall predictive performance, possibly because of interaction effects with other variables. The model based on fine-scale range data derived from IUCN polygons and SRLI point data again showed good discrimination between threatened and non-threatened classes (see Appendix S1, Table S1.4), with all but one species correctly classified. Species range size had a much larger contribution to model performance when we used fine-scale range data than in models using TDWG distributions (see Appendix S1, Fig. S1.2).
Predicting the status of non-assessed species
When we applied the model with the highest area under the receiver operating characteristic curve (AUC) to predict extinction risk in bulbous monocots, we found that 35% (2237 of 6439) of species were predicted to be threatened (see Appendix S2, Table S2). Sequential removal of variables with the lowest contribution to the model reduced model performance, so the final model contained all seven predictor variables (Table 2). Running a separate model on IUCN Red List species resulted in reduced model performance (78% of species correctly classified; see Appendix S1, Table S1.3), whilst the model on SRLI species resulted in high model performance (91% correctly classified) but is based on a sample of only 38 assessed species. There was no substantial difference in variable importance between the two models based on IUCN Red List and SRLI species, so all species were pooled to fit the final model. Analysis of model predictions by geographic region reveals a strong geographic clustering of extinction risk predictions. 'Hotspots' of global threat include the following: Turkey and Cape Provinces (> 153 species predicted to be threatened), Iran, central Chile, Greece, Spain, Lebanon, Syria, Northern Provinces and Transcaucasus (> 90 species predicted to be threatened; Fig. 4c). Regions with a high density of species assessed as threatened were reflected with a high density of predicted threatened species, for example, in Greece, Spain and Cape Provinces. In contrast, some regions, for example Ecuador, with a high density of assessed threatened species were not predicted to have a high density of threatened species (Fig. 4b,c).
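The hotspot mapping reduces to joining the per-species predictions back onto the species-to-region table and counting predicted-threatened species per TDWG level 3 region. A toy version with invented species and regions:

```python
import pandas as pd

preds = pd.DataFrame({"species": ["sp1", "sp2", "sp3"],
                      "predicted_threatened": [True, True, False]})
ranges = pd.DataFrame({"species": ["sp1", "sp1", "sp2", "sp3"],
                       "tdwg3": ["TUR", "GRC", "TUR", "SPA"]})

# Count of predicted-threatened species per region, highest first.
hotspots = (ranges.merge(preds, on="species")
                  .groupby("tdwg3")["predicted_threatened"].sum()
                  .sort_values(ascending=False))
print(hotspots)
```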
DISCUSSION
In light of global environmental change, it is paramount to identify which species are at high risk of extinction and to diagnose drivers of risk in order to reverse biodiversity declines. Acquiring this information is particularly challenging for species-rich and poorly studied groups such as plants (Brummitt et al., 2015). Bulbous monocots are a good study group as they are a relatively well-known group of plants comprising horticulturally important plants which are widely traded (Davis, 1999;Yüzbaşıoğlu, 2008;Newton et al., 2014), and threatened in the wild (Özhatay et al., 2013;IUCN, 2014). We present the first global analysis of bulbous monocots, giving much needed indications of threat levels and correlates of risk across this economically important plant group. This information is crucial in order to prioritize geographic regions and individual species for future conservation interventions, including IUCN Red List assessments and trade controls.
IUCN Red List assessments and indices underpin a number of the Aichi Biodiversity Targets of the Strategic Plan for Biodiversity 2011-2020 as well as the cross-cutting Global Strategy for Plant Conservation. Achieving Target 2 of the Global Strategy for Plant Conservation, 'an assessment of the conservation status of all known plant species, as far as possible, to guide conservation action', by 2020 will require rapid advances to increase the rate and reduce the high cost of species assessments (Baillie et al., 2008;Juffe-Bignoli et al., 2016). In addition, with limited funding there will undoubtedly be trade-offs between increasing the number of plant assessments and keeping current assessments up to date. A recent review of progress in achieving the Global Strategy for Plant Conservation indicated that Target 2 is unlikely to be met by 2020 (Sharrock et al., 2014), highlighting the urgent need for rapid and cost-effective assessments (Bland et al., 2015b, 2016). Our model predicts that 35% of non-assessed bulbous monocots are currently at risk of extinction, resulting in a potential 2254 additional threatened species to the 59 already assessed as threatened on the IUCN Red List (IUCN, 2015). This estimate is higher than the global average for plants based on the SRLI (20%) (Brummitt et al., 2015). However, these predictions are much closer to extinction levels in another horticulturally important group, the cacti (Cactaceae), for which 31% of species are at risk (Goettsch et al., 2015).
Coarse-scale global analyses can be used to predict hotspots of threat and systematically prioritize species assessments using local data (Smith et al., 2009). In addition to predicting known hotspots of threat, for example Turkey and Cape Provinces, our model predicted hotspots where no bulbous monocot species have yet been assessed, for example central Chile (72% of species predicted to be threatened), and some hotspots where no species have yet been assessed as threatened, for example Iran, Lebanon and Syria (Fig. 4b, c). Many species-poor regions exhibit high levels of predicted extinction risk (Fig. 4a,c), for example, more than half of India's 50 species are predicted to be threatened, including 25 endemic species. On the other hand, some species-rich areas are predicted to have few or no threatened species, for example Namibia where none of its 204 species are predicted to be threatened. This global view can be used to prioritize international conservation efforts to target the most vulnerable regions, thereby increasing cost-effectiveness and demonstrating value for money. The drivers of extinction risk in bulbous monocots have received limited research at the group level, yet trade in bulbs has been increasing (CITES Trade Database, 2016) and the value of international floriculture exports, including bulbs, increased from US $8.5 billion in 2011 to US $20.6 billion in 2013 (UN Comtrade, 2014). Our model highlights common patterns in the human impact and biological variables used to predict extinction risk. As also highlighted by other taxon-wide studies (Cardillo et al., 2008;Murray et al., 2014;Di Marco & Santini, 2015) and the SRLI project, human impacts were key in distinguishing between threatened and non-threatened species. The Human Footprint Index and population density were strong correlates of extinction risk (Fig. 3). This is likely due to close correlation with habitat degradation through urbanization, the second most common threat for bulbous monocots on the IUCN Red List (IUCN, 2014).
Range size was also an important correlate, reflecting the inherent role of range size in IUCN criteria (IUCN, 2001). The strongest effects were seen in species with narrow ranges (fewer than three Taxonomic Databases Working Group level 3 regions; see Appendix S1, Fig. S1.3a). The relationship between percentage area protected and likelihood of extinction is less clear, perhaps reflecting poor data coverage or because protected areas are not usually created and managed for plants. Whilst the World Database on Protected Areas is the best available global dataset for protected areas, it is compiled from the submission of national data and, as such, the data are incomplete and outdated for many countries (UNEP-WCMC, 2015). Issues of data coverage are particularly prevalent in Middle Eastern countries, for example Turkey, where bulbous monocot species richness is highest. Consistent with previous studies, plant family does not appear to be a good correlate of extinction risk (Fréville et al., 2007). Isolation was also a poor correlate but the indicator could be strengthened if population data were collected more widely. Understanding how drivers of threat in bulbous monocots compare to plants as a whole is an important step in ensuring plant conservation strategies are suitable at the group level.
To be confident in predictions of extinction risk in non-assessed species, it is paramount to assess the capacity of the model to accurately predict risk in species of known conservation status. Our model showed high classification accuracy (91%) in line with extinction risk models for other taxa, for example 90% for mammals in Di Marco et al. (2014) and 95% for birds in Machado & Loyola (2013). However, the accuracy of our model in predicting threatened species (88% model sensitivity) is an improvement on previous models, for example 68% in Di Marco et al. (2014) and 24% in Machado & Loyola (2013). The ability to accurately identify threatened species is appropriate for conservation problems as it is more risk averse to predict species as threatened when they are not (false positives) than to predict species as not threatened when they are (false negatives). Model performance is particularly high for Critically Endangered species, whereas Near Threatened species had the lowest classification accuracy. This may reflect the lack of quantitative criteria to define the Near Threatened threshold (IUCN, 2001), meaning that the boundary for this category is more 'fuzzy' than for threatened categories.
Although model performance was good, poor data availability imposes limitations on the inferences we can make from the results (Table 4 shows limitations and recommendations). Model validation was based on a sample of 148 assessed species which were used to predict the extinction risk of 6439 non-assessed species. Jetz & Freckleton (2015) consider predictions to be sufficiently reliable when 60% of species have been assessed and are used to train predictions for non-assessed and/or Data Deficient species. To predict extinction risk in bulbous monocots through modelling alone, this would equate to random assessment of 3952 species before then predicting the remaining 40%. We therefore recommend first validating this approach using random samples at national and subnational levels, focusing on areas for which there is an abundance of plant information and assessments. For example, the South African National Biodiversity Institute (SANBI, 2014) has nationally assessed 1987 bulbous monocots and is a good candidate for further model validation. However, obtaining suitable re-assessment data to validate extinction risk models remains a challenge (Bland et al., 2016) and such validation is in itself biased and may not estimate the true accuracy of the model (Hastie et al., 2009).
Our model assumes the relationships between correlates and assessed species in the training set are transferrable to non-assessed species. This assumption is substantial considering the apparent bias towards threatened species assessments in IUCN Red List. Nevertheless, models run separately for IUCN Red List and randomly sampled SRLI species did not show substantial differences in variable importance. The most important predictor variables were indicators of habitat degradation, a non-species-specific threat, which the SRLI found to hold true across plant groups under random sampling (Brummitt et al., 2015).
Most predictor variables used in the model show strong correlations with extinction risk; however, increasing the number and range of variables is likely to better capture variations in extinction risk globally. For example, forest loss data, mainly based on forest cover derived from satellite imagery, may not adequately represent the nuances of habitat degradation such as selective logging (Burivalova et al., 2015). We recommend that land cover/land use data are included alongside forest loss in future models.
As our approach is restricted to systematic, globally available datasets, the range of species-specific threat data that could be included in the model was limited. Inclusion of such data, for example overharvesting of wild populations for trade, is likely to improve the accuracy of predictions yet are currently only available for a small number of species (see Table 1 in Smith et al., 2011).
This study highlights that models based on coarse-scale species distribution data can provide rapid and low-cost options for preliminary assessments of extinction risk and conservation prioritization. As the Taxonomic Databases Working Group system for the World Checklist of Selected Plant Families represents the most comprehensively compiled distribution data presently available for plants (WCSP, 2014), the high performance of these models has implications for future application to predict extinction risk for all plants when the checklist is complete. In line with previous studies, our findings indicate that human impacts and species range size are key drivers of extinction risk in bulbous monocots, providing much needed insight into correlates of threat in the group. Our model predicts that 35% of bulbous monocots may be threatened with extinction and highlights predicted hotspots of threat for systematic prioritization of species assessments. In an era of budget constraints for biodiversity assessments, our approach provides a low-cost option to achieving ambitious conservation targets based on limited information and financial investment.
SUPPORTING INFORMATION
Additional Supporting Information may be found in the online version of this article: Appendix S1 Supplementary tables and figures.
Appendix S2 Online database of model results.
BIOSKETCH
Sarah Darrah is interested in plant biodiversity indicators and the application of extinction risk modelling to conservation prioritization schemes.
Author contributions: All authors contributed to research design and writing of the manuscript. A.T.-B. and S.B. jointly conceived the main objectives. Data were supplied from multiple sources as outlined in the text and references. L.B. provided coding script and S.D. carried out the modelling and analysis.
Anomalous Frequency-Dependent Conductivity near the Quantum Hall Transition
The dynamical transport properties near the integer quantum Hall transition are investigated at zero temperature by means of the Dirac fermion approach. These properties have been studied experimentally at low frequency omega and low temperature near the nu=1 filling factor Hall transition, with the observation of an unusual broadening and an overall increase of the longitudinal conductivity Re sigma_{xx} as a function of omega. We find in our approach that, unlike for normal metals, the longitudinal conductivity increases as the frequency increases, whilst the width Delta B (or Delta nu) of the conductivity peak near the Hall transition increases. These findings are in reasonable quantitative agreement with recent experiments by Engel et al. as well as with recent numerical work by Avishai and Luck.
I. INTRODUCTION
The two-dimensional (2D) electron gas placed in a strong perpendicular magnetic field exhibits, in the presence of a disordered one-particle potential, the exact quantization of the Hall conductivity σ xy accompanied by the vanishing of the longitudinal conductivity σ xx . Whilst this integer quantum Hall effect (IQHE) is now fairly well understood [1], the transition region between consecutive Hall plateaus has recently attracted a good deal of attention by the condensed-matter physics community. Extensive experimental [2,3,4,5,6], theoretical [7,8,9,10] as well as numerical [11,12] work has been carried out to characterize the transport properties in the transition region. This region arises due to the existence of delocalized electronic states responsible for the jump in the Hall conductivity between two consecutive Hall plateaus. Within this region σ xx takes up a narrow peak-like form.
The detailed study of the transition region between quantum Hall plateaus is important in testing and enhancing our understanding of the IQHE and of the underlying physics of localization in the presence of a magnetic field. In this paper we contribute to the advance of the theoretical description by showing how the Dirac fermion approach [8,13,14,15], a successful theoretical treatment in the description of the IQH transition, affords a detailed explanation of some puzzling recent experimental measurements by Engel et al. [16] and Balaban et al. [17] of the frequency-dependent (AC) conductivity in the IQH system at low temperatures.
Hitherto, much work has concentrated on the characterization of the static transport properties in the neighborhood of the transition region, where the approach to the localization-delocalization transition is dominated by a diverging localization length ξ ∼ |E − E_c|^(−ν). More recently, some attention has been devoted to frequency-dependent properties, where the approach to the critical point E_c is characterized also by diverging time scales. The frequency-dependent AC conductivity is an interesting and probably characteristic quantity of a 2D conducting QH system where σ_xx is non-zero (metallic regime). In contrast to normal metals, where a Drude-like behavior is observed, the conductivity of a 2D QH system has been reported to display an increasing behavior with frequency [16]. Thus it is indicative of a new class of systems, different from normal metals (for which the conductivity decreases with frequency, or temperature). This can be seen from the peak-shape of σ_xx(B, ω) as a function of the magnetic field B in the recent IQHE measurements of Ref. [16]: the width ∆B of the conductivity peak broadens as the frequency (or temperature) is increased. Numerical simulations, based on the lowest Landau level projection, initially gave indications only for a decreasing (yet non-Drude-like) σ_xx(ω) [18]. However, a more recent numerical investigation by Avishai and Luck [19] has provided strong evidence for a broadening of the longitudinal conductivity peak with frequency. This is a convenient point at which to make the case for the theoretical approach (based on Dirac fermions) that is to be used in this paper and also to put it into the right perspective. Most (although by all means not all [1,20]) of the theoretical schemes that have been so far developed to explain the IQHE on a microscopic basis rely on the concept of localization of the single-electron states in the presence of any finite amount of disorder. Generally speaking, it is sufficient to have a finite but narrow band of extended states near the center of each Landau level, and total localization everywhere else in the density of states (DOS), to explain most features of transport in the IQHE. A detailed theory, stemming from the field-theoretic and scaling approach to Anderson localization in the absence of a magnetic field, has been developed by Pruisken et al. [21,22] and rewarded with some experimental evidence for its correctness [23]. This approach makes use of the concept of composition of the σ_xx and σ_xy conductivities and leads to a scaling theory where these conductivities are both universal (although the actual value of σ_xx is yet undetermined exactly at the transition) and the degenerate extended states at each transition are concentrated into single-energy points in the DOS. Outside these points, corresponding to the free-electron Landau energy levels, there is total localization, although in the original paper by Levine et al. [21] the possibility of a band of extended states near the plateau-to-plateau transition could not be ruled out. This approach certainly represents a useful picture for most numerical as well as experimental studies of the IQH transition, leading to the concept of a quantum critical point that may prove useful also to other areas of research in condensed-matter theory. The approach of Pruisken et al. [21,22] certainly remains an appealing global picture reproducing the pattern of the IQH transport experiment.
However, our point of view here is that the single-energy extended-states picture cannot be the full story in a detailed microscopic understanding of the IQH transition. Infinitely-narrow single-particle levels are a paradigm for bound states (point spectrum), not for extended ones (continuous spectrum). This is in line with Heisenberg's indetermination principle; for if a quantum particle can be made to sample a band of extended states with energy width W (via e.g. an infinitesimal disturbance) it will acquire a momentum uncertainty ∆p ∼ (2µW)^(1/2), µ being some effective inertial mass (not just in the sense of the periodic potential's band structure). If we take the disordered potential to be characterized by a spatial-correlation characteristic length ℓ, then ∆p ℓ ∼ ħ yields µ ∼ ħ²/(2Wℓ²). So, an infinitely narrow band (W → 0) would lead to an infinite effective mass, or zero mobility and no conduction. This argument (though by no means a proof) works for the atomic as well as for the perfect periodic lattice limits. It is also important to understand conduction in doped semiconductors, where infinitely narrow impurity levels represent weakly-bound localized states giving way to an impurity conduction band only in the limit of heavily doped samples.
The outstanding example we know of a single-energy localized degenerate state is that of a Landau level in the impurity-free electron gas. There, the enormous degeneracy of the Landau levels is a consequence of the independence of energy on the orbital momentum L_z. In our view, as soon as impurity collisions take place there will be new states, some of them extended and ready to accept the scattered electrons. The finite band width of such states ensures the mobility of the electrons being scattered and therefore conduction. The finite band-width picture for the extended states would thus seem to be in agreement with basic quantum physics principles. We should stress, however, that there exist 1D models of electrons in random potentials [7,8,24,25,26,27] which do demonstrate the possibility of single-energy extended states. Nevertheless, these models have a singular DOS (vanishing or divergent at this energy), a situation which is not covered by the above qualitative argument. In this paper we aim at describing physical 2D systems with a regular DOS and no extreme behavior. The Dirac fermion model with an inhomogeneous mass, as used in this paper, is a plausible representation for electrons undergoing the IQH transition in the presence of a random potential, as discussed in Ref. [8]. As explained also in the next Section, initial work with this model relied on perturbative calculations which led to unphysical results for the density of states [8]. However, it was shown by one of the present Authors that a non-perturbative approach would cure this problem yielding all the desired features with the sole random mass ingredient. Moreover, the non-perturbative calculation leads to a narrow, but finite band width for the extended states ([10,28]; see also [29]). As we have explained, we find this feature of the model rather attractive, together with the fact that the approach lends itself to a number of predictive analytical calculations not possible otherwise with the single-energy picture. The Dirac fermion formulation was recently also strengthened by a mapping of the network model [11] for the IQH transition onto a Dirac effective Hamiltonian with both inhomogeneous mass and scalar as well as vector potentials [30]. In our view, the inhomogeneous mass is sufficient to give the full picture, and at the same time the simplest.
The purpose of this paper is to investigate the low-frequency behavior of the conductivity peak on the basis of a theory accounting directly for the IQH delocalized states, using this effective and appealing approach based on Dirac fermions. This approach has so far been used to account for the static transport properties of the IQH transition, and the method can be extended to include thermal fluctuations [31]. Frequency-dependent behavior is, however, very similar to temperature behavior [16], since dynamics enters in the formalism through the Matsubara frequencies, which are themselves proportional to temperature. We will therefore work only with the frequency-dependent Dirac fermion approach. A complementary approach to the frequency-dependent conductivity was worked out by Polyakov and Shklovskii using the hopping mechanism of localized states [32]. They derived power laws for the broadening of the conductivity peaks due to frequency, current and temperature. In contrast to our approach, which works very close to the conductivity peak at low frequency, they study the broadening in the regime of higher frequency where the electronic states become localized if the frequency goes to zero. The success of this hopping approach calls for an extension of these ideas to the regime where the states are delocalized. This is precisely what we shall do in the following using the method of Dirac fermions which seems to afford a good deal of predictive power.
The paper is organized as follows. In Section II we briefly recall the basic features of the Dirac fermion approach to the IQH transition. This approach is implemented in Section III in order to directly evaluate the AC longitudinal conductivity σ xx (ω) from the Dirac fermion propagator in which a weakly-disordered one-particle potential is accounted for. The main features of the AC conductivity are described in the light of recent numerical as well as experimental investigations of dynamical scaling in the IQH transition region. In Section IV the description is specialized for the frequency dependence of the conductivity peak width, which is found to be in agreement with some recent measurements by Shahar et al. [33] carried out for the (related) temperature dependence of the longitudinal resistivity width and by Balaban et al. [17] for the frequency dependence. For non-vanishing frequencies the results of our calculation also agree with the experimental data of Engel et al. [16] and the numerical data of Avishai and Luck [19]. Section V contains our conclusions and outlook.
II. THE MODEL AND DERIVATION OF THE CONDUCTIVITY
The main features of the IQH transition are captured quite effectively by a tight-binding model in which, although the real system has no lattice, the electrons hop over a scale given by the magnetic length [8]. This is closely related to the Chalker-Coddington network model [11] in which electrons hop from region to region with random tunneling and random magnetic flux. Both models [8,30] lead in the large-scale approximation to an effective Hamiltonian H_D describing the dynamics of Dirac fermions (with a random mass or coupled to a random vector potential), in which the energy is measured in units of the hopping parameter t of the original lattice model; ∇_j is the lattice differential operator in the j-direction and {σ_j} are the Pauli matrices. This Hamiltonian, with a random mass term M and zero random vector potential A_j, is a reasonable starting point for the description of the IQH transition between plateaus at filling ν = 1. One important issue is what type of randomness is realistic in the approach with Dirac fermions. Ludwig et al. argued that the case of a random Dirac mass is insufficient to describe the generic IQH transition since it has a vanishing DOS at the transition point. Yet the random Dirac mass is reasonable on an intuitive basis because it is the most relevant random contribution to the Dirac Hamiltonian in terms of symmetry breaking [34] (see discussion in Ref. [8]). However, it was shown by several other Authors and approaches that the DOS becomes non-zero at the transition when treated on a non-perturbative basis [10,28,29]. Therefore the random mass case, in contrast to that of a pure random vector potential, ought to represent a generic model for the IQH transition and this is the point of view adopted in this article. We choose a random mass M = m + δM with mean m and a Gaussian distribution with ⟨δM_r δM_r′⟩ = g δ_rr′, g being a measure of the strength of disorder.
The frequency-dependent conductivity at T = 0 is given by the Kubo formula of Ref. [35], where G is the one-particle Green's function of H_D. In the following we will use the standard approximation of small ω for σ_xx(ω) [7], σ_xx(ω) ≈ (ω/2) σ̃_xx(ω, 0).
This approximation will be shown to be equivalent to the Einstein relation, as can be deduced from the expression for C̃(k, η − iω, E) which will be derived below. According to the Dirac fermion approach of Ref. [10], the two-particle Green's function C̃(k, η − iω, E) describes a diffusive behavior between the Hall plateaus, with a diffusion pole of the form C̃ ∝ ρ/(Dk² + η − iω), where ρ is the average DOS and D the diffusion coefficient (notice that we use a notation for D different from that of Ref. [10]), for which a closed expression is obtained in the limit of an infinite cutoff.
The parameters m′ and η′ ≡ πgρ have been evaluated within a saddle point approximation [10]. They obey two coupled equations (taking the limit η → 0) involving an integral I, in which we have cut off the k-integration at |k| ≤ λ. This is necessary because the integral I would not otherwise exist. The cut-off corresponds to a minimal length scale in the real system, i.e. the lattice constant in our model, which is usually the mean free path of the particles. It reflects the fact that (quasi)particles cannot be considered as independent on arbitrarily short scales.
We notice that this is the Einstein relation. Going back to the general case, Eqs. (13) and (15) yield an expression for the conductivity that represents a simple scaling form, since only the combination m′/η′ enters the expression.
Now we have to evaluate m′ and η′ from Eqs. (10) and (11). For ω, η′ ≠ 0, Eq. (10) can be recast in terms of the factor e^(2π/g) e^(2πiω/η′g), and since we are interested in the small frequency regime ω ≈ 0 we find it useful to work out a closed approximate analytic solution of this equation. For ω ≈ 0, we have gI ≈ 1 from Eq. (10). Then Eq. (11) implies m′ ≈ m/2, and the exponential term in Eq. (18) can be expanded to give, in leading order, a cubic equation in η′ of the form η′³ = aη′ + b, with a = (m_c² − m²)/4, b ∝ ω and m_c = 2λ exp(−π/g). From this cubic equation we take the solution that reproduces the correct result in the limit ω → 0, namely η′ → √a for a ≥ 0 and η′ → 0 for a < 0 [31]. Using the approximate values of m′ and η′ from above we can study the frequency dependence of the conductivity. For b ∝ ω large compared to (m_c² − m²)/4 we have η′ ∝ ω^(1/3) from Eq. (19). Consequently, the conductivity follows a power-law scaling in this regime, with Re(σ_xx) growing roughly like ω^(2/3) near m ≈ m_c. This, of course, does not hold for all values of ω because η′ is not a power law for very small ω. The general behavior of the real and imaginary parts of σ_xx(ω) is shown in Fig. 1, for the illustrative values m_c = 0.01 and m = 0.009.
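The small-frequency solution sketched above can be checked numerically. In the snippet below the cubic η′³ = aη′ + b is our reconstruction from the stated limits, so it should be read as illustrative; it reproduces the finite width η′ → √a as ω → 0 and the η′ ∝ ω^(1/3) growth once the b term dominates.

```python
import numpy as np

m_c, m, g = 0.01, 0.009, 0.6           # illustrative values quoted in the text
a = (m_c**2 - m**2) / 4.0

def eta_prime(omega):
    """Physical (real, non-negative) root of eta'^3 - a*eta' - b = 0."""
    b = 2 * np.pi * m_c**2 * omega / (4 * g)
    roots = np.roots([1.0, 0.0, -a, -b])
    real = roots.real[np.abs(roots.imag) < 1e-9]
    return real[real >= 0].max()

print("omega -> 0 limit:", eta_prime(1e-12), "vs sqrt(a) =", np.sqrt(a))
for w in (1e-6, 1e-5, 1e-4, 1e-3):
    print(f"omega = {w:.0e}  eta' = {eta_prime(w):.4e}")  # ~omega**(1/3) once b >> a
```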
IV. BROADENING OF THE σ_xx PEAK
Much of the experimental and numerical work on this problem has concentrated on the conjectured universal scaling behavior of the peak width of σ_xx(B, ω). It is clear from our calculation in the previous Section that the peak width of our model does not vanish with vanishing frequency. There are, at this point, two possible attitudes towards this fact: either our model does not capture the physics of the real systems, or the non-vanishing width is too small to be resolved in the experiments [23] or in the computer calculations [11,12]. There are alternative models, for instance the 2D Dirac fermions with random vector potential in place of the random mass, which do have a vanishing peak width [8]. However, these models have an unphysical behavior due to a singular DOS and a peak height different from experimental observations [6]. Since within a non-perturbative calculation the peak height and the smooth behavior of the DOS in the case of a random mass are in good agreement with experiments, it is more likely that the second point of view applies to our model. This we have advocated for in our presentation of the theoretical framework used in this paper, as given in the Introduction. Moreover, very recent experiments indicate that the peak width does indeed not vanish in the zero temperature and zero frequency limit [33,17]. The exponential dependence of the width on the disorder parameter may explain why it is difficult to measure the narrow peak width, in particular in samples with weak disorder.
The broadening of the peak width can be seen in Fig. 2, where Re(σ_xx) is plotted as a function of the average Dirac mass m and the frequency ω for, as an illustration, disorder strength g = 0.6 corresponding to the value m_c = 0.01. For m ≈ m_c the conductivity Re(σ_xx) varies roughly like ω^(2/3) as one would expect from Eqs. (22) and (23). However, away from m_c the broadening does not, strictly speaking, describe a power law.
V. DISCUSSION AND CONCLUSIONS
The broadening of the conductivity peak has also been studied numerically by Avishai and Luck [19]. They studied the scaling of the real part of the conductivity σ_xx(E, ω), finding that there is indeed a scaling law of the type σ_xx(E, ω) = F(|E − E_c|/ω^κ) for the real part of the conductivity. These Authors found values for κ between 0.31 and 0.43. We can use our result from Eq. (16), where E − E_c is replaced by the parameter m driving the IQH transition. However, in contrast to Eq. (24), we do not obtain a simple scaling form for arbitrarily small frequency (see Fig. 2). Still, for weak disorder the agreement with Ref. [19] is qualitatively reasonable, at least for small values of m − m_c (the numerical values quoted are for the critical point).
In general, if we insist on a scaling law, the exponent κ is related to the dynamical exponent z and to the localization length exponent ν by κ = 1/zν. From κ = 1/3 we could evaluate ν because the usual argument for the dynamical exponent of non-interacting particles is z = 2 [19]. The latter arises from the assumption that the diffusion coefficient and the DOS are constant near the critical point in the two-particle Green's function of Eq. (7). However, if the DOS itself behaves like a power law, ρ ∼ ω^α, then the exponent z depends on α. For instance, the pure system of Dirac fermions has α = 1 and there is an effective exponent α = 1/3 in the case of a random mass, according to the discussion below Eq. (22) (and see also Fig. 3). Then Eq. (14) implies D ∼ ω^(−α), and from a simple calculation for the two-particle Green's function of Eq. (7), with k ∼ L^(−1) and η = 0, we obtain a scaling form with dynamical exponent z = 2/(1 + α), where C̃ is the scaling function. With α = 1 (pure system) this implies z = 1 and with α = 1/3 it gives z = 3/2. The above scaling form does not describe the asymptotic behavior for ω ∼ 0, but rather the effective behavior probably relevant for most of the numerical calculations. The result in Eq. (22) is also different from the Drude theory [36], suitably adapted to accommodate the presence of the Lorentz force on the scattered electrons: σ_xx(ω) = σ_0 (1 − iωτ)/[(1 − iωτ)² + (ω_c τ)²], where the zero field DC Drude conductivity σ_0 and the cyclotron frequency ω_c are defined in the usual way (τ is a collision time). The frequency scale in our study is determined by the value of the hopping parameter t in the tight-binding model of Ludwig et al. [10]. The physical frequency ω̃ is related to the dimensionless frequency ω via ω = ħω̃/t. Moreover, if we employ the parameter b = 2πm_c²ω/4g with the illustrative value of m_c = 0.01, the dimensional frequency is related to b via ω̃ ≈ 4 × 10^16 b Hz for a value of t = 1 eV. (Note, however, that our choice m_c = 0.01 (i.e. g = 0.6) is probably only qualitatively significant, since it is difficult to estimate the value of disorder strength g in a real sample. The above value may be too large for real systems that have less disorder.)
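To make the contrast concrete, the snippet below evaluates the magneto-Drude form quoted above (itself reconstructed here in its standard textbook shape) against the resonance-free ω^(2/3) growth found near the transition; all parameter values are arbitrary.

```python
import numpy as np

sigma0, tau, omega_c = 1.0, 1.0, 2.0   # arbitrary Drude parameters

for omega in (0.1, 1.0, 2.0, 4.0, 10.0):
    z = 1 - 1j * omega * tau
    drude = sigma0 * z / (z**2 + (omega_c * tau)**2)  # cyclotron resonance near omega_c
    qh = 0.1 * omega**(2 / 3)                         # schematic transition-peak trend
    print(f"omega={omega:5.1f}  Re sigma_Drude={drude.real:6.3f}  "
          f"~omega^(2/3)={qh:6.3f}")
```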
Our assumption of small frequency breaks down if ħω̃ becomes of the order of the characteristic energy scale given by the hopping rate t. The latter is of the order of 1 eV in realistic systems; thus, we expect a cross-over frequency of about ω̃ ≈ 10^15 Hz. Typical experiments were performed for frequencies between 0.2 and 14 GHz [16]; this is well below the cross-over frequency, and our small frequency approximation should hold.
In a recent paper Shahar et al. [33] have studied the behavior of the longitudinal resistivity, in the neighborhood of the transition from the QH liquid to the Hall insulator, known to be of the form ρ_xx ∝ exp(−∆ν/ν_0(T)). These Authors found a deviation from the conventional scaling form ν_0(T) = T^(1/zν). Instead, they suggest fitting their experimental data with the form ν_0(T) = α + βT, where α, β > 0 (both parameters α and β depending strongly on sample properties). A very similar result was found by Balaban et al. [17] for the frequency-dependent conductivity at zero temperature. These results are consistent with our finding of a deviation of the broadening from a power law for very low frequencies. It is a consequence of the non-zero band width of delocalized states in the disorder-dependent interval [−2λ exp(−π/g), 2λ exp(−π/g)] present in our model.
In conclusion, we have developed a theoretical treatment for the dynamical transport properties of the IQH system near the plateau-to-plateau transition. Our results indicate an increase of the longitudinal conductivity accompanied by the broadening of the conductivity peak as the frequency is increased. This behavior follows, at large frequencies, a power law also found in recent experiments [16] and numerical studies [19]. However, for very low frequency we found deviations from a power law, similarly to what was found in recent experiments by Shahar et al. [33] and Balaban et al. [17]. For larger values of the frequency, the scaling behavior of the conductivity is recovered with an exponent 1/3 which is in agreement with the recent work by Avishai and Luck [19]. These results follow from a theoretical treatment in which the delocalized states responsible for the IQH transition are properly accounted for.
Similar patterns of clonally expanded somatic mtDNA mutations in the colon of heterozygous mtDNA mutator mice and ageing humans
Highlights

• Colonic crypts with mitochondrial dysfunction accumulate with age in PolgA+/mut mice.
• Mitochondrial dysfunction is caused by clonally expanded mtDNA point mutations.
• The mutations are random and their expansion is not subject to selective constraints.
• Colonic crypts of aged humans have a similar mtDNA mutation spectrum and phenotype.
• PolgA+/mut mice are a good model to study mitochondrial dysfunction in ageing colon.

Clonally expanded mitochondrial DNA (mtDNA) mutations resulting in focal respiratory chain deficiency in individual cells are proposed to contribute to the ageing of human tissues that depend on adult stem cells for self-renewal; however, the consequences of these mutations remain unclear. A good animal model is required to investigate this further, but it is unknown whether mechanisms for clonal expansion of mtDNA mutations, and the mutational spectra, are similar between species. Here we show that mice heterozygous for a mutation disrupting the proof-reading activity of mtDNA polymerase (PolgA +/mut ), resulting in an increased mtDNA mutation rate, accumulate clonally expanded mtDNA point mutations in their colonic crypts with age. This results in focal respiratory chain deficiency, and by 81 weeks of age these animals exhibit a similar level and pattern of respiratory chain deficiency to 70-year-old human subjects. Furthermore, like in humans, the mtDNA mutation spectrum appears random and there is an absence of selective constraints. Computer simulations show that a random genetic drift model of mtDNA clonal expansion can accurately model the data from the colonic crypts of wild-type and PolgA +/mut animals, and humans, providing evidence for a similar mechanism for clonal expansion of mtDNA point mutations between these mice and humans.

Introduction

Ageing is a stochastic process characterised by a decline in the replicative and regenerative processes within tissues, resulting in impaired tissue homeostasis and increased susceptibility to disease and eventually death (Kirkwood, 2005). Damage to the mitochondrial DNA (mtDNA) resulting in respiratory chain dysfunction has been proposed to be a significant contributor to the ageing phenotype (Bratic and Larsson, 2013; Larsson, 2010; Linnane et al., 1989).

The human mitochondrial genome is a covalently closed molecule of ∼16.5 kb, encoding 13 proteins, 2 rRNAs (12s and 16s) and 22 tRNAs (Anderson et al., 1981). The 13 mtDNA encoded proteins form essential subunits of the oxidative phosphorylation system (OXPHOS), and in the absence of mtDNA expression the OXPHOS system breaks down (Larsson et al., 1998). MtDNA is present in multiple copies in the mitochondrial matrix, where it is compacted to form nucleoprotein complexes with the aid of mitochondrial transcription factor A (TFAM) (Kukat and Larsson, 2013).

MtDNA has a ∼10-fold higher mutation rate than the nuclear DNA, thought to be primarily due to endogenous replication errors mediated by mtDNA polymerase (Zheng et al., 2006) and possibly also unrepaired oxidative lesions. Special mechanisms exist to minimise maternal transmission of mutated mtDNA (Fan et al., 2008; Freyer et al., 2012; Krakauer and Mira, 1999; Stewart et al., 2008b); however humans still frequently inherit low level mtDNA heteroplasmy (He et al., 2010; Payne et al., 2013). Given the multicopy nature of mtDNA within cells, and the fact that most mtDNA point mutations are recessive, clonal expansion of mtDNA mutations must occur until a critical threshold level is reached that impairs the respiratory chain. Data from computational models (Coller et al., 2001; Elson et al., 2001; Taylor et al., 2003) and mitochondrial mutation assays (Coller et al., 2005) suggest that mtDNA mutations arise early in life and subsequently expand to levels high enough to cause respiratory chain dysfunction. This defect can be readily identified by the absence of histochemical staining for cytochrome c oxidase (COX) activity (Old and Johnson, 1989), and this serves as an excellent biomarker of mtDNA defects in somatic tissues. Although the mechanism of clonal expansion is not definitely known, there is some evidence that random genetic drift may be sufficient (Elson et al., 2001; Kowald and Kirkwood, 2013; Taylor et al., 2003). In a number of human replicative tissues that depend on adult stem cells for self-renewal (e.g. colon, stomach, small intestine, liver and pancreas), somatic mtDNA mutations have been found to clonally expand and cause an accumulation of COX deficient cells with age (Fellous et al., 2009; McDonald et al., 2008; Taylor et al., 2003). We have previously shown that in the ageing human colon somatic mtDNA point mutations occur at random and their expansion is not subject to selective constraints (Greaves et al., 2012).
The functional consequences of mtDNA defects in human stem cells remain largely unknown, partly due to a lack of robust stem cell markers but also due to the static nature of human tissue samples available at only one time point. An animal model which shows similar evidence of age-related clonally expanded somatic mtDNA mutations is therefore likely to prove valuable in enabling more detailed phenotyping of the functional consequences of mtDNA defects in stem cell populations, particularly as more robust stem cell markers are available for mouse tissues (Barker et al., 2007;Itzkovitz et al., 2012).
We have previously shown that aged wild-type mice display significantly lower levels of COX deficient colonic crypts (∼1.5% at 36 months old) compared to aged humans (∼15% over the age of 70) (Greaves et al., 2011). If random genetic drift is the mechanism by which COX deficiency occurs in mouse colonic crypts, it may be that the shorter lifespan of the mouse does not allow enough time for the threshold mutation load to be reached (Kowald and Kirkwood, 2013). Therefore normal ageing mice are not suitable for the investigation of mtDNA defects in ageing stem cells. The mtDNA mutator mouse (PolgA mut/mut ) has a homozygous mutation (D257A) in the proof-reading domain of the catalytic subunit of the mtDNA polymerase, which results in increased levels of mtDNA mutations and a severe premature ageing phenotype. This mouse model has established a causal relationship between the accumulation of mtDNA point mutations and ageing (Kujoth et al., 2005; Trifunovic et al., 2004). The PolgA mut/mut mice accumulate mtDNA mutations to a significantly higher frequency than seen in normal human ageing; however, mice heterozygous for the D257A mutation (PolgA +/mut ) accumulate only a moderate level of mtDNA mutations (Kraytsberg et al., 2009). Furthermore, they demonstrate age-related respiratory chain deficiency in the heart and duodenum (Vermulst et al., 2008), similar to humans.
Here we compare the frequency of COX deficient colonic crypts in the PolgA +/mut mice with that of colonic epithelium samples from ageing humans. We show that the PolgA +/mut mice accumulate COX deficient crypts in an age-dependent manner, and that the mutations causing the COX deficiency are clonally expanded mtDNA point mutations. These mutations are similar in location and pathogenicity to those detected in the ageing human colon and also appear to lack selective constraints. Computer simulations suggest that increasing the mtDNA mutation rate within a random genetic drift model of clonal expansion can result in the observed phenotypes in the PolgA +/mut mouse, suggesting a similar mechanism for clonal expansion of mtDNA point mutations in the colonic crypts of this mouse model and humans.
Mouse strains and colon samples
Mitochondrial mutator mice (PolgA mut/mut ) were generated that had a knock-in missense mutation (D257A) in the second endonuclease proofreading domain of the PolgA catalytic subunit of the mtDNA polymerase (Trifunovic et al., 2004). Colon samples were collected from an ageing series of 10 heterozygous mutator mice (PolgA +/mut ) (aged 18-81 weeks old).
COX/SDH histochemistry and DNA isolation from single colonic crypts
Colon samples were collected, mounted for sectioning and frozen in isopentane previously cooled to −160 °C in liquid nitrogen. Cryostat tissue sections (10 μm) were cut on to glass slides and dried at room temperature for 1 h. A standard histological stain (Haematoxylin and Eosin) was performed on 10 μm colon sections to examine normal tissue morphology. Dual colour histochemistry was performed to determine the magnitude of COX deficient crypts and for subsequent laser micro-dissection and DNA extraction. Sections were incubated in COX medium (100 μM cytochrome c, 4 mM diaminobenzidine tetrahydrochloride and 20 μg ml⁻¹ catalase in 0.2 M phosphate buffer, pH 7.0) at 37 °C for 25 min. Sections were washed in phosphate buffered saline (PBS) (3 × 5 min) and then incubated in SDH medium (130 mM sodium succinate, 200 μM phenazine methosulphate, 1 mM sodium azide, 1.5 mM nitroblue tetrazolium in 0.2 M phosphate buffer, pH 7.0) at 37 °C for 35 min. Colon sections were washed in PBS (3 × 5 min), dehydrated through graded ethanol (70%, 95% and 2 × 100%) and two changes of Histoclear™ (National Diagnostics, Atlanta, USA), and mounted in DPX. The percentage of COX deficient colonic crypts was identified in transverse colon sections at multiple different levels and approximately 500 crypts were examined per tissue sample.
For laser micro-dissection, frozen sections of colon tissue (15 μm) were mounted on polyethylene naphthalate slides (Leica Microsystems). Sections were exposed to dual colour histochemistry, as described above, and air-dried after dehydration through graded ethanol. Single COX positive and COX negative colonic crypts were cut using a Zeiss PALM micro-dissection system into sterile 0.5 ml PCR tubes; samples were centrifuged at 14,000 rpm for 15 min and lysed in standard lysis buffer (Taylor et al., 2003).
MtDNA sequencing of individual colonic crypts
The single crypt lysate (see above) was used as the DNA template to establish the complete sequence of the mouse mitochondrial genome from the single microdissected crypts. A single stage PCR reaction was employed that involved amplification of the mitochondrial genome using 30 pairs of forward and reverse primers to generate overlapping fragments of 1-1.2 kb spanning the whole mouse mtDNA genome. The primer pairs were tagged with M13 sequence to enable sequencing of the PCR products with a universal M13 primer designed to anneal at 58 °C. The DNA lysate was diluted 1:10, and PCR reactions were implemented in 37.5 μl volumes using a mastermix comprising 1× LA PCR buffer (Mg²⁺) (Takara Bio Inc.), 0.2 mM dNTPs, 0.9 μM primers, 5 U LA Taq DNA Polymerase (Takara Bio Inc.) and 3.75 μl of single cell lysate (1:10 dilution).
Reaction conditions were 94 °C for 1 min followed by 35 cycles of 94 °C for 20 s, 58 °C for 20 s and 72 °C for 2 min. The optimal final extension was at 72 °C for 2 min. PCR products were diluted 1:2 and purified using TSAP (Promega) to remove excess primers, and samples were sequenced using BigDye v3.1 terminator cycle sequencing chemistries on an ABI3130xl Genetic Analyser (Applied Biosystems). The sequence for each crypt was aligned to the C57Bl/6J mouse reference sequence (GenBank Accession number NC_005089) and the consensus DNA sequence for that mouse using SeqScape software (Applied Biosystems) to determine the somatic mtDNA point mutations that accumulated in the crypts over time. This was repeated and PCR products resequenced to confirm the somatic mtDNA mutations detected. Heteroplasmy levels were estimated based upon the relative peak height of electropherograms.
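As a rough illustration of the peak-height approach, the sketch below (a hypothetical helper, not the authors' SeqScape workflow) computes a heteroplasmy estimate from the two base-call peak heights at a variant position:

def estimate_heteroplasmy(mutant_peak, wildtype_peak):
    # Mutant fraction from the relative electropherogram peak heights at a
    # variant position. This is a simplification: sequencing chemistry biases
    # peak heights, so such estimates are only semi-quantitative.
    total = mutant_peak + wildtype_peak
    if total == 0:
        raise ValueError("no signal at this position")
    return mutant_peak / total

# e.g. a mutant peak of 620 units against a wild-type peak of 180 units
print(estimate_heteroplasmy(620, 180))  # 0.775, i.e. ~78% heteroplasmy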
Long-range PCR of individual crypts
The single crypt lysate (see above) was used as the DNA template for long-range PCR to determine whether large-scale circular mtDNA deletions were present. Two rounds of PCR were carried out. PCR was performed using the Takara LA Taq PCR system (Takara Bio Inc.) according to the manufacturer's recommended conditions. After 30 cycles of first round PCR (95 °C for 20 s, 68 °C for 16 min), 2 μl of the first round product was used as template for the second round PCR reaction (same as above) and 20 further cycles of PCR carried out. First round PCR primers were L272-L301 and H16286-H16254; second round primers were L1275-L13004 and H15833-H15804, giving an expected product size of ∼14.5 kb. PCR products were subjected to electrophoresis through a 0.8% agarose gel and the band sized against a 1 kb ladder (Promega).
In silico modelling of clonal expansion of mtDNA mutations
We developed a stochastic model of mtDNA mutation expansion in stem cells by random genetic drift. The model was designed and run in MATLAB (version 8.0, MathWorks, Massachusetts, United States) and is available from the author upon request (CS). Each stem cell contains a constant number of 200 mtDNA molecules (Coller et al., 2001), which replicate (relaxed replication) during each cell cycle and are then segregated symmetrically to the daughter cells. After the division one of the daughter cells is kept and the other discarded, along with the mtDNA molecules it contains (asymmetric division). In each division there is a certain probability (equal to the mtDNA mutation rate) that a pathogenic mtDNA mutation occurs. The mutated mtDNA molecule can then clonally expand to take over the stem cell, which becomes COX deficient. To match the experimentally observed COX deficiency data from WT, PolgA +/mut and PolgA mut/mut mice, we performed a parameter scan of the mtDNA mutation rate, assuming >75% mutated mtDNA was enough to confer COX deficiency. The mtDNA mutation frequency was determined by averaging 1000 simulation runs of 1080 divisions (36 months). Full details are included in Supplementary File 5.
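The authors' implementation is in MATLAB and available on request; the following is a minimal Python sketch of the same drift model under the stated assumptions (200 molecules per cell, one division per day, random segregation of a doubled pool, >75% mutant mtDNA scored as COX deficient). The doubling step is a simplified stand-in for relaxed replication, and the mutation rates in the scan are illustrative placeholders, not the authors' fitted values.

import numpy as np

rng = np.random.default_rng(0)

MTDNA_PER_CELL = 200   # mtDNA copies per stem cell (Coller et al., 2001)
DIVISIONS = 1080       # one division per day for 36 months
THRESHOLD = 0.75       # mutant fraction assumed to confer COX deficiency
N_RUNS = 1000          # independent lineages averaged per mutation rate

def final_mutant_fraction(mutation_rate):
    # One stem-cell lineage. Each division the pool is doubled: every
    # wild-type template produces one new copy that mutates with probability
    # mutation_rate, while mutant templates breed true. The retained daughter
    # then receives half of the doubled pool by a random (hypergeometric)
    # draw -- this sampling step is the genetic drift.
    mutant = 0
    for _ in range(DIVISIONS):
        new_mutants = rng.binomial(MTDNA_PER_CELL - mutant, mutation_rate)
        pool_mutant = 2 * mutant + new_mutants
        mutant = rng.hypergeometric(pool_mutant,
                                    2 * MTDNA_PER_CELL - pool_mutant,
                                    MTDNA_PER_CELL)
    return mutant / MTDNA_PER_CELL

def cox_deficient_fraction(mutation_rate):
    # Fraction of simulated lineages whose final mutant load exceeds the
    # COX-deficiency threshold.
    hits = sum(final_mutant_fraction(mutation_rate) > THRESHOLD
               for _ in range(N_RUNS))
    return hits / N_RUNS

# Parameter scan over per-replication mutation rates (illustrative values,
# not the fitted rates for WT, PolgA +/mut and PolgA mut/mut animals).
for rate in (1e-7, 1e-6, 1e-5):
    print(f"rate = {rate:.0e}: {cox_deficient_fraction(rate):.1%} sim-COX-deficient")

In a scan like this, only the mutation rate changes between the simulated genotypes; everything else (copy number, division rate, threshold) is held fixed, which is the point of the experiment described above.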
Respiratory chain deficiency in colonic crypts
Dual COX/SDH histochemistry and Haematoxylin and Eosin (H&E) histology were performed on colonic epithelial tissue of a series of ageing (18-81 weeks) PolgA +/mut mice. The H&E staining showed normal colonic epithelial morphology. The COX/SDH histochemistry detected colonic crypts that were both partially and fully COX deficient, present in a random, mosaic pattern throughout the tissue (Fig. 1a and b). The number of fully COX deficient colonic crypts increased markedly with age, from 0% at 18 weeks to ∼14% in 81 week old animals (Fig. 1c). The frequency of partially COX deficient crypts also increased with age, from 0% in the youngest animals to 8% in the oldest. Analysis of longitudinal sections through COX deficient crypts (Fig. 1b) confirmed that, as in the human colon, COX deficiency was present from the base to the apex of the crypts, including within the stem cell compartment. When the data from the 81 week PolgA +/mut mice were compared to the frequency of COX deficiency in the ageing human colon (Taylor et al., 2003), there was no significant difference (p = 0.854, unpaired t test) (Fig. 1d).
MtDNA mutations in PolgA +/mut mouse colonic crypts
To determine whether COX deficient colonic crypts in the PolgA +/mut mouse were due to clonally expanded mtDNA point mutations, we laser micro-dissected at least 8 individual COX deficient and 4 COX positive colonic crypts from three 81 week old PolgA +/mut animals, and sequenced the entire mitochondrial genome. Long-range PCR was carried out to look for the presence of large-scale mtDNA deletions.
No large-scale mtDNA deletions were detected in either COX positive or COX deficient crypts (Supplementary File 1a). In contrast, somatic mtDNA point mutations were detected in both COX positive and COX deficient PolgA +/mut mouse colonic crypts (Supplementary File 2). In the COX positive crypts the majority (67%) of the mtDNA point mutations detected were present at low levels of heteroplasmy (≤50%) and were therefore unlikely to affect mitochondrial function (Fig. 2a). Eighty percent of the changes observed were base transitions (Fig. 2b) and 49% affected complex I genes (Fig. 2c). In COX deficient crypts, 81% of the mutations detected were base transitions, predominantly C > T transitions (49%) and G > A changes (17%) (Fig. 2b). They were mainly present in protein encoding genes, with 34% in complex I genes and 27% in complex IV genes (Fig. 2c). Seventy-three percent of these mutations were present at heteroplasmy levels ≥50% and were non-synonymous mutations predicted to cause amino acid substitutions (Fig. 2a and Supplementary File 2). Twenty-five percent of the mutations were detected in mt-tRNA and mt-rRNA genes at heteroplasmy levels of ≥75% up to homoplasmy. Example electropherograms of homoplasmic and heteroplasmic mutations detected in COX deficient crypts are shown in Supplementary File 1b.

Fig. 2 (caption fragment): … There was no significant difference in the types of changes between the two (χ² test, p = 1.000). (c) Gene location of mutations in individual mtDNA encoded genes in COX deficient and positive colonic crypts. There was a significant difference between the location of the mtDNA mutations detected in COX positive and COX deficient crypts (χ² test, p < 0.0001).
To investigate whether somatic mtDNA mutations occurred at random in the PolgA +/mut mouse colon we grouped the mutations according to gene location. There was no significant difference between the observed frequencies of somatic mutations in each gene type and those expected due to random chance, based on the relative proportion of the genome occupied by each gene (p = 0.139, χ² test, Fig. 3a). Furthermore, there was no significant difference in the ratio of synonymous to non-synonymous changes across the different heteroplasmy classes (p = 0.6765, χ² test, Fig. 3b), suggesting a lack of mutational bias towards the clonal expansion of deleterious mutations in the PolgA +/mut colon.
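The goodness-of-fit test described here is straightforward to reproduce; the sketch below uses hypothetical counts and genome fractions (not the paper's data) purely to illustrate the procedure:

from scipy.stats import chisquare

# Hypothetical mutation counts per gene class (NOT the paper's data).
observed = [58, 32, 18, 25, 9]                    # e.g. complex I, complex IV, other protein, tRNA/rRNA, non-coding
genome_fraction = [0.38, 0.12, 0.10, 0.25, 0.15]  # illustrative fractions of the mtDNA occupied by each class
expected = [sum(observed) * f for f in genome_fraction]

# H0: mutations are distributed in proportion to gene length (random occurrence)
stat, p = chisquare(observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p:.3f}")  # a large p is consistent with a random distribution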
The levels of clonally expanded somatic point mutations of mtDNA are similar in the colon from ageing PolgA +/mut mice and humans

A comparison of the nature of somatic mtDNA mutations causing COX deficiency in colonic crypts of PolgA +/mut mice and ageing humans was performed (Greaves et al., 2006, 2010, 2012; Taylor et al., 2003) (Supplementary File 3). There was no significant difference in the gene location of the mutations in COX deficient crypts in the PolgA +/mut mouse and human colon, with 60% of the mutations affecting genes encoding subunits of complexes I and IV, and 25% in non-coding RNA (tRNA and rRNA) genes in both datasets (p = 0.2204, χ² test, Fig. 4a).
Next we compared the types of mtDNA mutations (transitions, transversions and insertions/deletions) and found a significant difference in the types of mutations detected in COX deficient colonic crypts between PolgA +/mut mice and humans (p < 0.0001, Fig. 4b). Although the majority of changes were base transitions (∼80%) in both data sets, no insertions/deletions were detected in PolgA +/mut mouse colonic crypts, whereas they accounted for ∼10% of changes in the ageing human colon (Greaves et al., 2012). Additionally, a higher proportion of transversions were observed in COX deficient crypts from the PolgA +/mut mice (19%) than in humans (5%).
To determine whether the mechanism underlying COX deficiency in colonic crypts was the same in the human and PolgA +/mut mouse data sets, somatic mtDNA mutations were assigned pathogenicity according to the following criteria: (1) not previously identified as a polymorphic variant (as reported in GenBank or transmitted through the PolgA mut/mut germline (Stewart et al., 2008b)); (2) changed an amino acid or occurred in a tRNA or rRNA gene; (3) present at levels >50%; and (4) associated with decreased activity of a respiratory chain complex (COX deficiency) (Greaves et al., 2010; Taylor et al., 2003). The frequency of pathogenic mtDNA point mutations was similar (p = 0.8194, χ² test) in COX deficient colonic crypts from PolgA +/mut mice (62%) and humans (59%) (Fig. 4c). This indicates that COX deficiency is caused by the clonal expansion of at least one pathogenic mtDNA point mutation in ∼60% of colonic crypts in both ageing humans and PolgA +/mut mice.
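The four criteria amount to a simple conjunction, which the following sketch makes explicit (the record fields are hypothetical, introduced only for illustration):

from typing import TypedDict

class Mutation(TypedDict):
    known_polymorphism: bool   # reported in GenBank or transmitted through the PolgA mut/mut germline
    changes_amino_acid: bool   # non-synonymous change in a protein-encoding gene
    gene_type: str             # "protein", "tRNA" or "rRNA"
    heteroplasmy: float        # mutant fraction, 0.0-1.0
    crypt_cox_deficient: bool  # crypt shows decreased respiratory chain (COX) activity

def is_pathogenic(m: Mutation) -> bool:
    # All four pathogenicity criteria listed above must hold.
    return (not m["known_polymorphism"]                                          # criterion 1
            and (m["changes_amino_acid"] or m["gene_type"] in ("tRNA", "rRNA"))  # criterion 2
            and m["heteroplasmy"] > 0.50                                         # criterion 3
            and m["crypt_cox_deficient"])                                        # criterion 4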
No evidence for purifying selection of somatic mtDNA point mutations in colonic crypts from PolgA +/mut mice
To determine whether somatic mtDNA mutations are subject to selective constraints in the ageing PolgA +/mut mouse colon, we compared our dataset to Mus musculus mtDNA mutations reported in GenBank and to mtDNA mutations transmitted through the germline of mtDNA mutator mice (Stewart et al., 2008b). In the PolgA +/mut mouse colonic crypts, the ratio of non-synonymous substitutions per non-synonymous site (dN) to synonymous substitutions per synonymous site (dS) gave a value of 0.783, providing no significant evidence of purifying selection against mutations causing amino acid substitutions in protein encoding genes (p = 0.3402, Fisher's exact test). dN/dS values of ∼1 signify an absence of selection on the analysed sequences. This was significantly different to mutations observed in the mtDNA mutator mouse germline (dN/dS = 0.310) and normal mouse strains (dN/dS = 0.0640), where mutations were more frequently synonymous changes (Fisher's exact test, p < 0.0001 in both cases) (Fig. 5). There was also a substantially higher number of mutations in the first and second codon positions of protein encoding genes of PolgA +/mut mouse colonic crypts (Fig. 6a) compared to the mtDNA mutator mouse germline (Fig. 6b) and normal mouse strains (Fig. 6c), further demonstrating an absence of evidence for purifying selection, as changes in the first and second codon positions frequently result in amino acid substitution (Stewart et al., 2008a,b). Furthermore, the random distribution of mtDNA point mutations observed in the PolgA +/mut mouse colonic crypts (Fig. 3a) was not observed in either the mtDNA mutator mouse germline (Fig. 6d) or normal mouse strains (Fig. 6e), which both show a non-random distribution of mtDNA mutations, with purifying selection against protein encoding mutations (p = 0.0005 and p < 0.0001 in mutator mouse germline and normal mouse strains respectively, χ² test).
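The selection comparisons reported here reduce to 2 × 2 contingency tests; the sketch below illustrates the form of such a test with invented counts (NOT the paper's data):

from scipy.stats import fisher_exact

# Invented counts of (non-synonymous, synonymous) changes.
crypt_counts = (70, 30)     # PolgA +/mut colonic crypts (hypothetical)
germline_counts = (35, 65)  # mutator-mouse germline (hypothetical)

odds_ratio, p = fisher_exact([list(crypt_counts), list(germline_counts)])
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.2e}")
# A small p indicates that the non-synonymous:synonymous ratio differs
# between the two datasets, i.e. different strengths of purifying selection.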
Random genetic drift can explain the expansion of mtDNA mutations in the colonic crypts of both PolgA +/mut mice and humans
Random genetic drift has been hypothesised to be a plausible explanation for the clonal expansion of mtDNA mutations in both mitotic (Coller et al., 2001; Taylor et al., 2003) and post-mitotic human cells (Elson et al., 2001), leading to respiratory chain deficiency. There are questions as to whether this could also be a plausible mechanism of clonal expansion in short-lived animals, as modelling studies have shown that in post-mitotic tissues such a mechanism cannot explain the experimental data (Kowald and Kirkwood, 2013). However, such modelling has not been carried out in mitotic tissues from short-lived animals, where it could be argued that many cell generations may elapse even within a short lifespan.
To determine whether the frequency of COX deficient crypts observed in the PolgA +/mut mice compared to wild-type was the result of accelerated clonal expansion by random genetic drift, a computational simulation was developed based on previously established models and experimental data (Coller et al., 2001; Elson et al., 2001; Taylor et al., 2003; Vermulst et al., 2007). The effects of increasing the mtDNA mutation rate on the frequency of COX deficient crypts in the PolgA +/mut and wild-type mice were simulated. For completeness, the frequency of COX deficient crypts in 3, 6, 9 and 12 month old PolgA mut/mut mice was calculated and these data were included in the simulation (Supplementary File 4). All simulation parameters are detailed in Supplementary File 5. The results of the simulations show that changing only the mtDNA mutation rate can lead to accelerated clonal expansion and the levels of COX deficiency we observe experimentally (Fig. 7). This suggests that, as in the human system, random genetic drift can explain the clonal expansion of mtDNA mutations in the colonic crypts of mice during ageing.
Discussion
Here we have demonstrated that PolgA +/mut mice display evidence of age-related clonally expanded somatic mtDNA point mutations in the colonic epithelium, similar to the findings in normal human ageing. We have shown that PolgA +/mut mice accumulate COX deficient colonic crypts with age and by 81 weeks of age their phenotype is very similar to the pattern of COX deficient colonic crypts found in 70-year-old human subjects (Taylor et al., 2003). We have identified that COX deficiency is associated with the clonal expansion of random, somatic mtDNA point mutations, and that there is no evidence of purifying selection in this tissue, similar to observations in ageing humans (Greaves et al., 2012). Furthermore, we have demonstrated that the random genetic drift model of clonal expansion, which has been shown to model clonal expansion in human colonic epithelium very well (Taylor et al., 2003), can also model the mitochondrial phenotypes observed in wild-type and PolgA +/mut mice.
We did observe differences in the types of mtDNA point mutations detected in PolgA +/mut mouse and human COX deficient colonic crypts, principally in terms of frameshift mutations. This was not surprising given the fact that the PolgA +/mut mice are engineered to have a proof-reading deficient polymerase, and in a recent study involving mtDNA mutator mice <1% of the mutations were insertions/deletions in a dataset of 442 mutations (Ross et al., 2013). Thus the moderate size of our mtDNA mutation dataset (241 mutations) and the relatively low insertion/deletion rate in PolgA +/mut mice renders it unlikely that we would observe any insertions/deletions. Nevertheless, the majority of the mutations in both the PolgA +/mut mouse and human colon were base transitions, supporting the hypothesis that somatic mtDNA point mutations occur early in life due to mtDNA replication errors and/or spontaneous cytosine deamination (Vermulst et al., 2007) and clonally expand throughout adulthood (Coller et al., 2001; Elson et al., 2001). Only a minor number of base transversions, which were principally A-T changes, were observed in both humans and mice, providing little evidence of 8-oxo-deoxyguanine mediated mutagenesis and oxidative damage.
This study confirms that mutations in genes other than MT-CO genes can result in COX deficiency, most commonly mutations in non-coding RNA genes (tRNAs and rRNAs), which can cause translational defects, affecting the abundance and activity of mtDNA encoded respiratory chain subunits. Interestingly, in both the PolgA +/mut mice and ageing humans, we also noted that some COX deficient crypts contained only one pathogenic mutation in a complex I gene. Respiratory chain complexes are believed to associate to form supercomplexes (Schagger and Pfeiffer, 2000) and in patients with mitochondrial disease, mutations in structural subunits of complex I have been associated with multiple respiratory chain defects (Hinttala et al., 2006). Furthermore, complex I defects are frequently associated with complex IV defects in the ageing human colon (Greaves et al., 2010). Thus it is possible that a single pathogenic complex I mutation may also affect the stability and activity of complex IV and play a causal role in the COX deficiency. We observed colonic crypts that did not contain a single pathogenic mtDNA point mutation. However, these crypts commonly presented with multiple heteroplasmic mutations and the observed COX deficiency may therefore have been caused by a combined effect of multiple heteroplasmic mutations.
In the PolgA mut/mut mouse germline there is strong purifying selection against mtDNA mutations in the first and second codon positions of protein encoding genes, as well as a reduction in tRNA and rRNA mutations (Stewart et al., 2008b). This was significantly different to what we observed in the PolgA +/mut mouse colon, confirming that mtDNA in germ cells and mtDNA in ageing somatic tissues appear to be subject to different selection pressures (Greaves et al., 2012). These differences are likely due to selection for mitochondrial fitness during oocyte development via the mtDNA bottleneck to protect the germline from deleterious mtDNA mutations (Stewart et al., 2008a), and a subsequent lack of selection in the somatic tissues, agreeing with the disposable soma theory of ageing (Kirkwood, 1977). Despite protective mechanisms in the germline, low level heteroplasmic mtDNA mutations are still inherited (He et al., 2010; Payne et al., 2013) and pre-existing germline mtDNA point mutations have been shown to accelerate the clonal expansion of somatic mtDNA point mutations in ageing (Ross et al., 2013). Our data lends further support to this, as increased levels of respiratory chain deficiency and somatic mtDNA mutations are observed in the colon of PolgA +/mut mice, which have a higher germline mutation frequency than normal ageing mice (Greaves et al., 2011).

Fig. 4. Gene type, location and clonal expansion of somatic mtDNA point mutations in the ageing PolgA +/mut mouse and human colon. (a) Gene location of mutations in individual mtDNA encoded genes in COX deficient colonic crypts. There was no significant difference between the PolgA +/mut mouse crypts and the human crypts (χ² test, p = 0.2204). (b) Types of changes observed in COX deficient colonic crypts. There was a significantly higher frequency of insertion/deletion mutations in the human crypts (χ² test, p < 0.001). (c) Percentage of COX deficient colonic crypts containing at least one pathogenic mtDNA point mutation in the PolgA +/mut mouse and human colon. There was no significant difference between the mouse and human crypts (χ² test, p = 0.8194).

Fig. 5. Genetic consequences of somatic mtDNA mutations in the PolgA +/mut mouse colon. The ratio of non-synonymous to synonymous mutations in PolgA +/mut mouse colonic crypts, the mtDNA mutator mouse germline and normal mouse strains. The ratio of non-synonymous to synonymous mutations is significantly higher in PolgA +/mut mouse colonic crypts compared to both the mtDNA mutator mouse germline and the normal mouse strains (p < 0.0001, Fisher's exact test).
Our model of mtDNA point mutation clonal expansion by random genetic drift has shown that a simple increase in the mtDNA mutation rate can explain the differences in the frequencies of COX deficient crypts we see between the wild-type and PolgA +/mut mice. These modelling simulations are in contrast to those carried out previously, which concentrated on post-mitotic tissues from short-lived animals and showed that random genetic drift is unable to explain the experimental data on incidence of COX deficient cells in rats (Kowald and Kirkwood, 2013). These apparently conflicting data may be attributable to tissue-specific differences. In a study using heteroplasmic mice containing two different mtDNA genotypes, in which the authors measured the relative contribution of each genotype in individual colonic crypts over time, individual crypts from 4 month old mice contained a mixture of the two genotypes; in the crypts of the 15 month old mice, however, two distinct crypt populations were observed, with one or the other genotype predominating. These data fit with a model of neutral drift (Jenuth et al., 1997); however, when they looked at liver, kidney, spleen and blood, the genotypes segregated in a very different manner, with strong tissue-specific selection for one genotype or the other. In addition, in post-mitotic tissues in these mice, there was no segregation of the mutations at all, with similar proportions of each genotype present throughout the life-course (Jenuth et al., 1997). Thus, whilst our data from the colon are compatible with random genetic drift as a mechanism for clonal expansion of mtDNA point mutations in both mice and humans, this may not be applicable to all tissues.
Conclusions
Here we show that clonal expansion of somatic mtDNA mutations without clear selective constraints leads to respiratory chain deficiency in the ageing of colonic epithelial tissue both in PolgA +/mut mice and humans. This indicates that the PolgA +/mut mouse may be a valuable model in which to study the cellular responses invoked by the mosaic tissue distribution of mitochondrial dysfunction that occurs with age. Given the fact that stem cell markers are much more well-characterised in mice (Barker et al., 2007; Itzkovitz et al., 2012), we believe that the PolgA +/mut mouse could provide information that is currently lacking about the effects of clonally expanded mtDNA mutations on human stem cell function and normal tissue homeostasis, in both ageing and age-related disease.

Fig. 6 (caption fragment): … (Stewart et al., 2008b) and (c) mtDNA sequences of mouse strains from GenBank, Mus musculus. To compare between the classes we took the sum of the 3rd codon position and standardised that as 1 (value/3rd codon sum). CP1-3: codon positions 1-3. (d) Positional mutation frequency (number of mutations/base pairs) of observed vs expected (based on a random distribution) mtDNA point mutations transmitted through the mtDNA mutator mouse germline (Stewart et al., 2008b) (p < 0.0001, χ² test) and (e) mtDNA sequences of mouse strains from GenBank, Mus musculus (p < 0.0001, χ² test).

Fig. 7. In silico modelling of the clonal expansion of mitochondrial DNA mutations in colonic crypt stem cells. Each symbol is the mean of 1000 simulated cells, each containing 200 mtDNA molecules with the mutation rates described, with a cell division rate of once per day for 3 years. To match the experimentally observed COX deficiency data from WT (Greaves et al., 2011), PolgA +/mut (Fig. 1) and PolgA mut/mut mice (Supplementary File 4), we performed a parameter scan of the mtDNA mutation rate, assuming >75% mutated mtDNA was enough to confer COX deficiency. The mtDNA mutation frequency was determined by averaging 1000 simulation runs of 1080 divisions (36 months). The proportion of simulated cells containing >75% mutant mtDNA (sim COX deficient cells) predicted by the model at the three mutation rates shown closely matches our experimental data from wild-type, PolgA +/mut and PolgA mut/mut mice.
Funding

HLB, LCG, and DMT were supported by the Newcastle University Centre for Brain Ageing and Vitality supported by BBSRC, EPSRC, ESRC and MRC as part of the cross-council Lifelong Health and Wellbeing Initiative (G0700718). CS and AZ were supported by the BBSRC (BB/F016980/1 and BB/H011471/1). TBLK was supported by the National Institute for Health Research Newcastle Biomedical Research Centre (17/08/2011) based at Newcastle Hospitals NHS Foundation Trust and Newcastle University. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health. DMT was also supported by the Wellcome Trust (Strategic Award 096919/Z/11/Z).
Author contributions
HLB and JBS carried out the experimental procedures; CS, AZ and TBLK carried out the in silico modelling; JBS carried out statistical analysis; LCG, DMT and NGL designed the study; HLB and LCG wrote the paper. All authors commented on and made intellectual contributions to the manuscript at all stages of writing.
Safe Vessel Operations – The Tacit Knowledge of Navigators
The collision regulations include several qualitative terms without providing guidance as to how these terms could be understood in quantitative terms. These terms must therefore be interpreted by navigators, which poses a problem for autonomous ships. This study extends the knowledge of how navigators interpret the collision regulations, with a specific focus on how they interpret the rule covering the requirement to proceed at a safe speed. It is a qualitative study based on interviews of a convenience sample of eight Norwegian navigators, with data analysed by systematic text condensation. Navigators characterise safe speed as a speed at which they have control. They do not look at the different factors mentioned in the collision regulations in isolation, but within the context of the situation. Determining the safe speed of a vessel is more complicated than made out in the literature. As autonomous ships will have to cooperate with conventional vessels, their programming must include the knowledge of how the collision regulations are interpreted by human navigators.
INTRODUCTION
Collisions were the second most common cause of shipping casualties and incidents in 2022 [1]. The Norwegian Maritime Authority - which collects incident statistics covering Norwegian vessels regardless of location and foreign vessels operating in Norwegian waters - reports that at least 16 collisions have occurred in every year since 2011 [2].
To prevent collisions from occurring, the International Maritime Organization (IMO) has published the International Regulations for Preventing Collisions at Sea 1972 (COLREGs). These rules apply to all vessels upon the high seas and in all waters connected therewith navigable by seagoing vessels [3]. As such, maritime autonomous surface ships (MASS) will also be required to follow these rules.
Having entered into force in 1977, the COLREGs were presumably written without modern autonomous cargo vessels in mind. They include various qualitative terms - such as "early", "substantial" and "safe" - without providing any information as to how these terms could be understood in quantitative terms. The result is a rule system that relies heavily on the interpretation of the navigator. While ambiguity is a desired trait of the COLREGs (a completely prescriptive and rigid rule system would be infinitely complicated [4]), it has led to a situation where there may be a large discrepancy between the legal interpretation of the COLREGs and the conventional way navigators avoid collisions [5]. In practice this means that navigators are pressured both to follow convention, in order to avoid collision, and the law, to avoid prosecution should anything go wrong [5].
This distinction between the legal interpretation and convention was highlighted in a study by Dreyer [6], where it was shown that vessel speeds predicted by legal interpretation of the COLREGs and actual observed vessel speeds did not align: The idea put forward by legal scholars that visibility is the most important factor when it comes to safe speed [7][8][9] was not mirrored in the data of actual ship behaviours.
As collision avoidance between vessels is seen as a game of coordination, where navigators on different vessels have to independently choose mutually compatible strategies [5], the control system of a MASS must not only be aware of the legal interpretation of the COLREGs, but also of the conventional way navigators apply the rules in practice. Indeed, if MASS are "too strict" in following the legal interpretation of the COLREGs they might - at times - jeopardize the safety of a ship encounter [10].
As a better understanding of the conventional way navigators apply the COLREGs in practice is necessary, this study aims to extend the knowledge of how navigators interpret the rules, with a specific focus on how they interpret the rule covering the requirement to proceed at a safe speed.
BACKGROUND
Rule 6 of the COLREGs deals with safe speed. It requires that "every vessel shall at all times proceed at a safe speed so that she can take proper and effective action to avoid collision and be stopped within a distance appropriate to the prevailing circumstances and conditions". To determine what speed may be considered safe, the COLREGs provide a number of factors that shall be among those taken into account, including visibility, traffic density, manoeuvrability of the vessel, background light, the state of wind, sea and current, the proximity of navigational hazards and the draught in relation to the available depth of water [11].
METHOD

Participants
A purposive sample of two fast ferry captains and six maritime pilots (eight men, no women) aged 33-61 years working in Norway participated in the study. The all-male sample largely reflects the situation in the maritime industry, where the majority of seafarers are men [12]. The strategy for purposefully selecting the study subjects was influenced by homogeneous sampling (in terms of professional background) and convenience sampling [13]. The concept of saturation was considered when deciding on the number of interviews to conduct in this study [14]. Saturation is achieved "when gathering fresh data no longer sparks new theoretical insights, nor reveals new properties of your core theoretical categories" [15]. Following the eight semi-structured interviews that were conducted, saturation was achieved.

The professional seafaring experience of the participants ranged from 8 to 38 years. Seven participants had 21 years of experience or more.
Interview Procedure
The author conducted the interviews. One interview was conducted via the videotelephony software Zoom Meetings, one was conducted in a meeting room at the interviewer's workplace, and the rest were conducted at the homes of the interviewees. The interviews lasted from 58 minutes to 2 hours and 6 minutes. A semi-structured interview guide was used as a tool to obtain detailed descriptions of the seafarers' experiences, in order to grasp the tacit knowledge of seafarers that is so important in ensuring safe vessel operations. The main questions were: How do you ensure the safe and smooth operation of your vessel? What factors go into your decision for setting your vessel's speed? How do you determine safe speed? Could you rank the influencing factors by importance?
All interviews were recorded and transcribed.
Data Analysis
The data collected in this study was analysed by means of systematic text condensation [16]. The approach is described as a four-step procedure: (1) reading the transcripts to get an overall impression and identifying preliminary themes; (2) extracting meaning units from the transcripts and sorting them into codes and code groups; (3) condensing the meaning within each code group; (4) summarizing the content into meaningful descriptions [16,17]. The author conducted all steps of the analysis. In this regard it must be noted that the author's background as a navigational watch officer with knowledge and experience within the field has influenced the process of collecting and interpreting data. As the final descriptions were developed and refined over time, the interview transcripts were read repeatedly to ensure that the constructed descriptions were grounded in the empirical data.
Ethical Considerations
The Norwegian centre for research data approved the study. The interviewees received an information letter and provided consent to participate. They were informed that they could withdraw from the study at any time (until publication) without providing any reason. Data was treated confidentially and information about the seafarers is presented in such a way that they are not identifiable.
FINDINGS
It was found that navigators predominantly experience a vessel's speed to be safe when they feel comfortable with the ship and feel that they are in control. While COLREG rule 6 - the rule covering the safe speed requirement - mentions several factors, and legal scholars have pointed to visibility as being the most important factor, the navigators had a different view. Navigators highlight that the factors affecting safe speed are highly dependent not only on one another, but also on the context of the situation. Indeed, as the context is often confused and complicated, ranking different factors by importance will likely be an oversimplification that does not cover all scenarios. While visibility is seen as an important factor, the impact visibility has on "safe" speed depends on the specific circumstances of the situation. These findings are elaborated below. The findings include authentic illustrative quotations (AIQ), which are not necessarily direct citations but descriptive synthesized quotations that aim to grasp the essence of the opinions voiced by all interviewees [16,18].
Ensuring Safe and Efficient Navigation
When asked how they ensure safe and efficient navigation, interviewees responded by first mentioning one of the following two concepts: comfortableness with the vessel, and knowledge of the area. How comfortable they are with the vessel depends both on the manoeuvrability of the vessel itself and on outside factors affecting the vessel. When the navigator is comfortable with the vessel, less attention is required to keep the vessel on course. This frees up mental capacity that can be focused on other important tasks, such as overseeing the traffic situation.
If you are very comfortable with the vessel, and you encounter bad weather, then you do not need to use so many brain cells and energy on thinking about how to turn the vessel.
The same principle applies to being comfortable with the area the navigator is navigating in. Being well versed in the area includes being aware of the safe path(s) through the area, navigational aids and dangers as well as areas where encountering other traffic is likely.
If you know the area, the way, the courses, and the navigational aids, then you can function as a human sensor: even if there is a technical failure in the vessel's navigation equipment, you should still be able to find your way.
Actively utilising the available navigational aids means that navigators can traverse an area without having to constantly check the (electronic) navigational charts or rely on technical support. This both introduces redundancy and frees up mental capacity which the navigator can then focus on other important tasks.
The Meaning of Safe Speed
When it comes to safe speed, it was difficult to get a clear definition of the concept. During some interviews it seemed as if the interviewees' understanding of the concept was inconsistent.
Safe speed is a speed which allows you to stop before you get into a dangerous situation. If something suddenly appears in front of you, you must be able to stop. This would mean that you should not be underway when visibility is so poor that you cannot see past your own bow. But in reality, safe speed is so individual that it is difficult to define properly. We go through tight waterways with full speed because we feel like we are in control of the vessel. So maybe safe speed really is the speed that you as the navigator feel safe in.
The above AIQ illustrates how the interviewee initially thought of the legal understanding of the term safe speed, and later adjusted the meaning according to how they apply it in practice. This gap between legal interpretation and the conventional way seafarers determine safe speed was pointed out specifically by another interviewee.
Safe speed is quite juridical … I don't know, but that term is perhaps very broad. When I think about setting a speed that is safe, I don't usually think about the COLREGs. What I'm concerned about is that the vessel steers and moves as I want it to, and that I feel confident that I can navigate safely.
The importance of keeping control of the vessel and the situation was echoed by the majority of the interviewed navigators. Factors such as manoeuvrability of the vessel, traffic situation, external environmental factors and navigation area play a large role in this regard.
The most important thing is that you feel in control of the vessel and the situation around you. Going at full speed reduces your options and means you require more room to manoeuvre. Reducing the vessel's speed generally increases your manoeuvrability and provides additional flexibility. It also means that you have more time to evaluate and execute the correct choices. But be careful not to reduce your speed too much - you will sacrifice your steering and lose control.
As the navigators tightly coupled safe speed to the feeling of being in control, they stated that for any situation there is no such thing as the one correct safe speed.
Safe speed is an unclear term. In the same situation one navigator may proceed at a safe speed of 10 knots, while another proceeds at 5 knots. It will be wrong to set any boundaries, as that may force some navigators to proceed at a speed that they do not feel comfortable with -which would also be dangerous. Maybe that is why the term is a bit unclear -to give navigators some leeway to navigate in a way that is most comfortable to them.
Standard Speed and when to Deviate
When setting the vessel's speed in practice, the interviewees revealed that full speed ahead is the default. The speed generally only gets adjusted when the navigator deems this necessary to stay comfortable and in control.
If there is no traffic you go with full speed. Sometimes you meet captains who want to reduce in certain areas, and that wish gets respected.
However, some interviewees shared that a reduction of speed may sometimes be a bureaucratic process that might involve repercussions. As a result, they sometimes feel pressured to proceed at speeds that they themselves deem unsafe. Examples of these situations were coupled solely to vessels with passengers on board.
In the passenger ferry industry, we proceed at high speeds because we must keep a schedule. People expect to arrive on time. There is a conflict of interest here: We don't want accidents, but we also have an obligation to get people from A to B on time. In practice this means that you only reduce speed for very special things - and as a result we don't reduce speed more than a couple of times a year. But you can see the same happening with cruise ships - 300-metre-long vessels going through the fjords at 25 knots, even in the middle of the night, just because the passengers should wake up in a new place the next morning. It's completely wild.
Consideration for others was also mentioned as a reason for reducing the vessel's speed. A vessel's wake can cause problems for other vessels, particularly small craft and moored vessels, and navigators highlighted that they would reduce their speed in particular areas to reduce the size of their wake - and thereby keep any disturbance to others to a minimum.
Specific Moments to Consider when Setting a Safe Speed
In the following subsections, different specific moments that navigators consider when setting a safe speed will be presented. This illustrates both what navigators deem important to consider and which conclusions navigators draw from the information they gather. When asked if there is some sort of hierarchy that makes some moments more important than others, some initially pointed to a specific moment that they deemed most impactful. This quickly changed, however, and the interviewees pointed to how the factors are dependent on one another, and how the importance of the different moments depends on the context. Because of the many dependencies, interviewees were critical of the possibility of creating a general safe-speed flowchart which could be followed to determine the safe speed in a particular situation. One interviewed navigator voiced restrained optimism for the possibility of creating such a flowchart for one specific vessel in one specific location, but also mentioned that a general flowchart would be complicated as there is so much variance in how the different factors affect which speed would be safe.
Is Slower Safer?
As mentioned in 4.2 above, the most important thing about safe speed is being in control. So, while reducing speed gives the navigator more time to evaluate and execute their options, it also amplifies the effect of external weather factors - such as wind and current - on the vessel. After reducing the vessel's speed below a certain point, most vessels will even lose their ability to manoeuvre. As a result, the interviewed navigators disagree with the sentiment that a reduction of speed necessarily leads to a safe speed. Indeed, examples of the opposite were shared by many interviewees.
In some of the Norwegian ports there are speed restrictions limiting speed to 5 knots. For many vessels, going at a speed of less than 5 knots in these ports is unsafe. Fast ferries are much easier to steer when going 10 to 12 knots, and some of the old cruise ships do not swing - but only go straight ahead - when going at less than 10 knots. The same applies for some of the other more confined areas - when you go too slow, the wind and current take you and you run aground. Reducing to zero in these areas would be lunacy - so personally I like to keep a little higher speed to be in control of my own fate.
Visibility
Visibility is mentioned as the first factor to consider in the COLREGs and is generally seen as the most important factor for the determination of safe speed by the legal community, where it is stated that it is not safe to go fast when visibility is poor. But when is visibility poor? While not all navigators provided the same values, they seemed to agree that visibility of more than 1 nautical mile can be considered good, that between 5 cables and 1 nautical mile they start to raise their alertness, and that below 5 cables they would consider reducing speed. Additionally, the interviewees highlighted the following concepts as important: the size of the vessel you are on, the amount of navigable space around you and the reason for the reduced visibility.

900 metre visibility is completely fine on a vessel that is 100 metres long, but for a vessel that is 300 metres that same visibility does not seem so fine anymore. But it also depends on the area you are in: In open waters you have so much room to manoeuvre that a reduction in visibility really doesn't have an effect anymore - especially since we have such good equipment. With radar you can see even in thick fog. The only time where radar cannot help you in reduced visibility is when you encounter wet snow - then you get false echoes and cannot trust the radar picture.
The above AIQ highlights how navigators can - under specific circumstances - deem a visibility range of 900 metres as completely fine. The interviews highlighted that the importance of visibility is not independent, but instead depends on the context as well. Only when other safety margins are reduced - such as when navigating in a narrow channel or in an area of high traffic - would navigators start to adjust their speed. If, however, they encountered reduced visibility in open waters with no other traffic, they would continue proceeding at their normal speed. In general, the interviewed navigators mentioned visibility less with regards to collision avoidance, and more with regards to keeping the vessel on track. They expressed their satisfaction with both the available and planned aids to navigation along the Norwegian coast and stated that they used classical, i.e. visual, navigation methods as their preferred way of navigating along the coast. A reduction in visibility would mean that they would need to switch to technical navigation methods instead.
You can obviously use the chart and radar to sail in this area, but we mostly use these tools to check for other traffic. The navigation happens mostly by eye: We use the aids to navigation that we have along the coast, as for example the sector lights. That is a very pleasant way of navigating. But when visibility is poor, we must switch to technical navigation. Then we must allocate more time to utilizing those tools and have less time for looking outside the window.
The danger of not being able to detect another vessel in poor visibility was not generally seen as great enough to warrant a reduction of speed no matter the context. Furthermore, it was pointed out that it is generally smaller pleasure craft that are most at risk of not being discovered in bad weather -and that these would generally not be out on the water in bad weather.
But this is a type of risk assessment. When it is dark, visibility is low and there are gale force winds that mean that I have a bit of wave clutter on the radar, then I do not expect small vessels to be out on the water. And then I don't reduce speed just because of the off chance that they could be there.
The above AIQ highlights the kind of risk assessment that takes place. While in that instance the navigator explained why a reduction of speed may not be necessary, navigators also noted that if they pass areas where they know the likelihood of encountering small vessels to be higher, they would either try to take a different route or reduce speed preemptively.
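To make the contrast between the rule-as-written and the rule-as-practised concrete, the sketch below encodes the interviewees' stated thresholds (raised alertness below 1 nautical mile, possible speed reduction below 5 cables) together with the contextual factors they described. All function and parameter names here are illustrative assumptions, not an existing system; a real MASS decision function would need far richer context.

```python
from dataclasses import dataclass

NM = 1.0      # nautical miles
CABLE = 0.1   # 1 cable = 0.1 nautical mile

@dataclass
class Context:
    visibility_nm: float    # current visibility
    open_water: bool        # enough room to manoeuvre?
    traffic_nearby: bool    # other vessels observed or expected
    radar_reliable: bool    # e.g. False in wet snow (false echoes)

def speed_action_rule_as_written(ctx: Context) -> str:
    """Naive reading of COLREGs rule 6: visibility alone drives speed."""
    if ctx.visibility_nm < 5 * CABLE:
        return "reduce speed"
    if ctx.visibility_nm < 1 * NM:
        return "raise alertness"
    return "keep speed"

def speed_action_as_practised(ctx: Context) -> str:
    """Sketch of the interviewees' contextual reasoning: visibility only
    matters when other safety margins are reduced as well."""
    if ctx.visibility_nm >= 1 * NM:
        return "keep speed"
    # Open water, no traffic and trustworthy radar leave enough margin
    # to continue at normal speed despite reduced visibility.
    if ctx.open_water and not ctx.traffic_nearby and ctx.radar_reliable:
        return "keep speed, switch to technical navigation"
    if ctx.visibility_nm < 5 * CABLE:
        return "reduce speed"
    return "raise alertness"

fog_open_sea = Context(0.4, open_water=True, traffic_nearby=False, radar_reliable=True)
print(speed_action_rule_as_written(fog_open_sea))  # -> reduce speed
print(speed_action_as_practised(fog_open_sea))     # -> keep speed, switch to technical navigation
```

The same input produces opposite actions under the two readings, which is exactly the gap between work-as-imagined and work-as-done discussed later in this paper.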
Traffic
While there is generally less traffic in Norway than in other parts of the world, traffic was mentioned as an important factor throughout the interviews.
The interviews showed that dense traffic is a somewhat vague concept that depends on a lot of other factors. Firstly, not only the number of vessels in the area is important, but also how they are positioned and how they are manoeuvring. Traffic that is organised in a way that minimises encounters - as for example in a traffic separation scheme - would be considered less dense than traffic that is unorganised. Additionally, navigators described that - when compared to open waters - fewer vessels were required in confined waters for them to feel as though traffic was dense. The types of vessels encountered also influence the perception of traffic density - leisure vessels are seen as less predictable and therefore more difficult to collaborate with than vessels with professional crew on board. Finally, traffic is dense or not dense in relation to the vessel you are on yourself. If you experience numerous vessel encounters from different directions, the manoeuvrability of your vessel will determine how constrained you will feel. As a result, traffic density in the same situation might be considered low when steering a highly manoeuvrable vessel, and high when steering a vessel that is hardly manoeuvrable at all. Overall, traffic is not considered to be dense as long as navigators feel comfortable in their ability to keep clear of all vessels. The more difficult it gets to understand and react to other traffic, the more navigators feel that traffic is becoming dense.
I feel traffic to become dense when I feel that I cannot steer away from the different vessels with my standard speed in a proper manner.
Interestingly, the issue of traffic was generally not discussed in terms of what to do when you encounter dense traffic, but more in the way of how you can actively avoid getting into situations with dense traffic and numerous close quarters situations.
I will always try to avoid getting into situations where I will experience multiple vessel encounters. Instead, if I notice that I am running into such a situation, I will rather reduce speed ahead of time, wait for the situation to clear, and then continue with normal speed. If I were to continue and then reduce when encountering the dense traffic, my reduction of speed introduces new dangers, such as drift. In an area where there is little space and maybe current this introduces a new danger in itself -and the last thing I want to do in an already difficult situation is to add more distracting factors.
Looking ahead like this means that navigators look at traffic density not only reactively, but proactively. They proactively look out for situations where dense traffic may occur, and try to either not get into that situation, or come prepared. This tendency for proactivity was also highlighted by navigators stating that they will not only consider traffic that they have observed, but also traffic that has not been observed yet.
There are areas where the likelihood of encountering other traffic is just so much higher. In open waters we encounter fewer vessels than when passing ports and cities. And then there are times where we know that more pleasure craft will be on the water -such as the national day.
Area
For the area factor, both the proximity to shore or other navigational hazards and the available depth of water were combined. The most important aspect of the area is that the navigator must be comfortable navigating in it. Furthermore, the area plays a large role in providing context: the effects of both visibility and other traffic were enhanced when they occurred in a confined area.
The interviewees working onboard fast ferries basically did not see proximity to shore or other navigational hazards as problematic and stated that they would proceed at full speed even when close to shore.
There are times where we have rocks and shore within 5 metres of the side of the vessel, but we still go with full speed. Tight spaces by themselves do not warrant a reduction in speed.

This is likely due to the generally supreme manoeuvrability of the fast ferries employed in Norway. The maritime pilots, who work on many different types of vessels, had a more nuanced view. The pilots highlighted the superiority of a U-turn over a stopping manoeuvre when encountering a dangerous situation. As a result, the consensus was that the border between open and confined waters lies where the vessel can safely execute a U-turn. A differentiation between open and confined waters therefore depends on the manoeuvrability of the vessel involved. However, from experience, the maritime pilots stated that most vessels below 140 metres in length, having 5 cables of water around them, would be navigating in what they would consider to be open water.
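As a rough illustration of the pilots' rule of thumb, the snippet below classifies water as open when the vessel could execute a U-turn, falling back on the interviewees' empirical cut-off (vessels under 140 metres with 5 cables of clear water) when the turning diameter is unknown. The names and the fallback thresholds are assumptions drawn only from the statements above.

```python
from typing import Optional

CABLE_M = 185.2  # one cable is about 185.2 metres (0.1 nautical mile)

def is_open_water(vessel_length_m: float, clear_water_m: float,
                  u_turn_diameter_m: Optional[float] = None) -> bool:
    """Open water = enough room to safely execute a U-turn.

    If the actual turning diameter is unknown, fall back on the pilots'
    empirical cut-off: vessels under 140 m with 5 cables of clear water
    around them are, in their experience, in open water.
    """
    if u_turn_diameter_m is not None:
        return clear_water_m >= u_turn_diameter_m
    return vessel_length_m < 140 and clear_water_m >= 5 * CABLE_M

print(is_open_water(120, 1000))  # True: under 140 m with ~5.4 cables of room
print(is_open_water(300, 1000))  # False: outside the rule of thumb
```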
When it comes to the effect the depth of water has on safe speed, the fast ferry navigators stated that the waters off the Norwegian coast are generally so deep that it does not have an effect. While some of the maritime pilots highlighted the increase in turning circle and stopping distance in shallow water, the interviewees indicated that they would reduce speed in shallow areas with the sole intention of reducing the effect of squat and the resulting possibility of touching the bottom.
Wind, Waves and Current
Interviewees stated that wind is a factor of great importance that needs to be taken into account during nearly all operations. This includes not only the wind speed, but also the wind direction. Wind is seen as more problematic when blowing perpendicular to the vessel's course, and less problematic when blowing parallel to the vessel's course. The effect of wind speed on safe vessel speed is generally seen to be inverted, i.e. high wind speeds require high vessel speeds. This is because the drift-inducing effect wind has on a vessel is larger at lower speeds, and smaller at higher speeds.
It is wind that we struggle with the most. Wind causes you to drift, and if you then reduce speed you drift even more. That is why you need high speed in high winds.
Reduction of drift is important for several reasons. If you are in a tight space, the introduction of drift makes the space even tighter, as the leeway angle required to keep the vessel on course means that the vessel takes up more space in the waterway. The leeway angle increases with increased drift or reduced vessel speed, up to a point where the vessel will not be able to keep on track and risks being pushed aground. Finally, a large drift may lead other traffic to become uncertain about your intentions, as illustrated by the AIQ below:

Our own leeway angle can, in some places, create uncertainty with regards to my intentions. So that if I compensate for drift by adjusting my course, it can look like I'm steering straight towards someone - even though I'm not. I want to avoid creating wrong signals - or signals that can be misunderstood - at all times.
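The inverse relationship between speed and drift that the navigators describe follows from simple vector geometry: the leeway angle needed to hold a track is roughly the arctangent of the drift speed over the speed through the water, so halving vessel speed roughly doubles the angle. The sketch below works this out with assumed, illustrative numbers.

```python
import math

def leeway_angle_deg(drift_speed_kn: float, vessel_speed_kn: float) -> float:
    """Leeway angle needed to hold the intended track, from the ratio of
    the (wind-induced) drift speed to the vessel's speed through the water."""
    return math.degrees(math.atan2(drift_speed_kn, vessel_speed_kn))

# Assume a beam wind inducing 2 knots of drift:
for v in (16, 8, 4):
    print(f"{v:2d} kn -> leeway {leeway_angle_deg(2, v):4.1f} deg")
# 16 kn -> leeway  7.1 deg
#  8 kn -> leeway 14.0 deg
#  4 kn -> leeway 26.6 deg
```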
The effect of waves on safe speed was generally not connected to collision avoidance, but rather to the reduction of forces that may cause damage to the vessel. Interviewees therefore mentioned that high waves would cause a reduction in speed to reduce the chance of damage to their own vessel.
Interviewees did not mention current as a factor that induces drift but were more focused on current that sets either in the same or the opposite direction to the vessel's course. In this regard the navigators highlighted that current setting against the vessel's course is generally seen as having a positive influence on control over the vessel, while current setting with the vessel has a negative influence on control over the vessel. Vessels that proceed against the current might be able to reduce their speed over ground to zero while maintaining enough speed through the water to keep manoeuvrability. On the other hand, it is virtually impossible to come to a stop when the current sets in the same direction as the vessel, as the vessel will lose steering due to low speed through the water before ever reaching zero speed over ground. With this being said, navigators still stated that in practice current only has an impact on their alertness, and not on their selection of speed.
Background Light
Background light had two meanings for the interviewees - it could come from both inside and outside the navigational bridge. In either case, it is seen as a disturbance and - where possible - steps were taken to reduce its occurrence. This includes asking others to switch off any background light on the bridge, as well as a case where navigators contacted a quay to ask that a newly installed floodlight be modified in a way that made it less interfering.
Navigators stated that the disturbing effect of background light is largest when navigating in unknown areas, and is significantly reduced both by modern support technology such as radar and AIS and when a navigator knows the area so well that he is able to quickly filter out background light and focus on the lights that are important for safe navigation.
In practice this means that background light influences safe speed only when the navigator does not feel comfortable with the situation.
In a normal setting when experiencing background light, the radar image gives me such a good picture of where I am, where I am going, where I am going to turn, and which boats are around that it does not affect my set speed.
DISCUSSION
The results presented above show that the real-world problem of determining safe speed is too complex to be adequately captured by overly simplistic descriptions. The interviews show that the different factors affecting safe speed cannot be looked at in isolation, but within the context in which they occur on the water. Navigators therefore do not determine safe speed by following rule 6 of the COLREGs word for word, taking into account each factor in order, but instead interpret it as a goal-based rule. Navigators equate the requirement of proceeding at a speed where they can take proper and effective action to a speed where they feel in control and adjust their speed accordingly. Importantly, navigators do not only focus on being in control in the current situation, but also in the foreseeable future. This understanding is exemplified by navigators mentioning reducing speed in open waters and good conditions to avoid meeting other vessels in confined waters with possibly less favourable conditions.
The Gap Between Work-as-Done and Work-as-Imagined
This way of determining safe speed contrasts with the way legal scholars have approached this problem, taking each factor by itself and interpreting its effect on safe speed in isolation. This indicates a difference between the work-as-done by the navigators and the work-as-imagined by theorists and legal scholars, and is in line with the findings of a study in which the speeds of vessels in different visibility conditions were analysed [19]. That study found that, contrary to the legal understanding of "safe speed", vessels did not significantly reduce their speed in poor visibility. A large distance between how work is imagined and how work is actually done indicates an ill-calibration at the blunt end to the challenges and risks encountered at the sharp end of real operations [20]. This distance might be attributed to legal scholars having a worldview where safety and compliance with rules are the only factors that affect speed. In reality, it is widely known that "human behavior in any work system is shaped by objectives and constraints which must be respected by the actors for work performance to be successful" [21]. These objectives and constraints can often be contradictory. In practice, the interviewees have shared how the objective to proceed at a safe speed may clash with the objective to follow the rules (as with the case where some speed restrictions in place in Norway would require navigators to proceed at unsafely slow speeds), or with the economic objectives of the shipping company (as with the case where navigators are pressured to proceed at high speeds in order to stay on schedule).
With collision avoidance being a game of coordination, where navigators on different vessels have to independently choose mutually compatible strategies [5], it is feasible to predict that MASS designed according to how work is imagined, and not how work is done, will have trouble coordinating with conventional vessels. Furthermore, as informal work systems and adaptations often develop when humans come into contact with systems designed according to work-as-imagined [22], one can expect seafarers on other vessels to develop new ways of interacting with MASS that are designed according to work-as-imagined. These new habits may degrade safety and cause new types of hazardous situations in the shipping routes and fairways [23].
As the ability to elicit and represent the knowledge of experts is a growing concern in systems design [24,25], the results of this paper can be seen as an exchange of knowledge between navigators and the designers of MASS, hopefully contributing to bridging the gap between work-as-imagined and work-as-done.
Limitations
The findings and generalisability of this study must be seen considering some limitations. The informant group is made up of a limited number of navigators that were selected as part of a convenience sample. Only Norwegian navigators were included in the study, leaving the possibility that navigators of other countries interpret the rules in a different way.
Exploring the possibility of different interpretations of the COLREGs by navigators educated in different countries is something that could be looked at in future research. However, considering the international nature of the maritime industry, where navigators work with international colleagues and are subject to international regulation, the conclusions drawn may still have broad relevance and should be further investigated to find whether they resonate with navigators in general.
CONCLUSION
The objective of this study was to extend the knowledge of how navigators interpret the rules, with a specific focus on how they interpret the rule covering the requirement to proceed at a safe speed. Although this was a small-scale qualitative study, it yielded valuable insight into the tacit knowledge of navigators and how they interpret the requirement to proceed at a safe speed.
It was found that the most important aspect for navigators with regards to safe speed was the feeling of being in control. The major factors impacting this feeling were the navigator's comfort with both the vessel and the area they are navigating in.
The navigators' interpretation of the factors mentioned in COLREGs rule 6 shows how navigators must determine the safe speed in a real world that is complex, and where each factor must be seen in relation to the context of the overall situation. This breaks with the view of how legal scholars approach this problem, where each factor is analysed in isolation. While legal scholars conclude that it is unsafe to proceed at high speeds in low visibility, navigators have no problem with proceeding through fog at high speeds, given that they are in open waters with no other traffic around.
Interesting take-aways include the fact that a slower vessel speed is not safer by default. Indeed, too low a speed can also be an unsafe speed. Another interesting take-away is that navigators include future situations in their determination of safe speed in the present. Navigators are aware of situations where a change in speed does not affect the safety of navigation in the present but has an impact on the safety of navigation in the future. An example here would be navigators reducing their vessel's speed in open waters ahead of a confined waterway, with the intention of letting another vessel leave the waterway before entering the waterway themselves.
The conclusion of this paper is that determining a safe vessel speed is more complicated than made out in the literature. As the MASS of the future will have to collaborate with conventional vessels, it is important to ensure that MASS are not programmed with only work-as-imagined in mind, but also by considering the work-as-done in practice.
A Survey of the Practice and Perspectives of Chinese Acupuncturists on Deqi
Deqi refers to the special sensation and reaction sensed mainly by both acupuncturist and patient when a needle is inserted into an acupoint, and it is considered vital to achieving the acupuncture effect. For the acupuncturist, it is important to judge and control Deqi in clinical practice. However, nowadays attention is paid mainly to patients' feelings rather than acupuncturists'. We therefore conducted this survey to determine acupuncturists' perspectives on Deqi and to further find the proper way to induce Deqi. A total of 250 questionnaires were sent out to acupuncturists and 202 (80.8%) were returned. According to the results, most acupuncturists believe that Deqi is vital to obtaining preferable clinical effects. The reliability of acupuncturists' Deqi sensations ranks as sinking > tightening > astringent. The reliability of patients' Deqi sensations ranks as soreness > numbness > distention > heaviness > pain. The reliability of influential factors ranks as manipulation > specificity of acupoint > TCM constitution > disease status > patient's psychological condition > acupuncturists' psychological guidance > clinical environment. This study is believed to provide additional evidence for future qualitative and quantitative research on Deqi.
Introduction
Acupuncture is one of the major treatment modalities in traditional Chinese medicine (TCM). Over more than 2500 years of practice, it has been widely accepted by the general population in China for its curative effect, wide range of indications, simplicity and safety in practice. It has gradually become a global therapeutic method in recent decades. According to the theory of traditional acupuncture, the effect of acupuncture is achieved by regulating the channel Qi. Therefore, Qi arrival (Deqi), also known as the needling sensation, is considered to be closely related to acupuncture efficacy [1]. In clinical practice, Deqi is measured by the sensation in the acupuncturist's fingers and the patient's reaction. Generally speaking, when Deqi occurs, an acupuncturist may feel sinking (Chen), astringent (Se) and tightness (Jin) around the needle with his/her fingers. Meanwhile, the patient may sense soreness (Suan), numbness (Ma), distention (Zhang) and heaviness (Zhong) around the acupoint; sometimes, the Deqi sensation can be different, such as coldness, warmness, pain, an electric-shock feeling, and so forth. However, the intensity and properties of acupuncture Deqi may differ as a result of the different physical and psychological conditions of individuals, which makes it difficult to apply comprehensively in clinics.
The Deqi sensation scale, an important qualitative and quantitative measuring tool for Deqi, has recently been applied in acupuncture clinical trials and mechanism studies [2][3][4][5]. In 1989, the Vincent Deqi scale was developed with 20 adjectives based on the McGill pain questionnaire, followed by the Park Deqi scale and the MacPherson Deqi scale [6,7]. However, these scales mainly focused on the patients' sensations, and no attention was paid to the Deqi sensation of the acupuncturist. Further, the Southampton Deqi scale was drafted based on the suggestions of both patients and acupuncturists but, according to a German trial, failed to discriminate the noxious pain sensation from Deqi sensations [8,9]. The Massachusetts General Hospital acupuncture sensation scale (MASS), which was modified from the subjective acupuncture sensation scale (SASS) [10], comprises various needling sensations and measures the spreading of Deqi and the patient's anxiety during needling. It has already been validated as a reliable and valid tool to measure Deqi in healthy young Chinese people [11]. However, no standardized, valid and reliable Deqi scale has yet been established due to the lack of sufficient evidence.
In TCM classics, the process of achieving Deqi is also called "Qizhi," which means Qi arrival or Qi obtained through the acupuncturist's manipulations after needle insertion. One of the chief indicators of achieving Deqi is the sensation change felt by the acupuncturist's fingers. The activation, retention and spreading of Deqi are closely and directly related to acupuncture manipulation techniques, yet recent literature indicates that research attention has been paid merely to the needle sensations of patients or healthy subjects, while ignoring the Deqi sensation felt by acupuncturists. Therefore, we conducted this survey by consulting acupuncturists engaged in clinical practice in Chinese hospitals, to explore acupuncturists' perspectives on Deqi and to further understand whether views differ among acupuncturists with different levels of experience.
Methods
The questionnaire for measuring acupuncture Deqi in acupuncturists was initially designed by 2 senior acupuncture experts, together with 2 clinical acupuncturists and 2 doctoral candidates. All participants in the questionnaire design have rich experience in acupuncture treatment and skillful acupuncture manipulation techniques. The questionnaire presented in this paper is a real-name questionnaire in Chinese that was finalised after 5 rounds of revision based on acupuncture experts' suggestions and comments. It contains three parts with 16 items relating to acupuncture Deqi. According to the latest literature [12], 90% of acupuncturists thought Deqi was related to clinical efficacy. Based on the assumption that this proportion would reach 96% in this study, a sample size of at least 200 is needed to achieve 90% power to detect statistical significance using a two-sided binomial test. The target significance level was 0.05. Thus, 250 acupuncturists were needed, assuming a 25% dropout rate. A total of 250 questionnaires were sent out to 250 acupuncturists, regardless of age or gender, in 44 hospitals by express delivery. The included hospitals were geographically distributed in Zhejiang, Jiangxi, Hunan, Shanxi, Qinghai, Sichuan, Yunnan and Guizhou provinces and the Beijing, Tianjin and Chongqing municipal cities. The rationale for choosing hospitals in different areas in different directions of China was to avoid deviations caused by the dominance of a specific acupuncture theory or school in one region. Among them, there were 34 top-grade hospitals (77.2%), most of them TCM hospitals, 7 second-grade hospitals (16%) and 3 community hospitals (6.8%). All questionnaires were completed by the included acupuncturists independently.
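As a sketch of the stated power calculation (a two-sided test at α = 0.05 with 90% power, detecting 96% against the literature value of 90%), the normal-approximation formula for a one-sample proportion reproduces a figure of roughly 200, which the authors then inflate by the 25% dropout allowance to reach 250. The code below is an illustration of that arithmetic, not the authors' actual software.

```python
import math
from scipy.stats import norm

p0, p1 = 0.90, 0.96        # literature proportion vs assumed study proportion
alpha, power = 0.05, 0.90

z_a = norm.ppf(1 - alpha / 2)   # ~1.96 for a two-sided test
z_b = norm.ppf(power)           # ~1.28 for 90% power

# Normal approximation to the one-sample binomial test:
n = ((z_a * math.sqrt(p0 * (1 - p0)) + z_b * math.sqrt(p1 * (1 - p1))) ** 2
     / (p1 - p0) ** 2)
n_required = math.ceil(n)            # ~196, rounded up to 200 in the paper
n_recruited = math.ceil(200 * 1.25)  # 25% dropout allowance -> 250

print(n_required, n_recruited)  # 196 250
```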
The questionnaire data were collected back-to-back by two researchers. Then, the data were completely and accurately transferred to a computerised database for data processing, including double data entry, edit checks, data cleaning, coding and reconciliation. Continuous variables were summarised as means (SDs) and discrete data as frequencies and percentages.
Results

Participants
250 questionnaires were sent out, and 202 (80.8%) were returned. 49.5% of the respondents were male. The participants were aged from 19 to 59 years (mean, 33.5), with working years ranging from 1 to 45 (mean, 9.0 years): 96 had 1-5 years of experience, 49 had 6-10 years and 57 had more than 10 years. Regarding education level, 8 participants held an associate degree, 93 a bachelor's degree, 73 a master's degree and 18 a Ph.D. degree. Regarding technical title, there were 7 physician assistants, 98 resident doctors, 43 attending physicians, 41 associate physicians and 13 chief physicians.

The Importance of Deqi
As shown in Table 1, regarding the question of whether Deqi was crucial to clinical efficacy, 194 acupuncturists responded: 85.57% of them thought that in most cases Deqi was crucial to clinical efficacy, and 8.76% thought it was absolutely crucial, while 2.06% stated that in most cases Deqi is not vital to therapy, and 3.61% were not sure. No one chose the option that Deqi was not related to treatment at all. Regarding the question of whether a higher intensity of Deqi resulted in a more preferable efficacy, most acupuncturists (65.80%) did not think so, while 34.20% agreed. As to why a higher Deqi intensity did not yield better efficacy, some acupuncturists (45.67%) thought it was related to patients' acceptance of and tolerance for acupuncture, because a higher intensity could possibly harm healthy qi or even amount to noxious stimulation; 26.77% attributed it to individual differences; 3.94% thought the question was complicated; and 23.62% did not give any answer.

The Way to Judge Deqi
Answers to Question (6) showed that the main Deqi sensations felt by acupuncturists' fingers were tightening, sinking (95.14%), astringent (76.22%) and others (3.78%). In Question (7), acupuncturists ranked the sensations shown in Question (6) in the order of their reliability for telling the arrival of Qi. Figure 1 demonstrates that acupuncturists ranked sinking first, tightening second and astringent third in the reliability of telling Deqi based on their personal sensations.

Answers to Question (8) demonstrated that the main Deqi sensations reported by patients were distention (96.43%), soreness (92.86%), numbness (86.73%), heaviness (70.92%) and pain (53.06%). In order to determine the reliability of telling Deqi by the patient's subjective sensation, acupuncturists were asked in Question (9) to rank the sensations from Question (8) in the order of their reliability for telling the arrival of Qi. The result in Figure 2 shows that acupuncturists ranked soreness first, numbness second and distention third, followed by heaviness and pain, in the reliability of telling Deqi based on patients' subjective sensations.

The Way to Control Deqi
As shown in Table 3, Question (10), regarding the influential factors of Deqi during clinical practice, was a multiple-choice question. The main influential factors of Deqi were manipulation (98.43%), patient's body constitution (95.29%), acupoint (87.43%), the state of illness (80.10%) and the psychological condition of the patient (68.06%). The importance and reliability of these influential factors was ranked in the order manipulation of acupuncturist > specificity of acupoints > constitution of patient > status of illness > psychological condition of patient, as shown in Figure 3.

Regarding Question (12), the most frequently used manipulation to promote Deqi sensations was the combination of rotating, lifting and thrusting (72.54%), and supplementary manipulations such as scraping, shaking and pressing were also used by some acupuncturists (17.62%), while a single kind of manipulation was seldom applied. Question (13) showed that most acupuncturists (74.21%) believed that the depth of insertion for eliciting the Deqi sensation should be based on the patient's condition, and 25.79% thought that deep insertion made it easier, while few chose shallow insertion. In response to the question of whether the strong sensation produced by rotating the needle in a single direction was true Deqi, 46.87% of acupuncturists agreed with this point. According to Question (15), the majority of acupuncturists (98.45%) have experienced inducing Deqi right at the moment of needle insertion, even without needle manipulation, but the probability of this was not high: 21.05% of acupuncturists put it below 20%, and 38.42% at 20%-40% (Question (16)).
Discussion

Deqi Is Crucial to Acupuncture Efficacy
In our study, 94.33% of acupuncturists stated that Deqi was crucial to clinical efficacy. Of these, 85.57% thought it was crucial in most cases, while 8.76% thought it was always crucial. No one denied its importance completely. A recent randomized controlled trial (RCT) in Bell's palsy demonstrated that strong Deqi could result in better outcomes [2], while another study reported no pain relief in osteoarthritis patients [13]. Regarding the intensity, the majority of acupuncturists did not believe that a stronger intensity would increase the benefits, while 34.20% agreed. This result is similar to the results reported by Han [14], who showed that a low frequency (2 Hz) had better effects on easing pain than a high frequency (100 Hz). There is no doubt that Deqi is crucial to clinical efficacy in our study; however, future clinical trials are required to confirm this.
Deqi and Acupuncture Manipulations
Our investigation showed that the most significant influencing factor of Deqi was the acupuncturist's manipulation. Acupuncturists' manipulation has also been reported to be the most important influencing factor in clinical studies. It is accepted that manipulation can induce the release of Deqi and promote the degree of Deqi or alleviate an overly strong Deqi sensation. It is also known that different types of manipulation can result in different Deqi sensations. For promoting or controlling Deqi, acupuncturists preferred to use the combined manipulations of rotating, lifting and thrusting, supplemented by manipulations such as scraping and shaking.
On the issue of whether the strong feeling caused by rotating in one single direction constitutes Deqi, acupuncturists did not come to an agreement. Nearly all acupuncturists have at times experienced Deqi occurring just as the needle was inserted, without any other manipulation, but the probability of this is very low. The appropriate depth of insertion has aroused considerable debate, and more researchers favor the idea that deep insertion makes it easier to produce Deqi sensations. Functional magnetic resonance imaging (fMRI) has shown that deep electroacupuncture at GB34 and GB35 could generate stronger Deqi sensations and more effectively modulate the pain-related neuromatrix than shallow electroacupuncture [15]. Other studies have also shown that deep acupuncture could result in higher Deqi sensation scores [16] and increase skin and muscle blood flow [17] in healthy subjects. However, some studies report the opposite view. One fMRI study found no significant differences in the blood oxygen level dependent (BOLD) responses to deep and shallow stimulation [18]. An RCT also showed that the same effects were achieved by both deep and superficial acupuncture in idiopathic anterior knee pain patients [19]. Our questionnaire results demonstrated that no one thought shallow insertion made it easier to produce Deqi sensations, and 25.79% of the acupuncturists stated that deep insertion made it easier, while the majority (74.21%) explained that it depended on the patient's condition and was not relevant to the depth of insertion. Shallow stimulation usually means that the needle tip reaches the subcutaneous tissue, while deep stimulation may reach the muscular or nervous tissues. The Deqi sensation may not be completely related to the deep tissues. One study provided evidence, using ultrasound measurement, that Deqi was relevant neither to deep median nerve contact nor to median nerve penetration during needling at P6 [20]. Another study also explained that it depended on the patient's condition, with no relevance attributable to the depth of insertion, because in healthy subjects skin and muscle blood flow increased with no significant difference between deep and shallow stimulation, whereas in fibromyalgia patients there were significant differences [5].
The Deqi Sensation of Acupuncturists and Patients.
It seems that the feelings of the patient and the acupuncturist are often used to judge whether Deqi has been generated, but few studies have examined the details of this judgment. The results of this study showed that the majority of acupuncturists recognise whether Deqi has been achieved by their fingers' sensations during manipulation. These sensations were mostly tightening, sinking and astringent, similar to those described in textbooks [1]. Interestingly, the Deqi sensations experienced by the fingers were ranked as sinking, tightening and astringent according to reliability. In our survey, the patients' sensations were mainly described as soreness (Suan), numbness (Ma), distention (Zhang), heaviness (Zhong) and pain (Tong). The results showed the order of frequency as distention, soreness, numbness, heaviness and pain, and the order of reliability scores as soreness > numbness > distention > heaviness > pain. For the top three sensations, although different words and expressions are used to describe Zhang, Suan and Ma, some previous researchers have demonstrated similar results [21][22][23][24]. Thus, the Deqi sensations of both acupuncturist and patient, their occurrence probability and their order of reliability were revealed. These results may provide guidance for further quantification studies of Deqi.
Influencing Factors of Deqi.
Except for the acupuncturist's manipulation, the common influencing factors of Deqi included the specificity of the acupoint, the constitution of the patient and the patient's psychological factors, as shown in our study. For the specificity of the acupoint, studies have shown that it exists and is closely related to clinical effects [25], but further clinical study is needed to determine its direct relationship with Deqi. Even so, according to common experience, the feeling of pain is evident when needling the terminal points on the four limbs, while soreness and distention are noted at points in thick muscle. Meanwhile, the conditions of the patient, such as constitution, illness state and psychology, have been considered important factors for Deqi in ancient and modern Chinese literature [26][27][28]. As noted in the "Ling Shu Jing," Deqi emerges faster in patients with a yang-excess constitution than in those with yin excess. Compared with healthy volunteers, patients suffering chronic pain tended to acquire a much stronger Deqi sensation [29]. Some researchers believe that Deqi is a matter of brain awareness and consciousness, because subjects reported the same sensations under sham laser acupuncture and true laser acupuncture [30]. However, in classical literature such as the "Huangdi Neijing" and the "Zhenjiu Dacheng," psychological factors are important to Deqi, not only in the process of acupuncture but also in influencing the clinical outcomes. Clinical trials revealed that Bell's palsy patients with the personality factors of excitability, sociability, braveness and intellectuality gained Deqi more easily [31], and that anxiety and dominance were correlated with the treatment effects in primary dysmenorrhea [32]. In addition, anxiety also affected heart rate variability in healthy subjects [33].
Limitations.
The main limitation of this survey is that its contents were designed according to Chinese acupuncturists' customs, including manipulations, acupuncturists' needling sensations, patients' sensations and the factors of Deqi, and some items may be difficult to understand or may not conform with international conventions. Most of the acupuncturists came from grade-three provincial general hospitals, and opinions from lower-grade hospitals were insufficient. The results reflect acupuncturists' general perspectives; a supplementary survey of patients' views is necessary.
Conclusions
Our survey demonstrates that Deqi is important to clinical effects according to acupuncturists' views. Integrated manipulations are the most common way to promote the Deqi sensation. The reliability of the primary Deqi sensations in acupuncturists' fingers ranks as sinking > tightening > astringent. The reliability of the primary patients' Deqi sensations, as reported by patients, ranks as soreness > numbness > distention > heaviness > pain. The reliability of the influential factors of Deqi ranks as manipulation > specificity of acupoint > TCM constitution > disease status > patient's psychology > acupuncturist's psychological hint > clinical environment.
In short, this paper shows the perspectives of Chinese acupuncturists on Deqi. The results may provide some evidence for the qualitative and quantitative research of Deqi. To formulate and evaluate a Deqi sensation scale, it may be better to include the sensations of both the acupuncturist and the patient on the basis of their credible ranks. In clinical research and practice, the influential factors of Deqi should be considered.
What factors are associated with utilisation of health services for the poor elderly? Evidence from a nationally representative longitudinal survey in China
Objectives To investigate the status and factors of healthcare service utilisation among the poor elderly in China. Methods We selected the poor elderly from the China Health and Retirement Longitudinal Study (CHARLS) from 2011 to 2015. The main outcome measures include utilisation indicators for the probability and costs of outpatient/inpatient services. Based on a modified Andersen behaviour model, a two-part model is designed to analyse the factors of health service utilisation among the poor elderly. Results The visit rate of outpatient services increased from 15.05% in 2011 to 21.26% in 2015, and the hospitalisation rate increased from 7.26% to 14.32%. The median costs of outpatient and inpatient services in 2015 for the poor elderly were 350 RMB and 10 000 RMB, respectively, and the out-of-pocket ratios were 85.2% and 53.3%, respectively. 78.44% of the people who actually needed healthcare did not use health services, and the main reason was financial difficulties (42.32%). The poor elderly who are better educated, have children, live in central regions, have social security, have a poor health status and do not smoke or drink are more likely to use health services. The need factor plays a crucial role in determining health service utilisation. Conclusions The poor elderly tend to have a worse health status and a heavier medical burden but lower utilisation of health services. Predisposing, enabling, need and health behaviour variables should be considered to improve the health service utilisation and the health of the poor elderly.
INTRODUCTION
In China, the pace of population ageing is daunting. By the end of 2017, the number of elderly people aged 60 years and above reached 241 million, accounting for 17.3% of the total population. 1 Compared with developed countries, population ageing in China has distinctive features, such as the great number of old people and the rapid increase of this number, which are out of step with economic development and the national social security system. 2 This imbalance has led to the problem of 'getting old before getting rich'. 3 Old people, who generally have a worse health status and higher levels of multimorbidity and chronic disease, have an increased need for healthcare and actually consume a disproportionate and inappropriate share of services. 4 5 According to statistics, the 2-week morbidity rate of the elderly increased from 25.0% in 1993 to 56.9% in 2013. 6 In addition, it has been documented that medical expenses are much higher for the elderly than for other age groups. In China, on the one hand, some scholars have analysed the health service utilisation of the elderly in a poverty-stricken area; the results show that the 2-week prevalence rate is 62.7% and the 2-week visit rate is 37.3%, significantly different from the national average at that time. 7

STRENGTHS AND LIMITATIONS OF THIS STUDY
⇒ A modified Andersen behavioural model of health services was applied as a theoretical framework to comprehensively and rationally analyse the factors influencing health service utilisation among poor older adults.
⇒ Spatial price deflators were constructed to adjust the poverty line standards in urban and rural areas, so that the poverty line standards can be applied to the urban and rural elderly in China.
⇒ This study used a two-part model for multifactor analysis of health service utilisation, through logit regression and generalised linear regression, which better overcomes the problems caused by the discontinuity, heterogeneity and non-normality of health service data.
⇒ The health service utilisation measures in this study were self-reported, which is more prone to potential errors than administrative records.

A further decline in health level leads to an increase in the burden of medical expenses, forming a vicious circle. Chinese and foreign scholars have confirmed, through research on different regions of the world, that economic difficulties are the primary reason why the poor do not seek medical care. 8 9 In addition, studies have shown that older adults' perceptions of their own health status can also affect care-seeking behaviour: in one analysis, 21.6% of the elderly self-assessed their health status as poor, but measured results showed that the proportion of elderly people with functional disabilities was as high as 71.7%. 10 On the other hand, the Chinese Public Administration Society suggested that the annual medical expenses of the elderly, which accounted for 30% of total healthcare costs, were 2.56 times the medical expenses of the total population. 11 However, due to a low socioeconomic status, the high price of medical services and an incomplete health insurance system, it is difficult for the elderly to use health services efficiently. This problem is even worse among the elderly who live below the poverty line. A good deal of evidence indicates that the interaction between poverty and worse health is a causality that runs in both directions. Previous studies show that old people with a low economic status are much more likely to have worse health outcomes than better-off people. [12][13][14] Worse health is often associated with substantial healthcare costs that aggravate economic hardship. 15 Meanwhile, poverty can also cause worse health: the poor elderly suffer from a multiplicity of constraints that result in worse health outcomes. 16 The poor elderly are thus caught in a vicious circle: poverty breeds worse health, and worse health maintains poverty. 12

Currently, poverty reduction and healthcare for all have been identified as primary drivers towards achieving the sustainable development goals both nationally and worldwide. Since the Reform and Opening Up, China has made remarkable achievements in poverty reduction, making the country a major contributor to the world's poverty reduction endeavours. However, poverty among the Chinese elderly remains a concerning issue in our society. 17 Typically, the inequality between the health demand and the healthcare utilisation of the poor elderly is still unclear. Many studies and a variety of models have investigated old people's healthcare utilisation from various aspects. [18][19][20][21] However, studies on the health status and health service utilisation of the poor elderly population are relatively scarce, and few national surveys address the health-related issues of this population. This study, based on data from the China Health and Retirement Longitudinal Study (CHARLS) from 2011 to 2015, employs the Andersen health behaviour model as a theoretical framework to analyse the status and influencing factors of the health demands and healthcare utilisation among the poor elderly. Our results will provide grounds for decision making to further promote the health equality of the elderly and improve their utilisation of health resources.
METHODS

Data
The data used here come from the national baseline survey of the CHARLS collected from 2011 to 2015. 22 23 The CHARLS is a biennial survey that was initiated in 2011 and conducted by the National School of Development at Peking University. It is a nationally representative longitudinal survey that collected information on Chinese residents aged 45 years and above and their spouses, covering assessments of social, economic and health circumstances. To ensure cross-study comparability of the results, the CHARLS was harmonised with leading international studies such as the Health and Retirement Study (HRS) and intended to provide a high-quality public microdatabase with a wide range of information that serves the needs of scientific and policy research on ageing-related issues. 24 Based on multistage probability sampling, 10 257 households and 17 708 individuals were studied through face-to-face computer-aided personal interviews. Ethical approval for this study was not required because it was based exclusively on publicly available data. All subjects were informed of a grant of confidentiality that legally protected their responses.
The CHARLS collected information on the income, consumption and private and public transfer payments of the elderly. Crucially, in 2011, 2013 and 2015, the economic information modules in the questionnaire were similar. Since the elderly are highly dependent on the family, their income and consumption are usually measured at the household level, so we use household income to measure their economic status. The household income of the elderly includes the family's wage income (the sum of wages, bonuses and cash benefits), agricultural income (the family's income from agricultural and forestry products minus input costs), income from animal husbandry and aquatic products (the family's total income from animal husbandry and aquatic products minus own consumption and input costs), self-employed income, public transfer payments and other income. Consistent with estimates obtained in other studies, we find a large number of missing values and extreme values in the income variables; for example, some outpatient subsamples had a household income below zero. In addition, the underestimation of income due to deliberate under-reporting might be a problem. Following a prior study, we adopted the household total expenditure per capita (EPC) as a proxy for financial status, which is the sum of the household food EPC, household monthly EPC and household yearly EPC. 25 Furthermore, to ensure the international comparability of the research, this paper adopted the international poverty line published by the World Bank (US$1.90 per person per day). According to this standard ($1.90 a day) and the Purchasing Power Parity (PPP) of each year, we calculated the poverty line for urban and rural areas in RMB in 2011, 2013 and 2015. 26 Then, we used spatial price deflators to adjust the poverty line standards in urban and rural areas, so that they can be applied to the urban and rural elderly across China. After excluding respondents with missing key variables or those not meeting the criteria, 3760 respondents over 60 years old who lived on or below the poverty line were ultimately selected from the 2011-2015 waves.
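A minimal sketch of the poverty-line construction just described, using placeholder PPP conversion factors and spatial deflators rather than the paper's actual values: the daily $1.90 line is converted to RMB with the year's PPP factor, annualised, and then adjusted with an urban or rural spatial price deflator.

```python
INTL_POVERTY_LINE_USD_PER_DAY = 1.90

def poverty_line_rmb_per_year(ppp_rmb_per_intl_dollar: float,
                              spatial_deflator: float) -> float:
    """Annual poverty line in RMB for one area (urban or rural).

    ppp_rmb_per_intl_dollar: PPP conversion factor for the survey year.
    spatial_deflator: urban/rural price-level adjustment (1.0 = national average).
    """
    daily_rmb = INTL_POVERTY_LINE_USD_PER_DAY * ppp_rmb_per_intl_dollar
    return daily_rmb * 365 * spatial_deflator

# Placeholder inputs for illustration only (not the paper's figures):
urban_line = poverty_line_rmb_per_year(ppp_rmb_per_intl_dollar=3.7, spatial_deflator=1.10)
rural_line = poverty_line_rmb_per_year(ppp_rmb_per_intl_dollar=3.7, spatial_deflator=0.90)
print(round(urban_line), round(rural_line))  # e.g. 2822 and 2309 RMB per year

# A respondent is classified as poor when household expenditure per capita
# falls on or below the line for their area:
def is_poor(epc_rmb_per_year: float, line: float) -> bool:
    return epc_rmb_per_year <= line
```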
Dependent variables
To account for observable differences in health needs, this study considers the 1-month morbidity, chronic disease prevalence and self-reported health status of the poor elderly population. 4 5 The dependent variables in our analysis reflect the intensity and expenditure of different types of healthcare utilisation. We consider the following measures of health service utilisation: (A) the probability of outpatient visits during the 1 month preceding the survey date; (B) the individual expenditure for outpatient visits during the past month; (C) the probability of being hospitalised during the year preceding the survey date; and (D) the individual expenditure for inpatient visits in the past year.
Independent variables
In this study, the independent variables are chosen based on the Andersen Behaviour Model (Andersen, 1968), which was introduced in the late 1960s to help understand the use of health services, define equitable access to healthcare and assist in developing policies to equalise access to healthcare. 27 The original model considered that health service utilisation was related to three predictors, described as people's predisposition to use services, the factors that enable or impede their use of services and their need for healthcare. With further development, the Andersen model evolved into its third stage, which comprises three components in a linear relationship: (1) the main determinants; (2) health behaviours; and (3) health outcomes. 28 The modified Andersen behavioural model has shown that health behaviours can serve as an indicator of positive health beliefs and health literacy and have an impact on healthcare underutilisation. Healthy behaviours, including diet, exercise and self-care, are associated with health service utilisation, which in turn affects health outcomes. 29 To date, a growing number of studies have employed this model and its variations to assess the utilisation and outcomes of healthcare services for both general and vulnerable populations. 21 In this study, we use a modified Andersen behavioural model of health services as a theoretical framework to analyse the factors associated with health service utilisation among the poor elderly. Our model includes four types of variables, namely, predisposing, enabling, need and health behaviour variables. 30
Predisposing variables
The predisposing component centres on the idea that some individuals have a propensity to use services more than other individuals, and this tendency can be predicted from individual characteristics prior to an illness episode. In the present paper, the predisposing factors include gender, age, education and marital status. Age has been divided into the three groups of 60~69, 70~79 and 80+ years (we labelled these three groups of elderly people as 'young-old', 'the mid-aged old' and 'the eldest old', respectively). Education has the following four categories: (1) illiterate; (2) primary school; (3) middle school; and (4) high school and above. Marital status has been divided into the two categories of: (1) married (including cohabitating and the spouse being away for job purposes) and (2) unmarried (including separated, divorced or widowed).
Enabling variables
The main idea for this type of variable is that people may well be predisposed to using health services, but they also need some means of obtaining them. In the present paper, the enabling variables include whether the respondents have children, an urban or rural residence, health insurance and an old-age pension, as well as their region and their traffic time for health services. Their region is determined numerically (1=eastern, 2=central, 3=western). Health insurance is measured by uninsured=no insurance, UEMI=urban employee medical insurance, URMI=urban resident medical insurance, NCMS=new rural cooperative medical scheme, private MI=private commercial medical insurance and other=other health insurance. An old-age pension is based on whether people receive benefits from any pension programme (no or yes).
Need variables
This variable captures the need for healthcare and represents the most immediate cause of health service use. Generally, need includes individuals' perceived and evaluated functional capacity, symptoms and general state of health. In this study, the need variables include self-reported health status, physical disability, chronic diseases and limitations in activities of daily living (ADLs). Self-reported health is obtained from the response to the question, 'Would you say your health is excellent, very good, good, fair and poor?' or 'Would you say your health is very good, good, fair, poor and very poor?'. We combined the answers to these two questions into the three categories of poor, fair and good. Physical disabilities are based on the respondents' answer to the question, 'Do you have one of the following disabilities, physical disabilities?'. Chronic diseases are assessed as the cumulative number of diagnosed conditions (0, 1~2 and ≥3). ADL limitations indicate any self-reported difficulty in any of the following ADLs: bathing/showering; eating; dressing; getting into or out of bed; using the toilet; or controlling urination and defecation.
Health behaviour variables
Health behaviour is measured by the following three variables: (1) smoke (no=never a smoker, yes=smoker); (2) drink (no=never, yes=drinking alcohol more than once or less than once in a month); and (3) physical examination (no=not having a regular physical examination or yes=having a regular physical examination).
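As a concrete illustration of how these four groups of variables could be coded for the regressions, the snippet below builds a small design matrix with pandas. The column names, category labels and values are assumptions based on the definitions above, not the actual CHARLS variable names.

```python
import pandas as pd

# Illustrative coding of the Andersen-model variables described above.
df = pd.DataFrame({
    "age_group": pd.Categorical(["60-69", "70-79", "80+"],
                                categories=["60-69", "70-79", "80+"]),
    "education": pd.Categorical(["illiterate", "primary", "middle"],
                                categories=["illiterate", "primary",
                                            "middle", "high_and_above"]),
    "married":   [1, 0, 1],         # 1 = married/cohabiting, 0 = unmarried
    "insurance": ["NCMS", "UEMI", "uninsured"],
    "region":    [3, 1, 2],         # 1 = eastern, 2 = central, 3 = western
    "srh":       ["poor", "fair", "good"],  # self-reported health (need)
    "smoke":     [0, 1, 0],         # health behaviour
})

# Expand the categorical variables into dummies for the regressions,
# dropping one reference level per variable:
design = pd.get_dummies(df, columns=["age_group", "education",
                                     "insurance", "region", "srh"],
                        drop_first=True)
print(design.columns.tolist())
```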
Statistical analysis
A descriptive analysis is used for the demographic characteristics of the samples. The variables of morbidity and the rates of outpatient and inpatient visits were presented as rates, and the differences between the groups were examined by using the χ 2 test. Subsequently, a two-part model is employed to further investigate the factors that affect the utilisation of health services by the poor elderly. A two-sided p value of <0.05 was considered to indicate statistical significance. All statistical analyses are performed with STATA software, V.15.0.
Since people cannot decide whether or not to use health services and the level of medical expenditure at the same time, they must initially decide whether or not to utilise health services and then decide on the cost of healthcare once they have accepted the services. Hence, a two-part negative binomial distribution model (two-part Negbin) was applied in this study, since the decision to use health services and the medical expenditure were not jointly determined. [31][32][33] Moreover, previous studies have suggested that many individuals did not use any healthcare services during the study period; therefore, medical cost data are usually characterised by a substantial proportion of zero values and a right-skewed distribution, and they may exhibit heteroscedasticity. 34 A two-part model can be used to address these data issues. The selection criterion that a high value of the variance inflation factor (VIF) is a sufficient condition for the presence of collinearity suggests that a VIF in excess of 30 is a cause for concern. Therefore, we use a two-part model to analyse health service utilisation in the present paper. 35

Specifically, the first part of the model is a logistic regression that predicts the probability of any use of health services. In equation (1), $Z_i^*$ denotes the latent propensity of individual $i$ to use health services, and the probability of any use given the covariates $X_i$ is

$$P(Y_i > 0 \mid X_i) = \frac{\exp(X_i'\beta)}{1 + \exp(X_i'\beta)} \quad (1)$$

Health expenditure is then analysed by a generalised linear model with a gamma distribution and a log link, which estimates the medical costs of only the observations with positive spending: [36][37][38]

$$E(Y_i \mid Y_i > 0, X_i) = \exp(X_i'\gamma) \quad (2)$$

The two parts combine to give the unconditional expectation

$$E(Y_i \mid X_i) = P(Y_i > 0 \mid X_i) \times E(Y_i \mid Y_i > 0, X_i) \quad (3)$$

that is, the probability of service utilisation multiplied by the expected cost conditional on being a user; the sample average of $E(Y_i)$ gives the expected spending of the elderly (e.g. expected hospitalisation spending for the inpatient models). Since equation (3) is specified with a gamma distribution, the link function directly characterises how the expectation of $Y_i$ is related to the regressors, which avoids the complications of a log-linked ordinary least squares model. 30
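As a minimal illustration of this two-part estimation, the sketch below fits the logit and gamma-GLM parts on synthetic data with statsmodels in Python (the paper itself used STATA V.15.0). All variable names and the simulated data are assumptions for illustration, not the CHARLS variables.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic stand-ins for the covariates (age group, insurance, need, ...)
n = 1000
X = sm.add_constant(rng.normal(size=(n, 3)))
p_use = 1 / (1 + np.exp(-(X @ [-0.5, 0.8, 0.3, -0.4])))   # true part-1 model
used = rng.binomial(1, p_use)
mu = np.exp(X @ [6.0, 0.5, 0.2, 0.1])                     # true part-2 mean cost
cost = np.where(used == 1, rng.gamma(shape=2.0, scale=mu / 2.0), 0.0)

# Part 1: logit for any use of health services
logit_res = sm.Logit(used, X).fit(disp=False)

# Part 2: gamma GLM with log link, fitted on the positive costs only
pos = cost > 0
glm_res = sm.GLM(cost[pos], X[pos],
                 family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Combined (equation 3): E(Y|X) = P(Y>0|X) * E(Y|Y>0, X)
expected_cost = logit_res.predict(X) * glm_res.predict(X)
print(f"mean expected spending: {expected_cost.mean():.1f}")
```

Fitting the cost equation only on users, then multiplying by the predicted probability of use, is what lets the model accommodate the large mass of zero expenditures noted above.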
Patient and public involvement
The data used in this study were derived from the CHARLS database; no patients were involved in the design or development of the research question and outcome measures, nor in developing the hypothesis or the plans for the design of this study. The results would not be disseminated to study participants or any other individuals or communities.

RESULTS

The health service utilisation of the poor elderly

The utilisation rate of health services

The outpatient and inpatient rates of impoverished old people rose by about 5 and 7 percentage points, respectively, from 2011 to 2015.
The rate of outpatient visits increased from 15.05% in 2011 to 20.06% in 2015, and the hospitalisation rate increased from 10.56% in 2011 to 17.74% in 2015. The growth in health service use was slower from 2013 to 2015 than from 2011 to 2013 (online supplemental figure S1).
Outpatient services
In 2015, 20.06% of the poor elderly had used outpatient services in the previous month. The median outpatient expense was 353 RMB, and the out-of-pocket ratio was 84.41%. Table 2 contains the utilisation of outpatient services for different levels of the independent variables. The poor elderly who were female, young-old or mid-aged old, unmarried, with a high school or above diploma, covered by UEMI, private medical insurance or other insurance, and living in urban areas tended to use more outpatient services. Besides, the outpatient rates of interviewees who were non-smokers and non-drinkers, with chronic diseases, poor self-reported health, physical disabilities, ADL limitations and without regular physical examinations were significantly higher than those of the other respondents.
With regard to medical expenditure, we find that the female, oldest-old and unmarried poor elderly who had lower levels of education, had no child, were not covered by health insurance, lived in the western area, faced more than 1 hour of traffic time and had no regular physical examinations carried heavier medical burdens. The table shows that the outpatient expenditure of the poor elderly is strongly related to the type of health insurance, residence and health status. On average, the median out-of-pocket expense of the group with private insurance was nearly 900 RMB, almost four times that of the uninsured group and 4.5 times that of the NCMI group. Furthermore, the outpatient expenses of urban residents were more than double those of rural residents, and the median outpatient expense of respondents with three or more diseases was about twice that of the elderly without chronic disease.
Among the 668 elderly people who did not seek outpatient treatment, 353 said that a doctor had actually suggested that they needed healthcare; the main reasons given for not seeking outpatient services were economic difficulties (42.34%), inconvenient transport (21.53%) and feeling only slightly ill (18.41%) (online supplemental figure S2).
Inpatient services
In 2015, 15.92% of the participants had been hospitalised in the last year. The results of the univariate analysis for inpatient rates are generally consistent with those for outpatient visits. Specifically, the poor elderly aged 70~79 years, who were unmarried, non-smokers and non-drinkers, with more than 15 min of traffic time, chronic diseases, poor self-reported health, physical disabilities, ADL limitations and regular physical examinations used more inpatient services than others at the 0.05 level of significance. The hospitalisation rates of participants who had a higher level of education and lived in urban areas were significantly higher than those of people with lower educational levels and rural residents at the 0.01 level.
In terms of hospitalisation expenses, the median and out-of-pocket ratio of hospitalisation expenses for the poor elderly were 10 000 RMB and 53.31%, respectively. The poor elderly who were female, unmarried, had a lower educational level, had zero or more than three children, were not covered by medical insurance or were insured by NCMI or private MI, lived in rural areas, had three or more chronic diseases, reported poor health status, smoked, and lacked regular physical examinations carried heavier hospitalisation burdens.
The main reasons for not seeking inpatient care were lack of money (58.68%) and unwillingness to go to the hospital (22.19%) (table 3).
Predictors of health service utilisation

Outpatient service
The results of the TPM analysis in table 4 show that predisposing, enabling, need and health behaviour variables were all significantly associated with outpatient visits. Compared with the poor elderly aged 60~64 years, old people aged 70~79 and 80+ years were less likely to seek medical treatment when they were ill (OR=0.81, p<0.1; OR=0.63, p<0.05), although there were no differences in outpatient expense across age groups. Compared with the illiterate, the poor elderly with a higher educational level had a higher probability of receiving outpatient services (OR=1.57, p<0.05) and higher outpatient expenditure (p<0.05). Compared with the NCMI, only the UEMI significantly increased the probability and cost of outpatient visits (OR=1.67, p<0.05). Need variables were significant predictors of both the probability and the cost of using outpatient care. The probability of using outpatient services among elderly participants with poor health status was 2.00 and 4.34 times that of participants with fair or good health, respectively, and poor health status also drastically increased outpatient expenditure (p<0.01). Similarly, participants with chronic diseases were 2.5 times more likely to use outpatient services than those without chronic diseases. In terms of health behaviour, individuals who consume alcohol or use tobacco were less likely to use outpatient services than their counterparts (p<0.05).
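The odds ratios quoted here come from exponentiating the coefficients of the logit part of the model. A self-contained toy illustration (synthetic data, not the CHARLS sample):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(500, 2)))
p = 1 / (1 + np.exp(-(X @ np.array([0.2, 0.5, -0.4]))))
y = rng.binomial(1, p)

logit = sm.Logit(y, X).fit(disp=False)
odds_ratios = np.exp(logit.params)   # OR < 1 => lower odds of using care
or_95ci = np.exp(logit.conf_int())   # CIs carried to the odds-ratio scale
print(odds_ratios)
```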
The median outpatient expenditure was 198 RMB. Conditional on having any outpatient visits, being female reduced outpatient expenditure by 100 RMB, older age reduced it by 86 RMB, high school and above education by 73 RMB, alcohol consumption by 145 RMB, chronic disease by 149 RMB, and fair or good health status by 238 RMB and 319 RMB, respectively. The following predictors increased outpatient costs: having children (116 RMB), UEMI (197 RMB), private MI (311 RMB), an old-age pension (95 RMB) and more than 1 hour of traffic time (95 RMB) (table 4).

Inpatient service

Table 5 reports the two-part results for hospitalisation utilisation, which were somewhat different from those for outpatient visits. The need variables were the common and significant factors exerting the greatest impact on hospitalisation utilisation. The probability of using inpatient services for people with poor self-reported health was three to six times that of those with fair or good health, respectively. The hospitalisation rate of the elderly with chronic diseases was 5.31 times that of those without chronic diseases (p<0.01). Moreover, compared with the poor elderly with ADL impairments, people without ADL impairments were more likely to use inpatient services. In addition, females had lower odds of hospitalisation (OR=0.69, p<0.05), although their hospitalisation costs were higher (see below). Compared with the poor elderly aged 60~64 years, individuals aged 70 years and above were more likely to use inpatient services (p<0.05), while they spent less money on hospitalisation. People without any health insurance were less likely to use inpatient services than those with UEMI (OR=0.38, p<0.05), and people with URMI or NCMI were also less likely to use inpatient services than those covered by UEMI (p<0.1). Nevertheless, no significant differences existed in hospitalisation expense across health insurance status and health insurance schemes. With regard to the health behaviour variables, smoking and drinking decreased the probability of hospitalisation, and regular physical examination increased the hospitalisation rate (p<0.01).
The median inpatient expenditure was 1420 RMB. Conditional on at least one hospitalisation, females spent 497 RMB more than males, and married respondents spent 457 RMB more than their single counterparts. The low-income elderly covered by UEMI had inpatient costs 1069 RMB and 844 RMB higher than respondents covered by URMI or NCMI, and 620 RMB higher than participants covered by other insurance. Urban residents had inpatient costs 1137 RMB higher than rural residents. Furthermore, more than 1 hour of traffic time increased the expense by 568 RMB, chronic disease by 930 RMB, regular physical examination by 624 RMB, and smoking and drinking by 624 RMB and 500 RMB, respectively. The following predictors reduced hospitalisation expenditure: fair or good health status (1119 RMB and 1475 RMB, respectively) and ADL impairments (208 RMB) (table 5).
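The RMB differences reported in this subsection can be read as incremental effects from the fitted two-part model. One common way to obtain them is "recycled predictions", sketched below under the assumption that `predict_expected_cost` is a hypothetical wrapper around the combined model from the earlier sketch.

```python
import numpy as np
import pandas as pd

def incremental_cost(df: pd.DataFrame, covariate: str, predict_expected_cost):
    """Average change in expected spending when `covariate` flips 0 -> 1.

    predict_expected_cost(frame) should return E(Y | X) = P(use) * E(Y | use),
    e.g. from the two-part model fitted above (a hypothetical wrapper).
    """
    on, off = df.copy(), df.copy()
    on[covariate], off[covariate] = 1, 0
    return float(np.mean(predict_expected_cost(on) - predict_expected_cost(off)))
```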
DISCUSSION
This study indicated that the poor elderly have more health needs but use healthcare services at a relatively low level, and that economic difficulty is the largest obstacle to using health services. The results showed that, in the past month, the morbidity rate, the prevalence of chronic diseases and the proportion of elderly people with three or more diseases were relatively high among the low-income elderly. In sharp contrast, indicators related to
health service utilisation, such as the outpatient rate, hospitalisation rate and two-week non-visit rate in recent months, were relatively low. Although the utilisation rates of inpatient and outpatient services increased gradually over time, the health service utilisation of the impoverished old population is still inadequate. According to the Fifth National Health Service Survey, the morbidity and chronic disease prevalence of the Chinese elderly were 56.9% and 71.8%, and the outpatient and inpatient rates were 49.7% and 17.9%, respectively. 39 This comparison indicates that it is more difficult for the poor elderly Chinese to turn health needs into healthcare demands. Consistent with previous research, we find that financial constraints (42.34%) are still the most important reason limiting the effective demand for medical treatment. Moreover, feeling that their condition was not severe and transportation barriers also contribute to healthcare underutilisation. 40 In terms of medical burden, the median costs of outpatient and inpatient services for the poor elderly were 353 RMB and 10 000 RMB, and the out-of-pocket ratios were 84.41% and 53.31%, respectively, far beyond the results of the Fifth Health Service Survey. According to statistics, one-third of poverty among the elderly in China resulted from serious illness. For instance, a prior survey showed that the out-of-pocket outpatient expenses of low-income residents living in Beijing account for two-thirds of their monthly income, 41 which means that medical expenses have substantially aggravated the economic pressure on the poor population. It is noteworthy that the poor elderly who are female, with a lower education level, without children and living in rural areas tended to have worse health status, a heavier medical burden and more unmet health needs. This study finds that gender, age and education level significantly affected the health service utilisation of the poor elderly. Older women living in poverty actually have poorer health and are at greater risk of disease. 42 However, our findings suggest that they are less likely to receive healthcare and spend less on it. One possibility is that their relatively low family status hinders them from using health services so that more important family members can use them. 43 Our results also showed that the young-old were more likely to use outpatient services and less likely to be hospitalised. This may be because the young-old care more about their own health and face prohibitively high hospitalisation expenses, and thus use more outpatient services to protect themselves from catastrophic spending. The older old are more likely to be hospitalised but spend less money on health services. A possible explanation is that worse health and severe symptoms force them to be hospitalised. 44 Another is that the older old are close to the end of their life expectancy, so that the effect of 'time to death' rather than age determines their health expenditure. 45 People with higher education levels, relative to those with below-primary education, are found to have significantly more outpatient visits and lower outpatient expenditure, which is consistent with a previous study. 30 Our analysis reveals that health insurance and old-age pensions significantly increased the probability of health service utilisation among the poor elderly population.
The caveat, however, is that among the insured poor elderly, only coverage under UEMI is significantly related to increased medical service utilisation and medical costs. This reflects the fact that patients covered by UEMI face a more heavily discounted price for medical services, and the most generously insured individuals are more likely to access health services. 46 Another plausible explanation for the disparity across insurance programmes is that the three main medical insurance schemes in China are heterogeneous in funding sources and benefit packages. For example, in 2013, the per capita fund of the rural NCMI scheme was only US$61, approximately 15% of the per capita fund of the UEMI scheme. 47 Although the NCMS has a household medical savings account (MSA) that covers outpatient care, the MSA budget is very limited, and therefore the poor elderly insured by NCMI are less likely to use outpatient services. 35 In addition, we find that 13.62% of participants had no medical insurance, which reflects that health insurance coverage is far from universal among the poor elderly population. Accordingly, it is urgent to develop and implement more reasonable reforms and targeted policies to optimise the health insurance system.
The impact of regional distribution can be explained by medical service accessibility. 48 Compared with living in the eastern area of China, living in the central and western regions had a significantly negative effect on health service utilisation but a positive impact on medical expenditure. In a similar vein, compared with the poor elderly with short traffic times, people with long traffic times had higher medical costs, and longer traffic times were associated with greater increases in medical costs. This is possibly attributable to poor accessibility to health services and financial barriers that seriously restrict them from receiving treatment; a failure to seek timely treatment results in the deterioration of illness, which leads to increased health demands and medical expenditure. 49 Taken as a whole, these findings suggest that improving access to medical services in underdeveloped areas and lowering healthcare costs would positively influence health service utilisation by the local low-income population. In this study, there were no significant differences in healthcare utilisation between rural and urban residents, possibly because most participants came from rural areas and because enrolment in NCMS significantly increases an individual's probability of seeking treatment. 17 Prior studies concluded that the need variables were strong determinants of healthcare utilisation, of greater predictive value than the predisposing and enabling variables in Andersen's model. 50 Our study is consistent with the previous literature in finding that need factors are the primary determinants of healthcare utilisation. 21 52 53 We find that poor self-reported health and more chronic diseases multiplied healthcare utilisation and medical costs. Some studies indicate that better and more effective treatment of chronic conditions will produce a 'compression of morbidity' that makes individuals experience fewer illnesses; other scholars assume that the basic pattern of chronic illness among the elderly will continue or even increase as the population ages. 54 This paper agrees with the former view that advances in health service utilisation will improve health and reduce the unmet needs of the poor elderly. Additionally, our study finds that the poor elderly with ADL impairments are more likely to use outpatient services but less likely to use inpatient care, perhaps because they need more medicine for rehabilitation, which they can obtain through outpatient visits. 55 Most studies have shown that unhealthy behaviours such as smoking and drinking are health risk factors. 56 57 This study finds that the utilisation rate of health services among poor elderly people who smoke and drink is significantly lower. On the one hand, this may be because people who regularly use alcohol or cigarettes are 'overly optimistic' about their health and lack health awareness; even with some physical discomfort, they do not seek treatment in time. 58 59 On the other hand, the health hazards of smoking and drinking are cumulative, and many poor elderly people may have given up smoking and drinking because of poor health. It should be noted that hospitalisation expenditure for those who drink and smoke is significantly higher, which indirectly confirms the cumulative effect of these health hazards. In addition, this study finds that regular physical examinations promote the use of inpatient services by poor elderly people.
Consistent with most studies, regular health check-ups may be important for elderly people in ensuring early detection and treatment of disease. 60 An earlier study of health behaviour reported that people living in underdeveloped areas were less knowledgeable about the harmful effects of unhealthy behaviours, and that these differences in knowledge were related to access to health information. 59 Therefore, it is necessary to provide health information to the poor elderly efficiently through health education.
The current study is subject to several limitations. First, because we used cross-sectional data, we could not interpret the associations as causal; longitudinal research is required to identify causality. Second, CHARLS uses self-reported measures of health service utilisation, which are more prone to error than objective measures; future studies could use hospital registration information and clinical assessments to obtain more reliable results. Third, this study selected the poor elderly from the CHARLS waves from 2011 to 2015. Since the latest CHARLS data were released in 2020, future studies should use the newly released data to assess the status of healthcare service utilisation among the poor elderly. Finally, we did not explore whether the healthcare service utilisation of the Chinese poor elderly is reasonable; the rationality of their healthcare service utilisation should be explored in subsequent studies.
Even with these limitations, this study offers some new findings. Our investigation adds substantially to the understanding of the utilisation of health services, and the factors affecting it, among the Chinese poor elderly over time. Given that the poor elderly have more unmet needs for medical services and face multiple subjective, objective and institutional barriers to seeking appropriate treatment, our study calls for urgent measures to remove these disparities among the poor elderly Chinese. First and foremost, financial constraints play an important role in the unmet needs of the poor population; continually accelerating economic development and providing financial assistance to improve the financial capability of the poor would have a substantial impact on health service utilisation. Second, recognising the heterogeneity of the poor elderly, our findings recommend additional efforts towards groups that are likely to be further disadvantaged, such as women, the oldest-old and people living in underdeveloped areas. Third, the barriers to accessing healthcare arising from regulations across regions and within health systems should be eliminated; for instance, strategies include addressing both the urban-rural discrepancy and regional characteristics, improving medical conditions, including timely medical treatment, and providing more care resources such as paid home care. Furthermore, efficient health education should be conducted to popularise health knowledge and improve health literacy. Finally, the government should optimise the financing and compensation system of medical insurance, expand the scope of reimbursement, increase reimbursement rates and implement new health insurance and medical aid programmes aimed at the poor elderly, the group most vulnerable in health and economic terms, especially when they fall sick.
CONCLUSION
Overall, the poor elderly used health services at a relatively low level compared with the general population in China; due to a variety of constraints, their utilisation of health services remained low. Predisposing, enabling, need and health behaviour variables should all be considered when making policy and taking measures to eliminate health inequalities, improve health service utilisation and improve the health of the poor elderly. The government should do its best to support the poor elderly, including perfecting endowment insurance and health insurance and improving the quality of the health service system in poor areas.
Bacterial Communities in Riparian Sediments: A Large-Scale Longitudinal Distribution Pattern and Response to Dam Construction
Sediment microbes play major roles in riparian ecosystems; however, little is known about their longitudinal distribution pattern and their responses to dam construction, the most severe human disturbance in river basins. Here, we investigated the variability of sediment bacterial communities along a large-scale longitudinal gradient and between dam-controlled and dam-affected sites in the riparian zone of the Lancang River, China. The abundance, activity and diversity of sediment bacteria gradually increased in a downstream direction, but were significantly lower at the dam-affected sites than at the dam-controlled sites. The bacterial community compositions differed significantly between the upper-middle-reach and downstream sites across all control sites, and also between the dam-affected and dam-controlled sites. In the cascade dam area, the relative importance of spatial distance and environmental heterogeneity for bacterial distribution differed between the dam-controlled and dam-affected sites. Spatial distance was the primary cause of variation in the bacterial community at dam-controlled sites; by contrast, environmental heterogeneity had more control over the bacterial communities than spatial distance at dam-affected sites. Network analysis showed that the bacterial community at the dam-affected sites had lower connectivity and stability compared with that at the dam-controlled sites. These results point to distinct variations in the sediment bacterial community at dam-affected sites, which could enhance our understanding of the potential ecological effects caused by dam construction.
INTRODUCTION
Riparian zones, at the aquatic-terrestrial interface, play an important role in protecting the land from rivers and have abundant species diversity, unique ecological communities, and high primary productivity. Riparian zones are extremely sensitive to changes in environmental conditions, including human disturbance and natural disasters. It has been suggested that sediment microbes in riparian zones can indicate the health of river basin ecosystems, because of their important roles in energy conversion, pollutant degradation, nutrient biogeochemical cycling, and element transformation and migration (Liu et al., 2011; Wu et al., 2013). Despite their importance, we have little information about the diversity and composition of microbial communities in riparian sediments because of the limits of traditional technologies. However, because of the rapid development in next-generation sequencing technology in recent years, comprehensive biogeographic surveys of microbial communities are now possible over large geographic areas (Liu et al., 2011). To date, small-scale studies (<300 km of the river) or studies based on only a few sites have been carried out (Hu et al., 2014; Kim et al., 2016); yet there is very little information about the factors that influence biogeographic patterns of bacterial communities in riparian sediments over large spatial areas.
As proposed by Baas Becking (1934), who claimed in his classic hypothesis that "everything is everywhere, but the environment selects," contemporary environmental factors, especially sediment properties, were initially thought to be the main controls on microbial communities (Griffiths et al., 2011; Wu et al., 2013). It is now clear that sediment pH is correlated with the spatial distribution of microbial communities in many ecosystems, including grassland, forest (Nacke et al., 2011), tundra (Shen et al., 2013), and cropland (Fan et al., 2017). Sediment moisture (Guo et al., 2015), temperature (Zhou et al., 2016), nutrient availability (Hansel et al., 2008) and other sediment properties can also influence the composition of microbial communities. Many recent studies have shown that the distribution of sediment microbial communities is mainly affected by geographic factors such as longitude, latitude, and elevation because of microbial drift or limited dispersal over spatial distances (Schauer et al., 2010; Xiong et al., 2012; Chen et al., 2017). Martiny et al. (2006) reported that the relative importance of environmental heterogeneity and spatial distance for microbial biogeography is scale-dependent. Spatial distance is generally the main driver of variation in microbial diversity at the regional scale (>1000 km), and environmental disturbances drive variation at the local scale (Wu et al., 2013). However, we are still not sure whether spatial distance or contemporary environmental disturbances have more control over the structure of bacterial communities in a large-scale river habitat.
Dams, especially large dams or cascade dams, constructed in river systems for hydropower or water supply, are considered the most severe human disturbance to the integrity of watersheds and river ecosystems (Zhao et al., 2012; Yan et al., 2015). More than half of the world's large river systems are affected by damming (Zhao et al., 2012). Dams significantly modify water discharge, regulate flow circulation, and disturb nutrient transport by retaining suspended material in reservoirs, thereby creating environments subject to significant physical, chemical, and biological alterations (Yan et al., 2015). The effects of dam construction on riparian biota and their ecological functions have attracted considerable attention in recent years (Liu et al., 2015; Li et al., 2018). Numerous studies of riparian vegetation have demonstrated that dam construction can lead to decreased habitat heterogeneity and species richness in the impounded region (Read et al., 2015), thus greatly impacting species' spatial distribution (Gladyshev et al., 2015). However, there is very little information about how microbial assemblages in riparian sediments respond to river damming. A few studies have reported that the community structures of bacterioplankton suffered significant impacts from large impoundments in rivers and were correlated strongly with variations in the physicochemical properties of water bodies (Kummu and Varis, 2007; Yan et al., 2015). Other studies found clear differences in the physicochemical properties of riparian sediments before and after damming because of frequent flooding and seston sedimentation (Li et al., 2018). Consequently, we should also expect to observe changes in sediment bacterial communities. The Lancang-Mekong River, one of the largest rivers in Asia, flows across seven climatic zones and has very diverse riparian habitats along its course from source to sea (Zhao et al., 2012). There are plans to construct a cascade of 23 hydroelectric dams along the main channel of the Upper Lancang-Mekong River (i.e., the Lancang River). Of these 23 dams, 6 have already been constructed in the middle and lower reaches of the river in Yunnan Province, China (Liu et al., 2015). While the effects of dam construction on the downstream hydrological regime, water quality, and sediment trapping efficiency have already been studied in the Lancang River (Kummu and Varis, 2007; Zhao et al., 2012; Liu et al., 2015), there is very little information about the biogeographic distribution of bacterial communities in riparian sediments along this large river and how they have changed in response to construction of the dams. The objectives of the present study were therefore to (i) investigate the longitudinal distribution of bacterial communities in riparian sediments along the Lancang River at the large scale (>1000 km), (ii) analyze the changes in the activity, abundance, diversity and composition of sediment bacterial communities in response to construction of the dams, and (iii) explore the key controls (either spatial distance or environmental heterogeneity) on the distribution of, and dam-induced variations in, bacterial communities.
Study Site and Sediment Sampling
The Lancang-Mekong River has its source in the Guyong-Pudigao Creek close to the foot of Mount Jifu on the Qinghai-Tibetan Plateau and discharges into the South China Sea. Passing through six countries, namely China, Myanmar, Laos, Thailand, Cambodia, and Vietnam, it is one of the most famous transboundary rivers in the world. The Upper Lancang-Mekong River is known as the Lancang River as it flows across China through, from upstream to downstream, Qinghai Province, the Tibet Autonomous Region, and Yunnan Province. The main channel of the Lancang River, from the Jiuzhou Gauging Station to the Chinese/Myanmar border, is 2160 km long and spans an elevation of 4580 m as it flows through narrow and deep valleys. The six hydropower dams on the main channel of the Lancang River that have already been constructed in the middle and lower reaches in Yunnan Province are the Gongguoqiao (GGQ), Xiaowan (XW), Manwan (MW), Dachaoshan (DCS), Nuozhadu (NZD), and Jinghong (JH) dams. Detailed information about these six dams is shown in Supplementary Table S1. A total of 25 sampling sites in the riparian zone along the mainstream of the Lancang River, spanning a distance of 1148 km from Yanjing (YJ) upstream to Mengla (ML) downstream, were sampled in February 2017 (Figure 1). During sampling, the water flow was less than 0.15 m s⁻¹ in the reservoir area and ranged from 0.60 to 1.20 m s⁻¹ in the river without impoundments. We divided these sampling sites into three groups. The first group comprised control sites where there are presently no dams (non-dam control). Ten of these, namely YJ, Gushui (GS), Yunling (YL), Wunonglong (WNL), Lidi (LD), Baijixun (BJX), Tuoba (TB), Huangdeng (HD), Dahuaqiao (DHQ), and Miaowei, were upstream sites, and three were downstream sites, namely Ganlanba1 (GLB1), Ganlanba2 (GLB2), and ML. The second group comprised dam-influenced sites in the reservoir areas where the water level fluctuates (dam-affected). Four of these, namely GGQ-E, XW-E, MW-E, and DCS-E, were middle-reach sites, and two were downstream sites, namely NZD-E and JH-E. The third group comprised dam-controlled sites below the dams and within 5 km of the corresponding dam-affected sites (dam-control). Four of these were middle-reach sites, namely GGQ-C, XW-C, MW-C, and DCS-C, and two were downstream sites, namely NZD-C and JH-C. Each pair of dam-controlled and dam-affected sites was in the same climatic zone. Of the 25 sampling sites, the 19 control sites, comprising 13 non-dam-controlled sites and 6 dam-controlled sites, were considered undisturbed and natural. All sampling sites were located near the river (within 2 m of the river water body). At each site, three replicate sediment samples were collected to a depth of 15 cm below the surface along a 10-m longitudinal transect of the river. Each individual sample was a mixture of five sediment samples collected randomly within a given plot with an area of 1 m² (Yang et al., 2017). A total of 75 sediment samples were collected and analyzed for chemical and DNA properties.
Chemical Property and Microbial Activity Analysis
Sediment moisture was measured gravimetrically after desiccation for 72 h at 105 °C. The pH and electrical conductivity (Ec) were determined on fresh sediment at a sediment to distilled-deionized water (ddH₂O) ratio of 1:5 (w/v) using a digital pH/Ec meter (Thermo Electron Corporation, Beverly, MA, United States). Total carbon (TC) and total organic carbon (TOC) contents were determined with a total organic carbon analyzer (Elementar Liqui TOC II, Frankfurt, Germany). The Kjeldahl digestion method was used to determine the total nitrogen (TN) concentration in sediment. Total phosphorus (TP) was determined by molybdenum-blue colorimetry after digestion with hydrofluoric acid and perchloric acid. The microbial respiration rate was evaluated from the CO₂ released per unit of time by microorganisms, following previously described methods (Hu et al., 2012). Dehydrogenase activity was determined by the reduction of 2,3,5-triphenyl tetrazolium chloride to 1,3,5-triphenyl formazan (TPF) according to Tu et al. (2011).
DNA Extraction and Quantitative Real-Time PCR (qPCR)
Genomic DNA was extracted from 0.5 g sediment using a FastDNA SPIN Kit for Soil (MP Biomedicals, CA, United States) following the manufacturer's instructions. We determined the 16S rRNA gene copy numbers of total bacteria in triplicate on a real-time PCR system (StepOne Plus, Applied Biosystems, Darmstadt, Germany) using the SYBR Green qPCR method with the primers P338-358F (5′-ACTCCTACGGGAGGCAGCAG-3′) and P534-518R (5′-ATTACCGCGGCTGCTGG-3′) (Yong et al., 2011). The 20 µL PCR mixture was composed of 10 µL of Master Mix (Roche Molecular Biochemicals, Mannheim, Germany), 1 µL of DNA template, 0.8 µL of each primer (20 µM), and 8.4 µL of sterile ddH₂O. The amplification temperature program and standard curve creation were as previously described (Chen et al., 2018).
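A hedged sketch of how gene copy numbers are recovered from a qPCR standard curve of the kind referenced above; the slope and intercept below are illustrative placeholders, not the calibration used in this study.

```python
import numpy as np

def copies_from_cq(cq, slope=-3.32, intercept=38.0, dilution=1.0):
    """Convert quantification cycles (Cq) to 16S rRNA gene copies.

    Standard curve: Cq = slope * log10(copies) + intercept, hence
    copies = 10 ** ((Cq - intercept) / slope). A slope of -3.32
    corresponds to 100% amplification efficiency.
    """
    return dilution * 10.0 ** ((np.asarray(cq, dtype=float) - intercept) / slope)

# Example: triplicate Cq readings for one sediment DNA extract.
print(copies_from_cq([14.8, 14.9, 15.1]).mean())
```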
High Throughput Sequencing
The prokaryotic universal primers F515 (5′-GTGCCAGCMGCCGCGGTAA-3′) and R806 (5′-GGACTACHVGGGTWTCTAAT-3′) (Antoniou et al., 2015), with a sample-specific 12 bp barcode added to F515, were used to amplify the V4 hypervariable region of the bacterial 16S rRNA gene. The PCR reaction mixture and amplification conditions have been described elsewhere. Each DNA sample was amplified in triplicate, and the PCR products from each sample were pooled, purified using a QIAquick PCR Purification Kit (QIAGEN, CA, United States), combined in equimolar ratios in a single tube, and sequenced on an Illumina HiSeq PE250 platform. The sequencing data were submitted to the NCBI Sequence Read Archive database under accession number SRP125317.
Bioinformatic and Statistical Analysis
Raw sequence data were analyzed using the Quantitative Insights Into Microbial Ecology (QIIME, version 1.8.0) pipeline (Caporaso et al., 2010). Reads with quality scores below 25 or lengths shorter than 200 bp were removed (Huse et al., 2007). The quality-filtered sequences were clustered into operational taxonomic units (OTUs) with UCLUST at a 97% similarity threshold using the USEARCH algorithm (Edgar, 2010). All singleton OTUs were removed during the USEARCH clustering process because, as some singletons represent artifacts or contaminants, they would have erroneously inflated the alpha diversity (Kunin et al., 2010). The most abundant sequence from each OTU was selected as the representative sequence, and the taxonomic identity of each phylotype was predicted based on similarity to the Greengenes database.
To analyze the alpha and beta diversity of sediment bacteria at the same sequencing depth, the data set was subsampled to 27 285 sequences per sample (Yang et al., 2017). Bacterial richness was calculated as the observed species in each sample, and phylogenetic diversity was estimated using Faith's index. The correlations between the alpha diversities and sediment geochemical properties were tested with linear regression analyses using SPSS 20.0.

FIGURE 1 | The 25 sampling sites along the Lancang River, including 13 non-dam-controlled sites, 6 dam-controlled sites, and 6 dam-affected sites.

The 75 sediment samples were analyzed with non-metric multidimensional scaling (NMDS) and hierarchical cluster analysis based on the Bray-Curtis distance to show the dissimilarity of sediment bacteria among the samples (beta diversity). The Mantel test was calculated as described by Chu et al. (2016) to identify which sediment geochemical properties were significantly correlated with the bacterial communities, and the results were input into canonical correspondence analysis (CCA) to visualize the relationships between sediment geochemical properties and bacterial communities. Distance-decay curves were plotted for bacterial community similarity against spatial distance and against environmental distance, and the slopes from linear regression analysis (using SPSS 20.0) were used as the turnover rates. The spatial distance was calculated as a matrix of pairwise distances along the river between all the sampling sites. Bacterial compositional similarity was calculated using the Bray-Curtis distance, and environmental distance was calculated from 10 sediment chemical properties for each sediment sample (Supplementary Table S2) based on the Euclidean distance in R. The bacterial community similarity, spatial distance, and environmental distance matrices were linearized using PASSAGE2. To compare the relative impacts of spatial distance and environmental factors on the bacterial community, a partial Mantel test was conducted based on the matrices of bacterial community similarity, spatial distance, and environmental distance. Similarity percentage (SIMPER) analysis was used to identify which OTUs contributed most to the overall dissimilarity in bacterial community composition between the dam-controlled and dam-affected sediments. The NMDS, Mantel test, CCA, ANOSIM, SIMPER, and partial Mantel test were conducted in R with the vegan package. Heat map analysis was performed using the heatmap package in R. To explore the effect of dam construction on the co-association properties of bacterial species, we used SpiecEasi to carry out network analysis (Cardona et al., 2016) in R, and the network properties were calculated with the igraph package. The topological features have been described elsewhere (Fan et al., 2017). One-way ANOVA, followed by Tukey's test, or a Student's t-test was conducted using SPSS 20.0 to determine whether there were significant differences in bacterial diversity and composition similarity.
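To make the distance-decay and Mantel workflow above concrete, here is a minimal Python sketch using scipy and scikit-bio as stand-ins for the R vegan calls; the array names are hypothetical, and the environmental and partial Mantel steps follow the same pattern.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import linregress
from skbio.stats.distance import DistanceMatrix, mantel

def distance_decay(otu_table, env_table, river_km):
    """otu_table: samples x OTUs; env_table: samples x chemistry; river_km: 1-D array."""
    bray = squareform(pdist(otu_table, metric="braycurtis"))
    env = squareform(pdist(env_table, metric="euclidean"))
    spatial = np.abs(river_km[:, None] - river_km[None, :])

    # Distance-decay slope: community similarity (1 - Bray-Curtis) vs distance.
    iu = np.triu_indices_from(bray, k=1)
    decay = linregress(spatial[iu], 1.0 - bray[iu])

    # Mantel test between the community and spatial distance matrices.
    r, p, n = mantel(DistanceMatrix(bray), DistanceMatrix(spatial),
                     method="pearson", permutations=999)
    return decay.slope, r, p
```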
RESULTS

Physicochemical Properties of Sediment Samples
Of all the measured physicochemical properties, sediment pH varied from 6.29 to 8.91, the TN concentration varied from 515 to 2719 mg kg⁻¹ dry weight (dw), and the C:N ratio varied from 0.92 to 40.1. Sediment pH and the C:N ratio exhibited gradually decreasing trends along the river, while the concentrations of TN and NO₃⁻ tended to increase from upstream to downstream. In the six-dam cascade area, sediment pH was lower at the dam-affected sites than at the dam-controlled sites, whereas sediment moisture and the concentrations of TN, TP, and NO₃⁻ were significantly higher at the dam-affected sites (Supplementary Table S2).
Abundance and Activity of Sediment Bacteria
The 16S rRNA gene copy numbers quantified by qPCR were used to represent the total bacterial abundance in the sediment samples. Across the 25 sampling sites from upstream to downstream along the river, the 16S rRNA gene copies in the sediments varied between 5.30 × 10¹¹ and 15.2 × 10¹¹ copies g⁻¹ dw and exhibited an increasing trend in the downstream direction (Figure 2A). In the six-dam cascade area, the mean 16S rRNA gene copy number at the six dam-affected sites was significantly lower than that at the six dam-controlled sites (Figure 2A). The microbial respiration rate and dehydrogenase activity were measured to indicate microbial activity in the sediment samples. As with total bacterial abundance, both the microbial respiration rate and the dehydrogenase activity increased from the upstream sites to the downstream sites along the river, and their mean values were significantly lower at the dam-affected sites than at the dam-controlled sites (Figures 2B,C).
Diversity and Distribution of Bacterial Communities
A total of 5 909 833 high-quality 16S rRNA gene V4 sequences, obtained from the 75 sediment samples collected at the 25 sites, were clustered into 40 343 bacterial OTUs at a 97% similarity level. Across the 19 control sites, which included the 13 non-dam-controlled and 6 dam-controlled sites, the bacterial alpha diversity, whether measured as observed species or phylogenetic diversity, tended to increase from the upstream sites to the downstream sites along the river (Supplementary Figure S1) and, as shown by linear regression analysis (Figures 3A,B), was significantly and positively correlated with the distance to site YJ (at km 0). The alpha diversities were significantly lower at the six dam-affected sites in the six-dam cascade area than at their corresponding control sites (Figures 3E,F).
The NMDS showed that the samples from all the control sites, including the non-dam-controlled and dam-controlled sites, formed two clusters: one containing the downstream sites (from NZD-C to ML) and the other containing the upper-middle-reach sites (from YJ to DCS-C). Samples from the dam-controlled and dam-affected sites were grouped separately by the NMDS analysis, showing that they had different bacterial community compositions (Figure 4A). Similarly, hierarchical cluster analysis showed that the upper-middle-reach control sites (from YJ to DCS-C) clustered together and then with the dam-affected sites, and were clearly separated from the downstream control sites (from NZD-C to ML) (Supplementary Figure S2). ANOSIM further confirmed that the sediment bacterial communities differed significantly between the upper-middle-reach sites and the downstream sites (r = 0.93, p < 0.001), and also between the dam-controlled sites and the dam-affected sites (r = 0.59, p < 0.001).
Relationships Between the Bacterial Communities and Sediment Geochemical Properties
Pearson correlation analysis showed that the bacterial alpha diversity across all the control sites was significantly correlated with sediment moisture, pH, TC, TN, the C:N ratio, and NO₃⁻, and that there were strong negative correlations between elevation and both the observed species and the phylogenetic diversity (Figure 3 and Supplementary Table S3). Significantly positive correlations were found between bacterial alpha diversity and bacterial abundance and activity (Supplementary Table S3). To identify the factors that influenced the variation in the bacterial communities among all the control sites and among the six-dam cascade area sites, the samples from all the control sites (non-dam-controlled and dam-controlled sites considered together) and from the six-dam cascade area (dam-controlled and dam-affected sites considered together) were analyzed separately using the Mantel test and CCA. The Mantel test showed that sediment TN had the highest correlation with community composition, both for all the control sites (r = 0.62, p = 0.001) and for the six-dam cascade area (r = 0.32, p = 0.001) (Table 1), while the NMDS results showed that the bacterial community was distributed along a TN gradient (Figure 4A). Elevation and some environmental variables such as TOC, NO₃⁻, moisture, and pH were also significantly correlated with the compositions of the bacterial communities at both the control sites and the six-dam cascade area sites (Table 1). The factors significantly correlated with bacterial community composition were selected for CCA, which showed that sediment TN, with the longest arrow, had the strongest effect on bacterial community composition at both the control sites and the six-dam cascade area sites (Figures 4B,C). For the control sites, sediment TN, NO₃⁻, TOC, pH, and elevation were important in separating the downstream sites from the upper-middle-reach sites (Figure 4B). In the six-dam cascade area, sediment TN, TP, moisture, and TOC contributed positively, while sediment pH contributed negatively, to separating the dam-affected sites from the dam-controlled sites (Figure 4C).
Relative Influence of Spatial Distance and Environmental Distance on Bacterial Communities
The 25 sampling sites spanned a large spatial distance: the distance between all the control sites from YJ to ML was 1148 km, the distance between the dam-controlled sites from GGQ-C to JH-C was 597 km, and the distance between the dam-affected sites GGQ-E and JH-E was 610 km (Supplementary Table S2). Plots of community similarity versus spatial distance showed distance-decay relationships for all control, dam-controlled, and dam-affected sites (Figures 5A,C). The environmental distance was also negatively correlated with bacterial community similarity (Figures 5B,D). These distance-decay curves suggest that both spatial distance and environmental factors may have influenced the distribution of the bacterial communities in sediments along the Lancang River. The spatial distance (with a higher R² value) was more strongly correlated with the bacterial community dissimilarity than the environmental distance for all the control and dam-controlled sites. In contrast, the bacterial community composition was more strongly correlated with the environmental distance (R² = 0.39) than with the spatial distance (R² = 0.26) for the dam-affected sites. These results were supported by the partial Mantel tests shown in Table 2, which gave higher r-values for spatial distance (0.60 and 0.64) than for environmental distance (0.44 and 0.20) for all the control and dam-controlled sites; the r-value for the dam-affected sites was, however, higher for environmental distance (0.55) than for spatial distance (0.39).

FIGURE 3 | Correlation of bacterial alpha diversity with distance (A,B) and elevation (C,D) across all control sites, and differences in alpha diversity between dam-controlled and dam-affected sites (E,F). The shaded areas show the 95% confidence interval of the fit; * and ** represent significant differences between dam-controlled and dam-affected sites at p ≤ 0.05 and p ≤ 0.01, respectively, according to Student's t-test.
Variations in the Composition of Bacterial Communities
The dominant phyla included Proteobacteria, Firmicutes, Bacteroidetes, Acidobacteria, Actinobacteria, Chloroflexi, Verrucomicrobia, and Planctomycetes, which together accounted for more than 90% of the total sequence data (Figure 6A). Across the 19 undisturbed natural sites, including the 13 non-dam-controlled and 6 dam-controlled sites, Acidobacteria, Bacteroidetes, and Verrucomicrobia tended to increase from the upstream sites to the downstream sites. Proteobacteria and Firmicutes were more abundant, but Chloroflexi and Planctomycetes were less abundant, at most of the upper-middle-reach sites (from YJ to DCS-C) than at the downstream sites (from NZD-C to ML). The relative abundances of the dominant phyla at the dam-controlled and dam-affected sites are shown in Supplementary Table S4. Acidobacteria, Verrucomicrobia, and Chloroflexi were more abundant, and Proteobacteria and Firmicutes were less abundant, at the dam-affected sites than at their corresponding dam-controlled sites for five of the six paired dam-controlled and dam-affected sites (Supplementary Table S4). The mean relative abundances of Acidobacteria and Verrucomicrobia were significantly higher, and the mean relative abundance of Proteobacteria was significantly lower, at the dam-affected sites than at the dam-controlled sites (Figure 6B). As shown in Supplementary Table S5, sediment TN, the C:N ratio, TOC, and NO₃⁻ were significantly and positively correlated with the relative abundances of Acidobacteria, Bacteroidetes, Chloroflexi, and Planctomycetes, implying that nutrients selectively enriched bacteria from these phyla. Sediment pH was negatively correlated with the relative abundances of Acidobacteria, Proteobacteria, and Firmicutes, but positively correlated with those of Chloroflexi and Planctomycetes (Supplementary Table S5). To visualize the effects of dam-induced water level fluctuations on the genus-level composition of the bacterial communities in riparian sediments, we calculated the logarithmic ratios of the relative abundances at the dam-affected sites to those at the corresponding dam-controlled sites for the top 29 genera (those with relative abundances greater than 0.1%). Positive values represent higher abundances, and negative values lower abundances, at the dam-affected sites than at their corresponding dam-controlled sites. As shown in Figure 7, five genera, namely Candidatus Solibacter, Ralstonia, Variovorax, Rhodoplanes, and DA101, had higher relative abundances at all six dam-affected sites, and dam-enriched Anaerolinea, Candidatus Koribacter, Dechloromonas, Sulfuritalea, Flavisolibacter, and Geobacter were also found in at least four of the six pairs of dam-controlled and dam-affected sites. The relative abundances of nine genera, namely Bacillus, Limnobacter, Mycoplana, Perlucidibaca, Acinetobacter, Rhodobacter, Nitrospira, Novosphingobium, and Luteolibacter, were lower at almost all the dam-affected sites, apart from those at dams JH and MW. The dam-induced increases and decreases in most genera were more pronounced at dams MW and GGQ, respectively, than at the other dams (Figure 7).
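A small sketch of the log abundance-ratio computation behind Figure 7; `affected` and `control` are hypothetical pandas Series of mean relative abundances indexed by genus, and the pseudocount `eps` is an assumption added to guard against zeros.

```python
import numpy as np
import pandas as pd

def log_abundance_ratio(affected: pd.Series, control: pd.Series,
                        eps: float = 1e-6) -> pd.Series:
    """log10 ratio of dam-affected to dam-controlled relative abundance.

    Positive values: genus more abundant at the dam-affected site;
    negative values: more abundant at the dam-controlled site.
    """
    return np.log10((affected + eps) / (control + eps))
```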
The results of the SIMPER analysis show that, of the top 20 OTUs that each accounted for more than 0.5% of the community dissimilarity, 12 belonged to the phylum Proteobacteria (Figure 8). With higher relative abundances, OTU1, OTU2, OTU3, OTU5, and OTU15 made greater contributions (>2% of community dissimilarity) to the bacterial community dissimilarity than the other OTUs (Figure 8B). Based on their relative abundances, the top 20 OTUs formed two clusters in the heatmap. Cluster 1, with 8 OTUs, had the highest abundances at most dam-affected sites, while the abundances of the 12 OTUs in cluster 2 were lower at most of the dam-affected sites than at the dam-controlled sites (Figure 8A). Similar results are also presented in Figure 8B.

FIGURE 5 | Distance-decay relationships between the bacterial community similarity, spatial distance, and environmental distance in all the control sediments (A,B) and the six-dam cascade area sites (C,D). The slopes were compared pairwise between the dam-controlled and the dam-affected sediments with permutation tests; ** represents significant differences at p ≤ 0.01.

TABLE 2 | Partial Mantel tests comparing the relative impacts of spatial distance (Spa) and environmental distance (Env) on the bacterial community (Spa is the geographic distance matrix and Env is the environmental heterogeneity matrix calculated with the Euclidean distance). Columns: Groups, Factors, r (correlation), p (significance).
Network Topological Features of Dam-Controlled and Dam-Affected Sediments
The SpiecEasi network topological structures (Figure 9A) show that the dam-affected sediments had a greater number of clusters but lower density, transitivity, number of edges, degree, betweenness, and closeness than the dam-controlled sediments. The nodes were classified as peripheral, intermediate, or central according to their betweenness values in the total network.
The central and intermediate nodes were less abundant, and the peripheral nodes were more abundant, in the dam-affected sediments than in the dam-controlled sediments ( Figure 9B). Examination of the network properties of the dominant bacterial phyla showed that Firmicutes and Proteobacteria, with higher relative abundances, had very low degrees and betweenness, but that Nitrospirae, whose relative abundance was less than 2%, had the highest degree and betweenness (Supplementary Tables S4, S6). Apart from the value for Bacteroidetes, the betweenness values for all phyla were significantly lower in the dam-affected sediments than in the dam-controlled sediments (Supplementary Table S6), indicating the lower connectivity and stability of bacterial community in the dam-affected sediments.
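The topological features compared above can be reproduced on any association network. A minimal sketch with networkx in place of the R igraph workflow; the edge list is a hypothetical thresholded SpiecEasi output, not the study's network.

```python
import networkx as nx

def topology_summary(edges):
    """edges: iterable of (otu_a, otu_b) association pairs."""
    G = nx.Graph()
    G.add_edges_from(edges)
    n = G.number_of_nodes()
    betweenness = nx.betweenness_centrality(G)
    return {
        "nodes": n,
        "edges": G.number_of_edges(),
        "density": nx.density(G),
        "transitivity": nx.transitivity(G),   # global clustering coefficient
        "mean_degree": 2 * G.number_of_edges() / n,
        "mean_betweenness": sum(betweenness.values()) / n,
        "clusters": nx.number_connected_components(G),
    }
```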
DISCUSSION

Longitudinal Distribution of Bacterial Communities in All Control Sites
A river transports surface riparian sediment, especially during the flood season, as it flows, leading to dispersal of sediment particles and microorganisms along its length (Wu et al., 2013; Staley et al., 2016). Based on the River Continuum Concept (Vannote et al., 1980), we propose that the bacterial assemblages in riparian sediments also change gradually along the river. In this study, the alpha diversity tended to increase gradually from upstream to downstream (Figure 3 and Supplementary Figure S1), which is consistent with changes in fungal communities observed in sediment from the Changjiang River (Wu et al., 2013). However, this finding contradicts the results of previous studies in the Thames River (Gladyshev et al., 2015) and in the Danube River (Savio et al., 2015), which found that the bacterial diversity decreased from the headwaters to the river mouth. The bacterial abundance and activity gradually increased toward the downstream in this study (Figure 2). Similarly, Ochs et al. (2010) reported that the high bacterial production in the Lower Mississippi was correlated with the increased sediment load as the large river approached its mouth. In another study, the total bacterial number exhibited a significantly increasing trend along the complete length of the Danube River, but the bacterial activity did not follow a continuous trend because of the input of sewage from large cities in the midstream (Velimirov et al., 2011). Previous studies on large rivers such as the Danube have reported that, in general, the bacterial community compositions developed gradually, based on the data of the Joint Danube Survey 2001 and 2007 (Winter et al., 2007; Kirschner et al., 2008; Savio et al., 2015). In contrast, the compositions of the bacterial communities at the upper-middle-reach sites differed significantly from those at the downstream sites in this study (Figure 4 and Supplementary Figure S2). Wu et al. (2013) also reported a similar lack of gradual change in the composition of microbial communities in sediment collected from upstream to downstream in the Changjiang River, which they attributed to uplift of the Qinghai-Tibetan Plateau. Nutrients such as TN, TOC, and NO₃⁻ have been demonstrated to be important environmental variables affecting the bacterial community composition in large rivers (Kirschner et al., 2009; Gladyshev et al., 2015). In this study, the noticeably higher contents of TN, TOC, and NO₃⁻ in the downstream reaches (Supplementary Table S2) promoted bacterial communities that differed from those in the upper and middle reaches. The pH has also been shown to have an important role in shaping the composition of bacterial communities (Xiong et al., 2012; Shen et al., 2013). Acidobacteria, which dominates in soils with low pH values (Griffiths et al., 2011), increased as the pH decreased from upstream to downstream while, in contrast, the relative abundance of Proteobacteria decreased as the pH decreased (Supplementary Table S5). Similarly strong and significant correlations have been previously reported between the relative abundances of these two phyla and pH (Shen et al., 2013).

FIGURE 6 | Relative abundances of the dominant bacterial phyla at all control sites along the Lancang River (A) and the differences in the dominant bacterial phyla between the dam-controlled and the dam-affected sediments (B). Data shown in (B) are means of 18 replicates (6 dam-controlled or dam-affected sites, each with 3 replicates). * and ** represent significant differences between dam-controlled and dam-affected sites at p ≤ 0.05 and p ≤ 0.01, respectively, according to Student's t-test.
Even though some environmental factors were strong predictors of community composition, we found significant distance-decay relationships for bacterial community composition with both spatial and environmental distances (Figure 5), and the distance-decay turnover rate was greater for spatial distance than for environmental distance, as shown by the partial Mantel test (Table 2). This suggests that the constraints of spatial dispersal may exert more control on the distribution patterns of bacteria in riparian soils from all the control sites than environmental heterogeneity, as reported in other studies (Wu et al., 2013; Chu et al., 2016; Fan et al., 2017).

FIGURE 7 | Variation in the relative abundance of the top 29 genera (relative abundance > 0.1%) between the sediments at the dam-controlled and dam-affected sites.
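The partial Mantel procedure referenced above additionally controls for a third distance matrix; as a simpler illustration of the underlying idea, a plain (non-partial) Mantel test between a community-distance matrix and a geographic- or environmental-distance matrix can be sketched in Python as follows. The function name, permutation count and inputs are illustrative assumptions, not the authors' code.

import numpy as np
from scipy.spatial.distance import squareform

def mantel_test(dist_a, dist_b, permutations=999, seed=None):
    # Correlate the upper triangles of two square distance matrices and
    # build a null distribution by permuting rows/columns of the second.
    rng = np.random.default_rng(seed)
    a = squareform(dist_a, checks=False)  # condensed upper triangle
    b = squareform(dist_b, checks=False)
    r_obs = np.corrcoef(a, b)[0, 1]
    n = dist_a.shape[0]
    count = 0
    for _ in range(permutations):
        perm = rng.permutation(n)
        b_perm = squareform(dist_b[np.ix_(perm, perm)], checks=False)
        if abs(np.corrcoef(a, b_perm)[0, 1]) >= abs(r_obs):
            count += 1
    p_value = (count + 1) / (permutations + 1)
    return r_obs, p_value

In practice, dist_a could hold Bray-Curtis dissimilarities between sites and dist_b the pairwise geographic (or environmental) distances; the permutation p-value indicates whether the observed distance-decay correlation is stronger than expected by chance.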
Impacts of Dam Construction on Bacterial Communities in Riparian Sediments
The construction and operation of dams can alter river hydrological regimes and raise water levels in reservoir regions, which then changes the sediment physicochemical properties and influences ecological processes in riparian zones. In this study, the changes in sediment physicochemical properties caused by damming (Supplementary Table S2) may lead to the loss of some sensitive species and the enrichment of adaptable species, a process known as species sorting (Ruiz-Gonzalez et al., 2013). Also, fluctuations in inundation induced by dam construction can accelerate the exchange of energy and materials between a river and its flooded area (New and Xie, 2008; Li et al., 2012), which may contribute to species dispersal along the riparian zone and increase the invasion of introduced or exotic species from upstream areas. Such allochthonous inputs can be described by the so-called mass effect, where the dispersal of organisms exceeds the rate of local extinction (Ruiz-Gonzalez et al., 2013). The decreases in bacterial diversity and abundance in dam-affected riparian areas (Figures 2, 3) suggest that species sorting has more influence than mass effects in shaping microbial communities, as also proposed by Savio et al. (2015).
We also found that the composition of bacterial communities differed significantly between the dam-affected and dam-controlled sites (Figure 4), as reported in previous studies (Ruiz-Gonzalez et al., 2013; Yan et al., 2015). Results from the distance-decay relationship analysis and the partial Mantel test confirmed that the sediment bacterial communities were mainly shaped by spatial distance at the dam-controlled sites, which was also the case for the control sites in this study (Figure 5 and Table 2) and for some other natural areas, such as the Tibetan Plateau (Chu et al., 2016; Yang et al., 2017). In contrast, we found that the contribution of environmental distance became more important than that of spatial distance at the dam-affected sites, which further confirms that environmental heterogeneity induced by damming was an important influence on the composition of bacterial communities in riparian sediments. Decreases in the contribution of spatial distance, and increases in the contribution of environmental factors, to bacterial communities have also been reported in previous studies of other anthropogenic activities such as fertilization.
The responses of the dominant bacterial phyla to dam construction, such as the higher relative abundances of Acidobacteria and Verrucomicrobia in dam-affected sediments (Figure 6), might be linked to the lower sediment pH and higher sediment moisture in this area (Supplementary Table S2), respectively; it has been reported that Acidobacteria prefers low-pH environments (Griffiths et al., 2011) and that the abundance of Verrucomicrobia is positively correlated with sediment moisture (Buckley and Schmidt, 2001). At the genus level, it is noteworthy that the relative abundances of some functional bacteria were significantly altered, including Nitrospira for nitrite oxidation (Lücker and Delong, 2010), Citrobacter for nitrate reduction (Huang and Tseng, 2001), Sulfuritalea and Limnobacter for sulfate oxidation (Lu et al., 2011; Watanabe et al., 2014), and Geobacter for iron reduction (Snoeyenbos-West et al., 2000), implying that dam construction may affect microbially mediated elemental biogeochemical cycling in riparian zones. Importantly, the relative abundance of the genus Ralstonia, a devastating, soil-borne plant pathogen with a global distribution and an unusually wide host range (Salanoubat et al., 2002), increased significantly at the dam-affected sites in this study (Figure 7); increases of this genus might pose a threat to the riparian vegetation and to the health of the river basin ecosystem.
At the OTU level, several OTUs with high relative abundances contributed most to the differences in bacterial communities between the dam-controlled and dam-affected sediments (Figure 8), as also reported by Chu et al. (2016). Network analysis has been used in microbial ecological studies to visualize the co-association properties of bacterial species and to predict the possible ecological processes regulated by bacterial communities (Fuhrman, 2009; Cardona et al., 2016). In this study, there were fewer co-occurrences across different OTUs, and less compact topologies with lower density and transitivity occurred more frequently, at the dam-affected sites than at the dam-controlled sites (Figure 9). This suggests that dam construction might obstruct the interactions and coupling among bacterial species and may also affect the assembly and stability of communities in the riparian zone (Fuhrman, 2009). The low level of interactions in dam-affected sediment might be associated with the low bacterial diversity and the environmental filtering effect induced by damming (Figure 3), as reported by Ding et al. (2015). Although Nitrospirae had a lower relative abundance than the other phyla, it maintained greater connectivity (Supplementary Table S6), perhaps because bacterial species in this phylum need to cooperate with other bacteria when mediating N cycling (Fan et al., 2017). In contrast, Firmicutes, which was more abundant, had little, or even no, connectivity (Supplementary Table S6), which might be explained by their strong adaptation to riparian habitats (Johnston and Leff, 2015), such that they do not need to interact with other species.
CONCLUSION
In this contribution, we have reported the longitudinal distribution of bacterial communities in riparian sediments along the Lancang River and their responses to dam construction. We found that bacterial diversity increased gradually along the river from upstream to downstream and that the community compositions differed significantly between the upper-middle-reach sites and the downstream sites. This biogeographic distribution might be mainly driven by the limitation of microbial dispersal or drift, which is mostly influenced by the spatial distance between sites. Dam construction significantly reduced bacterial diversity and caused shifts in community composition in riparian sediments. In contrast to the dam-controlled sites, environmental heterogeneity made a greater contribution than spatial distance to bacterial community composition at the dam-affected sites. The interactions among different bacterial species also decreased because of the presence of dams. This study reveals the strong impacts of dam construction on bacterial communities in riparian sediments, which challenges the sustainability of river ecosystems as hydropower development expands. However, this study focused only on the general changes in total bacterial communities captured by 16S rRNA gene amplicon sequencing; information on the functioning of bacterial assemblages and their potential responses to damming is still missing. More targeted research should be conducted to better understand the functional responses of sediment bacteria in riparian zones to dam construction.
AUTHOR CONTRIBUTIONS
JC, PW, and CW conceived and designed the research. JC, XW, and LM performed the research. LM, SL, and QY analyzed the data. JC, PW, and XW contributed to the writing of the manuscript.
ACKNOWLEDGMENTS
We wish to thank Prof. Defu Tian and Prof. Shijian Xu from Changsha University of Science & Technology for their assistance in field sampling.
Enhancement of growth parameters and yield components in eggplant using antagonism of Trichoderma spp. against fusarium wilt disease
Eggplant is one of the important economic vegetable crops, and it is attacked by several serious diseases such as wilt. Fusarium oxysporum f. sp. melongenae was isolated from a naturally occurring epidemic of wilt in eggplant plants grown in New Valley governorate. In this study, the mycoparasitic inhibitory effects of five Trichoderma species (Trichoderma spirale, Trichoderma hamatum, Trichoderma polysoprium, Trichoderma harzianum and Trichoderma viride) on the growth of the causal agent of eggplant Fusarium wilt were investigated by dual culture under laboratory conditions. In this step, the maximum inhibitory effect was caused by T. viride (isolate TVM-5) and T. hamatum (isolate THM-2), while T. spirale (TSM-1) showed the weakest effect. In the pot experiment, the obtained data showed that all Trichoderma species significantly reduced the area under the wilt progress curve (AUWPC) caused by F. oxysporum f. sp. melongenae. T. viride and T. hamatum recorded the highest reduction of the AUWPC, which decreased from 1125.33 in the control to 244.0 and 325.33, respectively. Under field conditions, these treatments significantly reduced the AUWPC and increased all tested growth parameters (plant height, no. of branches plant -1) and fruit yield components (number of fruits plant -1, fruits yield plant -1, fruit weight, no. of fruit Kg -1, fruit length, fruit diameter and fruits yield fed. -1) compared to the control during the growing seasons (2011-2012 and 2012-2013). T. viride and T. hamatum were the best treatments in both reducing disease severity and increasing growth parameters and fruit yield components.
INTRODUCTION
The eggplant, aubergine or brinjal (Solanum melongena L.), of the family Solanaceae, is grown in the subtropical and tropical regions of the world. It is one of the most common, highly productive and popular vegetable crops grown in Egypt, and it is quite popular as the poor man's crop (Gargi and Kalita, 2012). The unripe fruit of eggplant is primarily used as a cooking vegetable for various dishes in many countries of the world. The eggplant is also reported to possess medicinal properties: various plant parts are used for curing ailments such as diabetes, cholera, bronchitis, dysuria, dysentery, otitis, toothache, skin infections, asthenia and haemorrhoids, and it is also ascribed narcotic, anti-asthmatic and antirheumatic properties (Daunay et al., 2003).
The major constraint in the production of eggplant is Fusarium wilt disease. Fusarium oxysporum f. sp. melongenae (Fomg) is the most destructive agent of Fusarium wilt of eggplant. This soil-borne fungus invades the vascular bundles and causes severe wilting and death of the above-ground parts of plants by blocking the xylem transport system (Altinok, 2005). It is extremely difficult to control soil-borne fungi with conventional strategies such as the use of synthetic fungicides, since their spores are able to survive for many years in the soil. Biological control strategies for this pathogen should therefore be carefully selected and handled in an eco-friendly way instead of using chemical fungicides.
The application of microorganisms as biocontrol agents is important, since they may increase beneficial microbial activity that persists for a long period of time. Trichoderma spp. are considered potential biocontrol and growth-promoting agents for many crop plants (Verma et al., 2007; Bai et al., 2008; Savazzini et al., 2009). Competition with pathogens, parasitism and the production of antifungal compounds are the most important mechanisms of biocontrol activity (Verma et al., 2007; Savazzini et al., 2009). Trichoderma populations can be established relatively easily in different types of soil and can persist at detectable levels for months.
In the above context, the present study was undertaken to isolate the Fusarium wilt pathogen from the economically important eggplant crop and to evaluate the potential of isolated indigenous strains of Trichoderma spp., applied by soil drench, to control the disease under pot and field conditions. The effect of Trichoderma spp. on growth parameters and fruit yield attributes was also evaluated under field conditions.
Pathogen isolation
Eggplant plants showing wilt symptoms were collected at the flowering stage from different fields in the El-Kharga Oases, New Valley, Egypt. Small pieces of diseased specimens were grown on Fusarium selective medium (Nash and Snyder, 1965). After purification, isolates were identified according to morphological characteristics with the help of a standard key (Nelson et al., 1983). Pathogenicity tests were conducted on potted plants and, after re-isolation, a pathogenic isolate of F. oxysporum was selected for further studies.
Sources of Trichoderma isolates
Isolation of Trichoderma spp. from soil was done following the technique used by Rifai (1969). For this purpose, soil samples were collected from the potato root rhizosphere (20 cm deep) of different fields. Twenty grams of each soil sample were gently mixed with 500 ml of distilled water containing 0.2% citric acid; 5 ml of the prepared suspension were added to Petri plates containing 15 ml of water agar at 50°C and shaken to mix properly. After solidification, 5 mm plugs of these cultures were transferred into Petri plates containing Davet selective medium (Davet, 1979) and incubated at 25°C for 7 days. After proper growth, isolates were purified and identified according to standard keys (Bissett, 1991).
Inhibitory mechanisms of Trichoderma species against F. oxysporum mycelial growth
Five mm plugs of seven-day-old cultures of F. oxysporum and Trichoderma were placed against each other on plates containing PDA. In the control, PDA plugs were used instead of F. oxysporum plugs. Plates were incubated at 25°C and checked daily for reactions such as growth speed. The radial growth of the pathogen was measured daily and the data were recorded. Inhibition in the laboratory tests was calculated by the following formula:

% Inhibition = [(diameter of colony growth in control - diameter of colony growth in treatment) / diameter of colony growth in control] × 100
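For concreteness, the formula above can be wrapped in a small Python helper; the function name and the example diameters below are hypothetical, not values from the paper:

def percent_inhibition(control_diameter_mm: float, treatment_diameter_mm: float) -> float:
    # Percent inhibition of radial mycelial growth in a dual-culture assay.
    # Both arguments are colony diameters (e.g., in mm) measured on the same day.
    if control_diameter_mm <= 0:
        raise ValueError("control diameter must be positive")
    return (control_diameter_mm - treatment_diameter_mm) / control_diameter_mm * 100.0

# Hypothetical example: control colony 80 mm, colony paired with an antagonist 22 mm
print(percent_inhibition(80.0, 22.0))  # 72.5 % inhibition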
Preparation of inocula of F. oxysporum and Trichoderma isolates
In order to prepare the F. oxysporum inoculum, Erlenmeyer flasks containing 100 g of barley and 100 ml of sterilized water were autoclaved at 121°C for 1 h on three successive days. After cooling, about 5-7 small plugs of a seven-day-old culture of F. oxysporum were dropped into each flask under sterile conditions. The flasks were kept at 25°C for 4 weeks. The colonized grains were then transferred into paper pockets, dried and ground. Fourteen grams of the prepared powder were used to infest 1 Kg of soil (Frommel et al., 1991). For the preparation of the Trichoderma inocula, moistened wheat bran was poured into Erlenmeyer flasks, which were autoclaved at 121°C for 1 h on three successive days. The substrate was then inoculated with a homogenized spore-plus-mycelium suspension of a seven-day-old culture of the Trichoderma isolates under aseptic conditions. The flasks were incubated at 27°C for 14 days. Ten grams of this inoculum (10^5 to 10^7 CFU) were added to 1 Kg of pot soil (Ommati and Zaker, 2012). Surface-sterilized eggplant transplants (cv. Black Beauty) were grown in pots. All of the Trichoderma isolates that performed well in the laboratory tests were used in this experiment. Five seedlings per treatment were sown in plastic pots (30 cm in diameter), and four pots were used for each treatment as replicates. In the control treatment, eggplant seedlings were planted in infested soil only, and the area under the wilt progress curve was recorded.
Disease assessments
Wilt severity was estimated at 10-day intervals for 60 days after transplanting according to Abdou et al. (2001), using a 0-5 rating scale based on leaf yellowing, where 0 = healthy, 1 = one leaf yellowing, 2 = more than one leaf yellowing, 3 = one wilted leaf, 4 = more than one wilted leaf, and 5 = completely dead plant. The disease severity index (DSI) described by Liu et al. (1995) was adapted and calculated as follows:

DSI (%) = [Σd / (d_max × n)] × 100

where d is the disease rating of each plant, d_max is the maximum disease rating, and n is the total number of plants/samples examined in each replicate.
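A minimal sketch of the DSI calculation, assuming the reconstructed formula above; the function name and the example ratings are illustrative:

from typing import Sequence

def disease_severity_index(ratings: Sequence[int], d_max: int = 5) -> float:
    # DSI (%) for one replicate: sum of per-plant ratings divided by the
    # maximum possible total (d_max times the number of plants), times 100.
    n = len(ratings)
    if n == 0:
        raise ValueError("need at least one rated plant")
    return sum(ratings) / (d_max * n) * 100.0

# Hypothetical replicate of five plants rated on the 0-5 scale
print(disease_severity_index([0, 1, 3, 2, 5]))  # 44.0 %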
Field experiments
Field experiments were carried out at the New Valley Agriculture Research Station Farm, New Valley governorate, during the 2011-2012 and 2012-2013 seasons, to evaluate the efficiency of the tested Trichoderma spp. as biocontrol agents for controlling wilt disease of eggplant plants, as well as their effect on growth parameters and fruit yield components. The chosen field test area was naturally infested with F. oxysporum. The experimental design was a randomized complete block with four replicates. The experimental unit area was 15 m2 (5 × 3 m). Each unit included three rows; each row was 5 m in length and 1 m in width. Soil treatments were applied as 150 g of the prepared formulation per plot at planting time. Eggplant seedlings (cv. Black Beauty) were transplanted into the field on 1 October in both seasons at a rate of 10 seedlings per row, one seedling per hill, with 50 cm between hills. Untreated seedlings were used as the control. The NPK mineral fertilizers were applied at the dose recommended by the Ministry of Agriculture and Land Reclamation. Disease severity was recorded every 30 days for 4 months, and the mean area under the disease progress curve (AUDPC) was calculated for each replicate. Plant height, number of branches, number of fruits plant -1, fruits yield plant -1 (Kg), fruit weight (gm), number of fruits Kg -1, fruit length (cm), fruit diameter (cm) and the estimated fruits yield fed. -1 (ton) were recorded at the end of the growing season.
Statistical analysis
All experiments were performed twice. Analyses of variance were done using the MSTAT-C program, version 2.10 (1991). The least significant difference (LSD) was calculated at P ≤ 0.05 according to Gomez and Gomez (1984).
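The paper's analysis used MSTAT-C; purely as an illustration, an equivalent one-way ANOVA with a Fisher's LSD threshold can be computed in Python. The function name and the balanced-design assumption (equal replicates per treatment) are ours, not the authors':

import numpy as np
from scipy import stats

def one_way_anova_with_lsd(groups, alpha=0.05):
    # `groups` is a list of equal-length 1-D arrays, one per treatment.
    # Returns the F statistic, its p-value, and the LSD threshold for
    # declaring two treatment means different at level `alpha`.
    f_stat, p_value = stats.f_oneway(*groups)
    k, r = len(groups), len(groups[0])          # treatments, replicates
    df_error = k * (r - 1)
    # Error mean square = pooled within-treatment sum of squares / df.
    mse = sum(((np.asarray(g) - np.mean(g)) ** 2).sum() for g in groups) / df_error
    t_crit = stats.t.ppf(1 - alpha / 2, df_error)
    lsd = t_crit * np.sqrt(2 * mse / r)
    return f_stat, p_value, lsd

Duncan's multiple range test, used elsewhere in this document, pools the same error mean square but compares means against studentized-range thresholds rather than a single t value.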
Effect of Trichoderma species on wilt disease under field conditions
Effects of Trichoderma spp. on wilt disease incidence, some growth parameters, and yield components of eggplant plants under field conditions in New Valley governorate were studied.
Effect of Trichoderma species on area under disease progress curve
Data in Figure 4 indicate that all Trichoderma species exhibited significant protection against wilt disease compared with the control in both growing seasons (2011-2012 and 2012-2013).
Effect on growth parameters
All the tested Trichoderma species significantly increased the growth parameters, that is, plant height and number of branches per plant, compared with the control treatment in both growing seasons; the highest values (20.22 and 19.36 in the first and second growing seasons, respectively) were recorded among the treatments, whereas T. polysoprium recorded the lowest values in both growing seasons.
Effect on yield components
Data in Table 2 revealed differences in the yield components of eggplant plants, that is, number of fruits plant -1, fruits yield plant -1, fruit weight, no. of fruit Kg -1, fruit length, fruit diameter and the estimated fruits yield fed. -1, between the Trichoderma treatments and the control. The best treatment gave no. of fruit Kg -1 of 3.45 and 3.55, fruit length of 13.44 and 13.24 cm, fruit diameter of 12.67 and 11.97 cm and total yield fed. -1 of 26.59 and 25.49 ton, compared with 9.57 and 8.75, 2.29 and 2.08, 239.6 and 241.25, 4.17 and 4.15, 7.28 and 7.05, 7.1 and 6.80, and 12.65 and 11.25 for the respective components in the control treatment in both seasons. T. hamatum came after T. viride in increasing all fruit parameters in both growing seasons, while T. harzianum and T. spirale recorded the lowest increases in all fruit parameters during both growing seasons. Comparable observations on Fusarium wilt of eggplant were reported by Altinok (2005) and Baysal et al. (2013). The management of the disease is difficult owing to the long saprophytic survival ability of the pathogen in soil (Dey, 2005). Control of plant diseases by chemicals can be spectacular, but it is relatively short-term; moreover, the accumulation of harmful chemical residues sometimes causes serious ecological problems. In recent years, the increasing use of potentially hazardous pesticides and fungicides in agriculture has become a growing concern of both environmentalists and public health authorities.
Biological methods can be economical, long-lasting, free from residual side effects and safe for human and animal health. The main purpose of the biological control of plant diseases is to suppress the inoculum load of the target pathogen to a level that would not cause economic loss in a crop. Fungal species belonging to the genus Trichoderma occur worldwide and are easily isolated from soil. The potential of Trichoderma species as biocontrol agents against various plant diseases has been reported by several workers (Verma et al., 2007; El-Nagdi and Abd-El-Khair, 2008; Bai et al., 2008; Savazzini et al., 2009; Joshi et al., 2010; Sundaramoorthy and Balabaskar, 2013). In this study, five Trichoderma species, namely T. spirale, T. hamatum, T. polysoprium, T. harzianum and T. viride, were evaluated in vitro and in vivo. The obtained data indicated that all Trichoderma spp. suppressed the mycelial growth of F. oxysporum f. sp. melongenae in vitro. T. viride (TVM-5) and T. harzianum (THM-4) were found to be the most potent antagonists against the pathogen.
The inhibition of Fusarium oxysporum f. sp. melongenae by Trichoderma species could probably be due to the secretion of extracellular cell-wall-degrading enzymes such as chitinase, β-1,3-glucanase, cellulase and lectin, which help mycoparasites in the colonization of their host. The inhibition of the pathogen may also be attributed to the production of secondary metabolites by the antagonists, such as glioviridin, viridin and gliotoxin (Kamlesh and Gurjar, 2002; Muhammad and Amusa, 2003; Rehman et al., 2013). Among the extracellular products of Trichoderma, the chitinase enzyme is considered the most effective component against pathogenic fungi; chitinase enzymes degrade fungal cell walls, which are composed of chitin (Lorito et al., 1996).
Also, all the tested Trichoderma species significantly reduced the area under the wilt progress curve (AUWPC) under pot and field conditions when compared with the control. T. viride and T. hamatum recorded the highest reduction of AUWPC compared with the other Trichoderma species. Trichoderma spp. are now the most common fungal biological control agents and have been extensively researched and deployed throughout the world. The primary mechanism of antagonism in Trichoderma is mycoparasitism, and lytic activity is the key feature responsible for the expression of mycoparasitism against several fungal pathogens (Chet, 1987). Trichoderma spp. are also good competitors in soil and producers of volatile and non-volatile antibiotics that suppress target pathogens (Chet, 1987). Because of their effectiveness and ease of production for commercial application, at least nine commercial biological control products based on Trichoderma species are manufactured and marketed in Belgium, Sweden, Israel, the USA, Denmark, India and New Zealand for use on several crops (Navi and Bandyopadhyay, 2002).
On the other hand, all Trichoderma species improved the growth parameters (plant height, no. of branches plant -1) and fruit yield components (number of fruits plant -1, fruits yield plant -1, fruit weight, no. of fruit Kg -1, fruit length, fruit diameter and fruits yield fed. -1). T. viride (TVM-5) and T. hamatum (THM-2) showed the highest increases in all growth parameters and yield components. Similar results on increased plant growth due to the application of Trichoderma gamsii in cereal and legume crops were reported by Ozbay et al. (2004) and Sundaramoorthy and Balabaskar (2013). The increase in plant growth might be associated with the secretion of auxins, gibberellins and cytokinins. The increase in biomass production may be due to the production of plant growth promoters, to indirect stimulation of nutrient uptake, or to the production of siderophores or antibiotics that protect plants from deleterious rhizosphere organisms (Sundaramoorthy and Balabaskar, 2013).
Full Reference Image Quality Assessment Algorithm based on Haar Wavelet and Edge Perceptual Similarity
Image quality assessment (IQA) has become a hot topic in image processing. Many studies show that edge information plays a crucial role when the human visual system (HVS) perceives the quality of an image. The proposed metric is called HEPSI: the method from the HaarPSI metric is combined with edge structural similarity, and a contrast map is added for pooling the structural similarity map. Validation is performed by comparing HEPSI with well-known state-of-the-art IQA metrics, namely PSNR, SSIM, MSSIM, FSIM and HaarPSI, over the LIVE database. Experiments show that HEPSI achieved better performance than the other 5 IQA metrics.
Introduction
Assessing image quality is significant in optimizing image processing performance, and many studies have been performed with the purpose of effectively measuring image quality. Digital images often pass through several processing stages, such as acquisition, processing, storage and transmission, before they reach the observers [1]. During these stages the images are subjected to different kinds of distortions, which may degrade their visual quality.
For example, during the transmission stage, the quality of the received image may decrease because some data are dropped due to the limited bandwidth of the channels. Consequently, it is important for image acquisition, communication, processing and management systems to measure the quality of images at each stage. Hence, image quality assessment (IQA) is very important in order to maintain and conserve the quality of images.
1.1. Image Quality Assessment (IQA)
Image quality assessment (IQA) techniques can be categorized as subjective and objective. Subjective measurement consists of mean opinion scores (MOS) given by a number of selected observers who are shown a series of test scenes. Although this measurement gives accurate results, subjective evaluation is usually time-consuming, inconvenient, and cannot be incorporated into real-time systems.
To eliminate the need for expensive subjective studies, numerous efforts have been made to develop objective measurements that correlate with perceived quality. The goal of objective IQA is to design algorithms that are able to predict the quality of an image automatically and accurately: objective measurements are automatic algorithms that can analyse images and report their quality without human involvement. The most widely used and simplest full reference (FR) IQAs are the peak signal-to-noise ratio (PSNR) and the mean square error (MSE), but due to the low correlation between their results and human opinion scores, other FR IQAs have been presented [2].
Wang et al. proposed the structural similarity index (SSIM) based on the assumption that the HVS is highly adapted to extracting structural information from an image [3]. After the huge success of SSIM, it inspired numerous new IQAs. Zhang et al. [4] proposed a low-level feature similarity index (FSIM) that assesses phase congruency and gradient magnitude, with the assumption that the HVS distinguishes images according to low-level features. Recently, Reisenhofer et al. [5] proposed HaarPSI, which utilizes the coefficients obtained from a Haar wavelet decomposition to assess local similarities between two images, as well as the relative importance of image areas.
Proposed quality assessment methods
Given that image i is the reference image and image j is the distorted image, the coefficients obtained from a Haar wavelet decomposition are used to calculate the local similarities and the relative importance of image areas between image i and image j. The local similarity measure includes the chroma channels I and Q of the color images in the YIQ color space, calculated according to the method in [5]. In this generalization, a non-linear mapping ℓα is applied to the local similarities obtained from the high-frequency Haar wavelet filter responses.
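As background, the I and Q chroma channels come from the standard NTSC RGB-to-YIQ linear transform; the snippet below is an illustrative implementation of that conversion, not the authors' code:

import numpy as np

# Standard NTSC RGB -> YIQ matrix (rows: Y, I, Q).
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],   # luma Y
    [0.596, -0.274, -0.322],   # chroma I
    [0.211, -0.523,  0.312],   # chroma Q
])

def rgb_to_yiq(img: np.ndarray) -> np.ndarray:
    # Convert an H x W x 3 RGB image (floats in [0, 1]) to YIQ channels.
    return img @ RGB_TO_YIQ.T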
In the local similarity expression, S denotes the scalar similarity measure, m is a 2 × 2 mean filter, * is the two-dimensional convolution operator and C is a constant. The edge maps of images i and j are derived from the horizontal, vertical and diagonal detail subbands of their respective Haar wavelet decompositions, with (m, n) denoting the sample position within the wavelet subbands. The edge SSIM map is then calculated between the two images.
In the edge SSIM term, the numerator is the cross-correlation between corresponding patches of the two edge maps, and the denominator involves their respective variances. Next, a contrast map is formed for weighted pooling of the edge SSIM map, since it is well known that the HVS is more sensitive to areas near edges [6]. The contrast map is calculated within a local Gaussian square window that moves pixel by pixel over the entire approximation subband and edge map. The edge similarity scores, the local similarities and the relative importance of image areas are combined to obtain the overall quality measure between images i and j. The final quality score is calculated using the following formula,
where the combination weight is a constant, set to 0.99 in this paper.
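To make the structural term above concrete, the sketch below computes an SSIM-style cross-correlation/variance map between two edge maps using Gaussian local statistics; the window width sigma and the stabilizing constant C are illustrative choices, not the paper's parameters:

import numpy as np
from scipy.ndimage import gaussian_filter

def edge_structural_similarity(edge_i, edge_j, sigma=1.5, C=1e-3):
    # Local means of both edge maps (a Gaussian window stands in for the
    # local square window described in the text).
    mu_i = gaussian_filter(edge_i, sigma)
    mu_j = gaussian_filter(edge_j, sigma)
    # Local variances and cross-correlation.
    var_i = gaussian_filter(edge_i * edge_i, sigma) - mu_i ** 2
    var_j = gaussian_filter(edge_j * edge_j, sigma) - mu_j ** 2
    cov_ij = gaussian_filter(edge_i * edge_j, sigma) - mu_i * mu_j
    # SSIM-style structural term, evaluated per pixel.
    denom = np.sqrt(np.clip(var_i, 0, None) * np.clip(var_j, 0, None)) + C
    return (cov_ij + C) / denom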
Performance evaluation
Four commonly used correlation coefficients (CC) are applied to assess the agreement between the objective and subjective scores of image quality for the proposed technique. These four performance metrics are the Spearman rank-order correlation coefficient (SROCC), the Kendall rank-order correlation coefficient (KROCC), the Pearson linear correlation coefficient (PLCC), and the root-mean-squared error (RMSE). SROCC and KROCC assess prediction monotonicity, while PLCC and RMSE evaluate the prediction accuracy of an IQA [10].
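These four criteria map directly onto standard SciPy routines; the sketch below is illustrative (note that PLCC and RMSE are shown here on raw scores, whereas the paper computes them after the non-linear mapping described next):

import numpy as np
from scipy import stats

def iqa_performance(objective_scores, mos):
    # SROCC/KROCC measure prediction monotonicity; PLCC/RMSE measure accuracy.
    x = np.asarray(objective_scores, dtype=float)
    y = np.asarray(mos, dtype=float)
    srocc, _ = stats.spearmanr(x, y)
    krocc, _ = stats.kendalltau(x, y)
    plcc, _ = stats.pearsonr(x, y)
    rmse = float(np.sqrt(np.mean((x - y) ** 2)))
    return srocc, krocc, plcc, rmse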
In this research, a non-linear logistic regression function is chosen for the non-linear mapping in the scatter plots. The function is recommended by the Video Quality Experts Group (VQEG) and has been widely used by IQA researchers [7]. It is defined as follows:

f(x) = β1 (1/2 - 1/(1 + exp(β2 (x - β3)))) + β4 x + β5    (1)

where f(x) represents the predicted score after regression, x denotes the raw objective score of a metric, and β1, β2, β3, β4 and β5 are the regression model parameters, which are estimated by fitting the function to the objective and subjective data. The purpose of this function is to allow performance to be analysed in a common space and to remove the non-linearity introduced in the process of collecting subjective scores [4], [8], [9].
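As an illustration of the fitting step, the five-parameter mapping can be estimated with scipy.optimize.curve_fit; the sample scores and the initial guess below are hypothetical:

import numpy as np
from scipy.optimize import curve_fit

def vqeg_logistic(x, b1, b2, b3, b4, b5):
    # Five-parameter logistic mapping recommended by VQEG.
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

# Hypothetical raw metric scores and matching subjective MOS values.
x = np.array([0.60, 0.72, 0.81, 0.88, 0.95])
mos = np.array([25.0, 40.0, 55.0, 70.0, 88.0])

p0 = [mos.max(), 1.0, x.mean(), 1.0, mos.mean()]   # rough starting point
params, _ = curve_fit(vqeg_logistic, x, mos, p0=p0, maxfev=10000)
mapped = vqeg_logistic(x, *params)   # predicted scores after regression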
Performance Comparison
The performance of HEPSI is compared with selected state-of-the-art IQAs, namely SSIM [3], MSSIM [11], FSIM [4], HaarPSI [5] and PSNR (MATLAB source code for all methods was obtained from the original authors). Table 1 lists the SROCC, PLCC, KROCC and RMSE results of the proposed metric and the other 5 IQAs on the LIVE database; for each criterion, the best metric (highest CC and lowest RMSE) is marked in boldface. The results show that HEPSI outperforms all the other metrics in terms of correlation coefficients and root mean square error; thus, HEPSI performs better than the other IQA metrics. The fitted curve (red line) plotted in each scatter plot is the logistic function defined in Equation 1, which is widely used in IQA research. In comparison with the other scatter plots, HEPSI's points (blue +) are closer to each other and to the fitted curve, which means that HEPSI correlates well with subjective scores. This again shows that the proposed metric HEPSI performs better than the other 5 IQA metrics: PSNR, SSIM, MS-SSIM, FSIM and HaarPSI.
Conclusion
In this research, we proposed a full reference IQA metric called HEPSI. This metric combines the assessment of local similarity and the relative importance of image areas from HaarPSI with the edge similarity between two images. Experiments show that HEPSI achieved better performance than the other 5 IQA metrics on the LIVE database. The work also indicates the high correlation between image edges and the visual perception of image quality: assessing the edges of the reference and distorted images can effectively measure structural distortion and yields an efficient IQA metric.
Effect of edible coating application by spraying method on the quality of red chili during storage
After being harvested, fruits continue their physiological activity, such as transpiration and respiration, which leads to deterioration. The main problem in maintaining the quality of fresh fruit and vegetables is their short shelf life, because metabolism still occurs; therefore, additional treatments are required to prolong shelf life. The application of an edible coating is one promising treatment. Coatings are known for their ability to prevent moisture loss and to create a barrier to gas exchange, protecting fruits from decay when applied directly to the fruit surface. This experimental study used a completely randomized design with two factors. The first factor was the type of starch used for the coating, consisting of Canna edulis starch, potato starch and sweet potato starch, and the second factor was the nozzle diameter, consisting of 0.6 mm, 1.0 mm and 1.5 mm. The parameters observed comprised the color parameters (L-, a-, and b-value), ascorbic acid and moisture content, and firmness. The results were analyzed using the F test and continued with Duncan's Multiple Range Test. The results showed that the main material used for producing the edible coating contributed to the changes in red chili quality during storage. Moreover, the diameter of the sprayer nozzle also influenced the adhesion of the edible coating to the surface of the product. Based on the present research, the selected treatment combination for applying an edible coating to red chilies was an edible coating made from sweet potato starch sprayed with a 0.6 mm nozzle.
Introduction
Due to the COVID-19 pandemic, many people became more aware of healthy foods, and the demand for fresh fruit, vegetables and functional foods has been increasing. Fresh fruit and vegetables are beneficial for increasing the body's immunity to inhibit viral infection. The main problem in maintaining the quality of fresh fruit and vegetables is their shelf life: it is short because metabolism continues after harvest, so additional treatments are required to prolong it. Previous research explained that the use of an appropriate packaging material [1], the application of Modified Atmosphere Packaging (MAP) [2], and the application of low-dose gamma irradiation [3] contribute to extending the shelf life of fruits and vegetables. Moreover, the combination of MAP and an edible coating resulted in a longer shelf life of strawberry than the control and the single treatments [3].
An edible coating is a thin layer of an edible material that can function as a barrier to moisture vapor, oxygen and the transfer of dissolved materials to food. Edible coatings are applied directly over the surface of foodstuffs through various methods, such as dipping, spraying and brushing [4]. According to [5], the use of edible coatings is beneficial for the environment because they are biodegradable, are made of material that is suitable for consumption, and have the benefit of maintaining the quality of food products. In addition to application on the surface of fresh commodities, edible coatings can also be applied to processed food to improve product quality [4], [5], [6]. Considering the use of natural resources, the utilization of tubers as raw material for producing edible coatings has recently been considered because of their high starch content. Previous research reported that the use of starch produced from tubers as a raw material for edible coatings could contribute to improving the quality of processed products [7]. The choice of plasticizer during the preparation of the edible coating plays an important role in determining the suitability of the coating [8].
Postharvest loss is one of the important factors that must be considered to maintain the quality of agricultural commodities, as inappropriate handling strategies can cause a significant decrease in fruit quality. Edible coating application is one strategy that can be used to maintain the quality of fruit or fresh-cut fruit after harvest. At present, the consumption of fresh-cut fruit has increased because of the ease of consumption without the need to peel the fruit; however, because there is no protective layer over the fruit, it is easily damaged [9]. Previous research by [4] stated that the use of edible coatings on cut fruit can be useful for reducing weight loss, maintaining vitamin C levels, delaying decay, and maintaining sensory quality, especially color and firmness. Several previous studies stated that the use of edible coatings on fresh fruit and vegetables can help maintain their quality. Edible coatings can be made from chitosan [10], pectin [11], starch and protein [12], gelatin [13], and alginate [14].
The effectiveness of using edible coatings to extend the shelf life of various types of fruit has also been reported by several previous researchers, as seen from the retention of ascorbic acid levels, color, texture, freshness and taste still acceptable to consumers, and from the absence of attack by microorganisms. The fruit types that have been studied include plums [15], tomatoes [16], strawberry [17], and Cavendish banana [18]. In addition, edible coatings are useful in maintaining the quality of fresh-cut fruit, for example watermelon [19], and in preventing browning of mangoes [20].
Many kinds of fruits and vegetables are cultivated in Indonesia, and the treatments during cultivation play an important role in the characteristics of the harvested products [21]. In Indonesia, red chili is one of the important agricultural commodities because of its high demand throughout the year. It is a perishable product that deteriorates easily within a short period. Red chilies have a high moisture content, around 90%, which causes transpiration to continue after harvesting, so red chilies are quickly damaged. Weight loss, change in color and decrease in vitamin C levels are caused by the respiration process, which continues after harvest. Therefore, fruit packaging and coating are needed to reduce and suppress the respiration and transpiration rates and thereby inhibit damage to the fruit. One potential way to reduce the level of damage to red chilies is to apply an edible coating. An edible coating forms a semi-permeable layer that is able to modify the internal atmosphere of the fruit, thereby delaying ripening and decreasing the transpiration rate. The coating inhibits the release of gas and moisture and contact with oxygen, so the transpiration and respiration processes can be slowed down [4]. This study aims to determine the effect of edible coating application and the size of the sprayer nozzle on the chemical and physical characteristics of red chilies during storage.
Material and method
Red chili, variety "OR TWIST 42", was harvested in the field from farmers in Limpakuwus, Banyumas regency, Central Java. The samples in this research were therefore fresh products without any treatment after being harvested. The samples were sorted and cleaned in the laboratory before the treatments were applied on the following day. The starches (canna starch, potato starch and sweet potato starch) used as raw material for producing the edible coating were bought from a starch producer; this company sells these starches in Indonesia and for export as well.
The factors studied were the type of edible coating (P), with 3 different sources of raw materials: edible coating made from canna starch (P1), potato starch (P2), and sweet potato starch (P3); and the sprayer nozzle diameter (N), with 3 different sizes: 0.6 mm (N1), 1.0 mm (N2) and 1.5 mm (N3). Each treatment was performed with 3 replications.
The parameters observed in this study comprised the color parameters (L-, a-, and b-value), ascorbic acid and moisture content, and firmness. The collected data were analyzed using analysis of variance (F-test). If the result showed a significant difference, Duncan's Multiple Range Test (DMRT) was performed at a significance level of 5%.
Color parameter
In addition to the spiciness of chili, appearance is one of the important factors influencing customers when consuming chili; therefore, color plays an important role in customer preference. Brightness is an essential attribute in the perception of the quality of red chilies. The average brightness values for the combinations of edible coating and sprayer nozzle treatments during storage are presented in Figure 1. Figure 1 shows the change in surface brightness of red chilies treated with various types of edible coating applied with various sizes of sprayer nozzles. All treatment combinations were able to maintain the brightness of the red chilies, and no sample showed a significant increase in brightness values. Previous research by [22] also found that the brightness value of red chilies at the beginning of harvest was 34.4.
On day 2, the brightness of red chilies in each treatment combination was not significantly different, and no significant differences were found on day 4 or day 6 either. This may reflect the ability of the coatings to keep the brightness of the red chilies uniform; the coating made with canna starch, for example, is clear in color with only slight cloudiness.
The average a- and b-values for the combinations of edible coating and sprayer nozzle treatments during storage are presented in Figure 2 and Figure 3, respectively. Figure 2 and Figure 3 show no significant differences in the a- and b-values of the red chili. Thus, during storage for up to 6 days the color of the red chili did not change markedly; based on visual observation, there was only a tendency for the surface to become darker. The initial color of the red chili was red, and the color remained the same until the last day of storage. During storage, red chilies undergo physical changes, including changes in color and texture. The color change is caused by the oxidation of chlorogenic acid by the enzyme polyphenol oxidase to melanoidin, so that a blackish brown color is formed. With longer storage, chlorogenic acid oxidation accelerates and the blackish brown color of the chilies becomes increasingly visible, so the color change also occurs faster. The ripening process of fruit is often associated with a series of visible changes, including color, aroma, consistency and flavor (taste and smell). Likewise, the higher the level of fruit maturity, the greater the change in fruit color.
Moisture content
Once harvested from the plant, the chilies still carry out metabolic activities. Moisture is lost during storage through the respiration and transpiration of the chilies, which can cause a decrease in moisture content. Freshly harvested chilies that are then stored still undergo developmental processes, marked by a change in color and by withering and wrinkling as a result of respiration and transpiration. The average moisture content values for the edible coating treatments during storage are presented in Figure 4. Based on the analysis of variance, the moisture content showed no significant differences except on day 4, when the main coating factor had a significant effect. As shown in Figure 4, the average moisture content of red chilies at the start of observation was 74%. Red chilies coated with canna and sweet potato starch had similar average moisture contents, while the potato starch coating gave the highest moisture content, 68%, on the second day. On day 4, the canna starch coating showed the lowest moisture content, with an average value of 64%; the potato starch coating had an average moisture content of 69%; and the sweet potato starch coating had the highest average moisture content on the following day. On day 6, the potato starch coating treatment had the highest average moisture content, 70%.
Potato starch has a fairly high amylose content, about 23% amylose and 77% amylopectin. The amylose structure allows the formation of hydrogen bonds between its glucose constituents, and during heating it can form a three-dimensional network that traps moisture to produce a strong gel. This strong gel reduces the rate of water vapor transmission in food products, so the decrease in moisture content is inhibited.
Red chilies have a high moisture content, so moisture loss results in weight loss or shrinkage of the fruit; moisture loss is closely related to weight loss. The coating material plays a role in slowing down the respiration process so that moisture loss from the fruit, and hence weight loss, can be minimized [3].
Ascorbic acid
In addition to capsaicin, red chili also contains ascorbic acid at relatively high concentrations. At the beginning of the observation, the average level of ascorbic acid in red chilies was 23 mg/100 g. On the second day, the combination of the sweet potato starch coating with the 0.6 mm sprayer nozzle had the highest average ascorbic acid content, namely 19 mg/100 g. On day 4, the combination of the sweet potato coating with the 1 mm sprayer nozzle had the highest average ascorbic acid content, 15 mg/100 g, while the combination of the potato coating with the 0.6 mm sprayer nozzle had a lower average. On day 6, there was no significant effect of the tested factors on ascorbic acid levels. A possible explanation is that by day 6 all the starch materials applied to the red chilies had the same ability to maintain ascorbic acid levels. The decrease in ascorbic acid content can be affected by respiration and transpiration, which cause a decrease in fruit moisture content. Starch-based coatings act as selectively permeable membranes for the exchange of O2 and CO2 gases, thereby reducing respiration rates in fruits and vegetables.
Firmness
Texture changes are among the physiological changes that occur as a direct result of moisture loss in horticultural products. The change in texture that can be used as an indication of damage to red chilies is a decrease in firmness, so that they become soft during storage. The decrease in the firmness of red chilies during storage is in accordance with the research of [23]. Based on the analysis of variance, the main coating material showed a significant difference on the second day, while the sprayer nozzle did not. The highest firmness value was obtained with the canna starch coating and the 0.6 mm sprayer nozzle.
On day 4, the main coating material showed a significant difference, while the sprayer nozzle size did not. The potato starch coating with the 1 mm sprayer nozzle had the highest average firmness, 0.201 kg/cm2. On day 6, no significant influence of the tested factors on fruit firmness was found; the sweet potato starch coating treatment gave a firmness of 0.203 kg/cm2. Chili peppers lose firmness rapidly during room-temperature storage, which contributes to a short postharvest shelf life and susceptibility to fungal contamination. The texture of the fruit is influenced by cell turgidity and by the structure and composition of the cell wall polysaccharides. Red pepper softening has been associated with degradation of the middle lamellae in cortical parenchyma cells, resulting in a dramatic increase in pectin solubilization, a slight change in pectin molecular weight and a slight reduction in hemicellulose content [23].
The firmness of the fruit is related to the skin tissue system, represented by the epidermis as the outer protector of the fruit. Gas exchange, moisture loss, mechanical damage, resistance to pressure and changes in firmness all start at the surface of the fruit. Chemical changes also occur in the cell walls among complex compounds of the structural carbohydrate group, such as cellulose, hemicellulose, pectin and lignin.
The change in texture is also due to the respiration process, which causes the breakdown of carbohydrates into simpler compounds; this breakdown leads to the breakdown of tissue in the fruit, so that the fruit becomes soft. The respiration process leads to continued ripening of the fruit, during which hemicellulose and pectin in the cell walls are degraded, resulting in changes in fruit firmness [23].
One of the changes in texture is caused by pectin, which is originally present as protopectin in unripe fruits; with the help of the pectin methylesterase and polygalacturonase enzymes, pectin becomes water-soluble and is broken down into other compounds. This breakdown causes the texture of the firm tissue to become soft.
Conclusions
The results of this research show the potential of edible coatings produced from local tubers, particularly starches made from sweet potato, potato and canna. The diameter of the sprayer nozzle also contributes to determining the characteristics of the edible coating on the surface of the product. The selected treatment combination for red chilies was the application of an edible coating made from sweet potato starch applied with a 0.6 mm sprayer nozzle.
Further research is needed on the application of coatings with a smaller sprayer nozzle and on fogging systems as application methods, to examine the effect of other edible coating application methods.
Protein-losing enteropathy associated with Henoch-Schönlein purpura
The gastrointestinal manifestations of Henoch-Schönlein purpura include abdominal pain, gastrointestinal bleeding, intussusception, and perforation. Protein-losing enteropathy is rarely associated with Henoch-Schönlein purpura. Two pediatric patients with Henoch-Schönlein purpura who developed protein-losing enteropathy are reported. Tc-99m human serum albumin scintigraphy is useful in the detection of protein-losing enteropathy.
Case Report
Case #1
A 4-year-old boy was hospitalized with colicky abdominal pain. There was no recent drug exposure or immunization, and his family history was unremarkable.
Four days before admission, he had low grade fever and vomiting, and was treated by a family physician. Because of continuing abdominal pain, he was admitted to our hospital with suspected acute gastroenteritis. On admission, he had vomiting and abdominal pain. He was 113 cm tall (±1.63 SD) and weighed 18.4 kg (±0.3 SD). His temperature was 37.4°C, his pulse was 98/min, and his general condition was poor, with abdominal tenderness but no guarding, and no purpura.
On the day after admission, he developed bloody stool and purpura on the lower extremities and scrotum. With a diagnosis of HSP with gastrointestinal involvement, prednisolone (1.5 mg/kg/day), dalteparin (75 U/kg/day), and urinastatin (2,500 U/kg/dose) were started. There was marked edema of the eyelids and of the upper and lower extremities. The patient's serum albumin decreased to 1.8 g/dL, necessitating albumin infusion. His plasma factor XIII activity also decreased to 17%, necessitating administration of human antihemophilic factor XIII fraction. Plasma factor XIII and serum albumin levels normalized by the 10th day. He was discharged on the 35th day.
On plain abdominal CT (Figure 1), diffuse wall thickening in the small bowel and colon and a small amount of ascites were noted.
Tc-99m human serum albumin (HSA) scintigraphy showed activity in the stomach and duodenum 60 min after injection, suggesting PLE. During the course, no urine abnormalities were detected. His transient hypoproteinemia was likely due to PLE, because it normalized as his abdominal symptoms improved.
Case #2
A 5-year-old boy was admitted with abdominal pain, vomiting, bloody stool, and a rash on his legs. He had no recent drug exposure or immunization, and his family history was unremarkable. Seven days before admission, purpura developed on his nose and around his mouth, followed by abdominal pain and purpura on the upper and lower extremities, involving his buttocks one day later. HSP was diagnosed.
Because of continuing abdominal pain, he was admitted to our hospital. On admission, he had vomiting, abdominal pain, and bloody stool. He was 121.8 cm tall (±1.9 SD) and weighed 22.8 kg (±0.8 SD). His temperature was 36.9°C, pulse was 90/min, and blood pressure was 90/58 mmHg. His general condition was poor, with abdominal tenderness but no guarding, and purpura on the earlobes, lower extremities, and scrotum.
On abdominal CT, diffuse wall thickening was noted in the stomach. Tc-99m HSA scintigraphy showed activity in the stomach 60 min after injection, suggesting PLE. During the course, no urine abnormalities were detected.
Figure 1. On abdominal computerized tomography, diffuse wall thickening is noted in the small bowel and colon. A small amount of ascites is seen.
Discussion
PLE rarely occurs as a gastrointestinal manifestation of HSP. PLE is a rare condition characterized by gastrointestinal loss of serum protein resulting in hypoproteinemia, which can cause edema, ascites, pleural and pericardial effusions, and malnutrition. Increased capillary permeability or submucosal hemorrhage and ulceration caused by vasculitis might underlie the protein loss. 2 PLE is rare in HSP, with only eight reported pediatric cases, including our two cases [2][3][4][5][6][7] (Table 1). These patients (7 boys, 1 girl; age range, 4 to 14 years) all had abdominal pain; four had unusual complications (cases 2, 3, 4, and 6). Case #4, 5 had abdominal pain, but the stool was guaiac-negative. Moderate hypoproteinemia (4.9 g/dL) and hypoalbuminemia (2.5 g/dL) without proteinuria were detected during the acute phase; the serum protein levels increased gradually on steroid therapy, but severe neurological involvement resulted from cerebral vasculitis. Case #6, 7 a 4-year-old boy, had severe esophagitis and multiple ulcers in the descending duodenum presenting as PLE. In contrast, our cases had PLE without other complications.
Previously, the detection and quantification of gastrointestinal protein loss relied primarily on measurement of fecal excretion of albumin labeled with I-125 or Cr-51 after intravenous tracer injection. However, stool collection is difficult, and results are falsely positive if the stool specimen is contaminated with urine. Subsequently, measurement of fecal excretion of α1-antitrypsin was used to detect PLE. 8 This serum enzyme, with the same molecular weight as albumin, resists proteolysis and is not degraded when it leaks into the intestinal lumen; however, this method also requires stool collection. Recently, Tc-99m HSA scintigraphy has been used to diagnose PLE by documenting protein loss into the gut, and it is also useful for monitoring response to treatment. 9,10 Our two patients were diagnosed as having PLE because of hypoalbuminemia without nephrosis or liver dysfunction, together with abnormal leakage of the radiotracer into the gut on Tc-99m HSA scintigraphy. Tc-99m HSA scintigraphy offers a rapid, safe, and convenient way to screen for and localize PLE. In HSP patients, Tc-99m HSA scintigraphy is useful for detecting PLE.
The Effect of Acupuncture Therapy at ST 40 (Fenglong), SP 6 (Sanyinjiao) and Avocado Juice on Reducing Total Cholesterol Levels in High Cholesterol Cases in Duwetan Ngunut Jumantono
Introduction: Hypercholesterolemia is a condition in which blood cholesterol levels exceed normal limits. Non-pharmacological treatments for patients with hypercholesterolemia include acupuncture therapy and nutritious food intake. This study aims to determine the effect of acupuncture at points ST 40 (Fenglong) and SP 6 (Sanyinjiao) combined with avocado juice on reducing total cholesterol levels in cases of high cholesterol in Duwetan Ngunut Jumantono. Methods: This was a quasi-experimental study with a two-group pretest-posttest design, conducted from January to August 2023 in Duwetan Ngunut village, Jumantono, Karang Anyar. Purposive random sampling yielded 38 research subjects. Data collection used primary data, namely measured total cholesterol levels. Analysis consisted of univariate analysis and bivariate analysis with the Wilcoxon and Mann-Whitney tests. Results: Acupuncture therapy at points ST 40 (Fenglong) and SP 6 (Sanyinjiao) (p=0.0018) and its combination with avocado juice (p=0.0001) reduced total cholesterol levels. There was a difference in the reduction in total cholesterol levels after the intervention (p=0.007) between the acupuncture-only group (mean=219.79; SD=39.35) and the acupuncture plus avocado juice group (mean=181.63; SD=36.63). Conclusion: Total cholesterol levels decreased after acupuncture therapy at points ST 40 (Fenglong) and SP 6 (Sanyinjiao) combined with avocado juice in the Duwetan Ngunut Jumantono village community.
INTRODUCTION
Hypercholesterolemia is a condition in which blood cholesterol levels exceed normal limits. It arises from disorders of fat metabolism that raise blood lipid levels, caused by lipoprotein lipase enzyme deficiencies, LDL receptor deficiencies, genetic disorders, and a decrease in the liver's ability to clear cholesterol from the blood. Over a long period, hypercholesterolemia can lead to the formation of atherosclerosis, resulting in cerebrovascular, cardiovascular and coronary heart disease, pancreatitis (inflammation of the pancreas), diabetes mellitus, thyroid disorders, liver disease and kidney disease. 1 The incidence of heart and blood vessel disease is influenced by many factors, one of which is hypercholesterolemia, a condition in which blood cholesterol levels rise above normal limits. 2 According to WHO (2017), deaths due to cardiovascular disorders were expected to increase to 25 million people by 2020, or around 37% of total estimated deaths, with 45% of these deaths caused by coronary heart disease. 3 One non-pharmacological treatment for patients with hypercholesterolemia or dyslipidemia is acupuncture therapy together with nutritious food intake. Acupuncture therapy can stimulate neurotransmitters, stabilizing serotonin and noradrenaline levels in the central and peripheral nerves. This stimulation facilitates and improves blood circulation; if blood circulation is smooth, atherosclerosis does not develop and cholesterol levels decrease. 4 Acupuncture at the ST 40 Fenglong point can affect the plasma lipid profile as well as the lipid profile in the liver, and can eliminate the foam cells that cause atherosclerosis. 5 The SP 6 Sanyinjiao point can reduce blood cholesterol by restraining its synthesis and absorption, accelerating its decomposition and excretion, and changing its distribution in plasma and tissues. 6 One nutritious food is avocado. Among fruits that can lower cholesterol levels, the researchers chose avocado because it has a total fat content of 15.41 grams/100 grams of fruit. 7
This study aims to determine the effect of acupuncture at points ST 40 (Fenglong) and SP 6 (Sanyinjiao) combined with avocado juice on reducing total cholesterol levels in cases of high cholesterol in Duwetan Ngunut Jumantono. Table 2 presents the characteristics of the research subjects: the majority were female (28 people, 73.68%) and most worked as housewives (17 people, 44.74%). Before the intervention, total cholesterol levels were mostly in the very high category (24 people, 63.16%), whereas after the intervention the majority were in the normal category (19 people, 50%). Table 3 shows that before the intervention, most subjects' cholesterol levels in the acupuncture group were in the very high category (13 people, 54.17%), and after the intervention, most subjects in the acupuncture plus avocado juice group showed a decrease in total cholesterol to the normal category (11 people, 57.89%). Table 4 shows the normality test results: the significance values for total cholesterol levels before and after the procedure were >0.05, meaning the data were not normally distributed, so the hypothesis tests used were the Wilcoxon and Mann-Whitney tests. Table 5 shows the Wilcoxon test results: total cholesterol levels differed before and after the intervention in both the acupuncture group (p=0.0018; p<0.05) and the acupuncture plus avocado juice group (p=0.0001; p<0.05).
METHODS
The Mann-Whitney test between the acupuncture group and the acupuncture plus avocado juice group showed no difference in average total cholesterol levels before the intervention (p=0.748; p>0.05); respondents in both groups had similar average total cholesterol levels at baseline. There was a difference in total cholesterol levels after the intervention between the two groups (p=0.007; p<0.05): the average total cholesterol level in the acupuncture plus avocado juice group decreased more (mean=181.63; SD=36.63) than in the acupuncture-only group (mean=219.79; SD=39.35).
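For readers who want to reproduce this style of analysis, a minimal sketch in Python with SciPy is shown below; the cholesterol values (mg/dL) and derived group arrays are invented for illustration and are not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical paired pre/post total cholesterol for one group (mg/dL)
pre = np.array([262, 281, 240, 255, 270, 248, 266, 259, 244, 273])
post = np.array([221, 230, 205, 219, 228, 210, 224, 215, 207, 231])

# Within-group change: Wilcoxon signed-rank test for non-normal paired data
w_stat, w_p = stats.wilcoxon(pre, post)
print(f"Wilcoxon: W={w_stat:.1f}, p={w_p:.4f}")

# Between-group comparison of post-intervention levels: Mann-Whitney U test
post_acupuncture = post + 30   # hypothetical acupuncture-only group
post_combination = post        # hypothetical acupuncture + avocado juice group
u_stat, u_p = stats.mannwhitneyu(post_acupuncture, post_combination)
print(f"Mann-Whitney: U={u_stat:.1f}, p={u_p:.4f}")
```

The Wilcoxon test handles the paired pre/post design, while the Mann-Whitney test compares the two independent groups, matching the analysis plan described in the abstract.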
DISCUSSION
Acupuncture is a traditional Chinese medicine therapy with good clinical effects for many diseases, and it can be used as a safe and effective alternative therapy for hyperlipidemia. 8 Liu et al. (2015) analyzed the acupuncture points used in 65 research articles, drawing on primary databases of Chinese- and English-language clinical research literature on acupuncture points for hyperlipidemia treated with acupuncture and moxibustion. The top five acupuncture points were Fenglong (ST 40), Zusanli (ST 36), Sanyinjiao (SP 6), Neiguan (PC 6) and Tianshu (ST 25). 9 Pang Jun (2000) explains that the Fenglong point (ST 40) reduces phlegm and repairs the meridians; the spleen meridian is also selected, namely the Sanyinjiao point (SP 6), the meeting point of the three foot yin meridians. It can help strengthen the spleen and kidneys, regulate qi and blood, clear heat, relieve blockages, remove toxins, and disperse stagnation. 10 The results of this study showed an effect on total cholesterol levels before and after the intervention in the acupuncture group (p=0.0018; p<0.05), in line with previous research by Buranatawonsom (2022), who found that acupuncture at the Fenglong point (ST 40) significantly reduced cholesterol levels (p=0.000). 9 This is supported by the explanations of Xue-Song et al. (2020) and Ling et al. (2014) that the stimulatory effect of electroacupuncture at the ST 40 point induces the expression of the nNOS and Mt1 enzymes. The nNOS enzyme mediates nitric oxide (NO) signaling and plays an important role in vascular tone, blood pressure, insulin secretion, respiratory tract tone, angiogenesis and intestinal peristalsis. Mt1 plays an important role in protecting cells against oxidative stress in the body; electroacupuncture is therefore effective in lowering cholesterol and triglycerides. In the functional theory of the nervous system, stimulation through certain acupuncture points engages the brain stem, cerebral cortex, hypothalamus and finally the hypothalamic-pituitary axis, which exerts chemical control through neurotransmitters and hormones, increasing cellular fat metabolism and blood circulation. Traditional medical theory suggests that acupuncture stimulates the peripheral nerves at acupuncture points, which then affects the central nervous system. It has been shown that electroacupuncture can inhibit gastric hyperactivity caused by increased electrical stimulation of the lateral hypothalamus (LHA) region, leading to activation of the satiety center; this indicates that acupuncture has an anticholinergic function, through which β receptors are activated to inhibit appetite and eliminate hunger. 12 The results in the acupuncture plus avocado juice group showed an effect on reducing total cholesterol levels (p=0.0001; p<0.05), in line with previous research by Purhadi (2022) showing that avocado juice effectively reduced cholesterol levels (p=0.000<0.05). Cholesterol levels decreased in the intervention group after avocado juice was given twice a day (morning and evening) at a dose of 50 grams/day of avocado flesh plus 140 ml of water, divided into two servings of 25 grams of avocado flesh plus 70 ml of water each, for 7 days. In general, avocado juice is effective in reducing cholesterol levels in someone with high cholesterol (hypercholesterolemia). 13
This is further supported by Nurman's (2019) research on the effectiveness of avocado juice in reducing cholesterol levels in people with hypercholesterolemia in the Bangkinang City Health Center working area in 2019: the average cholesterol level before avocado juice was 249.2 mg/dl, whereas after 5 days of avocado juice it was 190.9 mg/dl, a difference of 58.3 mg/dl.
A recent study published in the Journal of Clinical Lipidology also reported that regularly consuming one avocado or half per day produced a significant reduction in cholesterol levels. Avocados were chosen because they are a distinctive fruit, containing 20-30 times more fat than other fruits, with a total fat content of 15.41 grams/100 grams of fruit. 14 The decrease in LDL cholesterol levels in the group given avocado juice was possibly caused by compounds contained in avocados, such as omega-9 oleic acid. This effect is thought to be due to active ingredients in avocados such as omega-9 oleic acid, pantethine, niacin (vitamin B3), beta-sitosterol, vitamin C, vitamin E, vitamin A, pantothenic acid, MUFA, folate, selenium, amino acids and fiber. 15 The results of this study showed a difference in total cholesterol levels after the intervention between the two groups (p=0.007; p<0.05): after the intervention, the average total cholesterol level in the acupuncture plus avocado juice group decreased more (mean=181.63; SD=36.63) than in the acupuncture-only group (mean=219.79; SD=39.35). This is supported by research by Kusuma (2020), which found a significant difference in blood cholesterol levels before and after acupuncture therapy combined with a low-purine diet (p=0.000<0.05).
Several factors that raise cholesterol cannot be controlled, namely genetics, gender, age and ethnicity. Controllable factors include diet, body weight, exercise, smoking and certain diseases. 16 In this study, a healthy eating pattern is emphasized through consuming avocado juice. One serving of fresh avocado (50 g, or 1/3 of a medium fruit) contains 80 kcal, 3.4 g fiber (11% Daily Value/DV), 44.5 µg folate (10% DV), 0.73 mg pantothenic acid (15% DV), 85 µg copper (10% DV), 10.5 µg vitamin K (10% DV), 254 mg potassium (7.5% DV), and 4 mg sodium (0.2% DV). 17 Avocado carbohydrate consists of 80% dietary fiber, comprising 70% insoluble and 30% soluble fiber. 18 A diet that includes avocado juice is therefore beneficial for the body.
CONCLUSION
This research shows an effect of acupuncture therapy at points ST 40 (Fenglong) and SP 6 (Sanyinjiao) (p=0.0018), and of its combination with avocado juice (p=0.0001), on reducing total cholesterol levels. There was a difference in the reduction in total cholesterol levels after the intervention (p=0.007) between the acupuncture-only group (mean=219.79; SD=39.35) and the acupuncture plus avocado juice group (mean=181.63; SD=36.63).
This research can serve as a reference for the health science literature on preventing and treating diseases caused by high cholesterol levels. Health service agencies can provide health promotion regarding the management of cholesterol reduction in preventive and curative community services. People can be motivated to adopt a healthy diet combined with acupuncture to prevent high cholesterol levels.
Table 2
Characteristics of Research Subjects
Table 3
Tabulation of Total Cholesterol Levels in Research Subjects
Table 4
Normality Test Data
Table 5
Results of Analysis of Average Total Cholesterol Levels in the Acupuncture and Avocado Juice Combination Acupuncture Groups
High-quality III-nitride films on conductive, transparent (2̅01)-oriented β-Ga2O3 using a GaN buffer layer
We demonstrate the high structural and optical quality of InxGa1−xN epilayers (0 ≤ x ≤ 0.23) grown on conductive and transparent (2̅01)-oriented β-Ga2O3 substrates using a low-temperature GaN buffer layer rather than an AlN buffer layer, which enhances the quality and stability of the crystals compared to those grown on (100)-oriented β-Ga2O3. Raman maps show that the 2″ wafer is relaxed and uniform. Transmission electron microscopy (TEM) reveals that the dislocation density is reduced considerably (~4.8 × 107 cm−2) at the grain centers. High-resolution TEM analysis demonstrates that most dislocations emerge at an angle with respect to the c-axis, whereas dislocations of opposite phase form loops and annihilate each other. This dislocation behavior is due to the irregular (2̅01) β-Ga2O3 surface at the interface and the distorted buffer layer, followed by a relaxed GaN epilayer. Photoluminescence results confirm high optical quality, and time-resolved spectroscopy shows that the recombination is governed by bound excitons. We find that a low root-mean-square roughness (≤1.5 nm) of the InxGa1−xN epilayers can be achieved together with high optical quality. We conclude that the (2̅01)-oriented β-Ga2O3 substrate has strong potential for use in large-scale, high-quality vertical light-emitting device design.
current crowding due to lateral injection, which leads to poor current management and heat distribution, undermining device efficiency. For devices grown on SiC, micropipes are introduced during crystal growth, preventing the use of full wafers. In addition, SiC is very expensive, and the incorporation of N dopants diminishes its transparency 11 . Even though this substrate is characterized by a smaller lattice mismatch (3.1%) 12 compared to Al 2 O 3 and provides a vertical current injection geometry, the lack of transparency of conductive SiC decreases the LED efficiency through substrate light absorption. Therefore, as additional processing steps are required, device fabrication is made more complex. Efforts are being made to enhance the light efficiency of LEDs on SiC by substrate-transfer techniques (substrate liftoff) or by introducing distributed Bragg reflectors (DBRs) between the nitride and the substrate to reflect the light back from the substrate 13 . Nonetheless, producing good-quality nitride DBRs remains a challenge. Therefore, there is still a significant need for alternative substrates that can be employed in the fabrication of bright vertical light-emitting devices with a good lattice match, high thermal and electrical conductivity, and high transparency in the UV spectral region.
The β -Ga 2 O 3 substrate combines the beneficial properties (low lattice mismatch, transparency and conductivity) of both Al 2 O 3 and SiC. β -Ga 2 O 3 is a promising candidate as a wide bandgap (4.8 eV) 14 substrate for fabricating tunable bright III-nitride vertical light-emitting devices because it satisfies these conditions and surpasses several other substrates. Conducting β -Ga 2 O 3 allows vertical current flow, reduces the forward operating voltage and series resistance, and improves current distribution and thermal management. Conductive substrates allow fabricating the contacts at both the top and the bottom surface, which simplifies device fabrication, integration process, and packaging. This approach also reduces the fabrication cost and increases the number of devices in a single chip, resulting in greater light extraction efficiency (LEE) compared to Al 2 O 3 substrates. Furthermore, the transparent nature of β -Ga 2 O 3 as a substrate provides a wider light-emitting area than do conductive SiC and Si substrates. Therefore, emission from vertical devices grown on β -Ga 2 O 3 is omnidirectional, which further increases the LEE, resulting in a bright-light-emitting device and supporting high power operations. Conversely, emission from that grown on SiC is permitted through the top side only. In addition, Ga 2 O 3 shows high stability at high growth temperatures of around 1100 °C 15 . Relative to GaN substrates, Ga 2 O 3 has a wider bandgap and is cheaper to grow and process.
Previous attempts to grow high-quality III-nitride quantum-well structures on (100)-oriented β -Ga 2 O 3 have been unsuccessful, as the quality of these crystals is inadequate for high-performance devices [16][17][18] . The strong cleavage nature of the (100) plane caused the GaN epilayer to detach and peel off from the (100) β -Ga 2 O 3 plane 19 , complicating the required step of separating wafers by dicing. In our previous study, we found that using a monoclinic (201)-oriented β -Ga 2 O 3 substrate led to GaN material of high optical and structural quality with an AlN buffer layer 20 . Using this (201)-oriented β -Ga 2 O 3 substrate, we found that the (0002) GaN rocking curve (RC) has a significantly improved (narrower) full width at half maximum (FWHM) value (430 arcsec) 20 compared to that grown on (100)-oriented β -Ga 2 O 3 (1200 arcsec) 21 . We reported a relatively low lattice mismatch of ~4.7% and an in-plane epitaxial orientation relationship of (010) β -Ga 2 O 3 || (1120) GaN and (201) β -Ga 2 O 3 || (0001) GaN 20 . The growth of InGaN LEDs on (201) β -Ga 2 O 3 with an AlN buffer layer is presently under investigation 22 .
In this paper, we demonstrate high optical and structural quality of Si-doped GaN and In x Ga 1−x N epilayers grown on (201) β -Ga 2 O 3 substrate using a low-temperature GaN buffer layer by metal organic chemical vapor deposition (MOCVD). Our results are a testament to the potential for producing InGaN vertical LEDs that are more efficient, of better quality, and more simply and cheaply produced than LEDs grown on Al 2 O 3 or SiC.
Growth and Characterization
The low-temperature undoped GaN buffer layer was grown to a thickness of ~9 nm at 500 °C under an N 2 and NH 3 atmosphere on (201)-oriented monoclinic conductive β -Ga 2 O 3 substrates using a low-pressure, vertical MOCVD reactor. The (201) β -Ga 2 O 3 substrate was doped with Sn to increase its conductivity; Hall measurements revealed an electron concentration on the order of 10 18 cm −3 . A low-temperature-grown GaN buffer layer is known to decrease hillocks on the GaN epilayer surface 23-25 on the Ga 2 O 3 substrate and to decrease the TDD compared to an AlN buffer layer 20 . After changing the carrier gas from N 2 to H 2 , the temperature of the substrate was increased to 1020 °C to grow the Si-doped GaN (n-GaN) epilayer (with a carrier density of 4 × 10 18 cm −3 and a nominal thickness of ~1.75 μm). The temperature was further increased to 1100 °C during the deposition of the remainder of the Si-doped GaN layer (of ~1.75 μm thickness). This two-step growth process decreases the formation of epicracks by reducing dislocations between the GaN layers grown in the first and second steps 26,27 . Current-voltage measurements confirmed that the interface between the Ga 2 O 3 substrate and the GaN epilayer is conductive. Transmission electron microscopy (TEM) images confirmed that the total thickness of the GaN epilayer was 3.7 μm. In x Ga 1−x N layers (0.05 ≤ x ≤ 0.23) with a nominal thickness of 40 nm were grown on n-GaN/Ga 2 O 3 with a GaN buffer layer by MOCVD. During In x Ga 1−x N layer growth, trimethylgallium (TMGa), trimethylindium (TMIn), and NH 3 were used as source gases and N 2 as the carrier gas at a pressure of 400 mbar. The growth temperature was varied for different x values: 0.01 (875 °C), 0.05 (820 °C), 0.1 (795 °C), 0.15 (775 °C), and 0.23 (720 °C). For comparison, we grew In x Ga 1−x N films on Al 2 O 3 using a similar optimized growth process.
Cross-sectional and plan-view TEM specimens were prepared using a lamellar lift-out procedure on an FEI Helios focused ion beam scanning electron microscope, and cross-sectional imaging was performed on a JEOL 4000 EX TEM operating at 400 kV. The plan-view TEM images were obtained using FEI Tecnai TWIN TEM operated at 120 kV. Atomic resolution High Angle Annular Dark Field-Scanning Transmission electron microscopy (HAADF-STEM) study was carried out with a Titan Cs-Probe Corrected (FEI Co.) microscope operated at 300 kV. In order to get the strain maps from the HAADF images, we used The Geometric Phase Analysis (GPA) plug-in package (HREM Research Inc.), which was implemented in the Digital Micrograph software (Gatan). The surface morphology and roughness of the InGaN epilayers were examined by atomic force microscopy (AFM) using the Agilent 5400 scanning probe microscope. The X-Ray Diffraction (XRD) was performed on a Bruker D8 diffractometer system using a Cu K α1 radiation. The emission characteristics of this GaN epilayer were compared to a similar GaN epilayer grown by the same growth process but with an AlN buffer layer. Photoluminescence (PL) was measured to investigate the optical properties of the n-GaN film using a 325-nm He-Cd laser at different temperatures. The spectra were collected by an Andor monochromator attached to a charge-coupled device camera. The samples were mounted in a closed-cycle helium cryostat for low-temperature PL (6 K). Time-resolved PL (TRPL) experiments were carried out with a Hamamatsu Synchroscan streak camera. The samples were excited by the third harmonic UV (λ = 266 nm) pulses of a mode-locked Ti:sapphire femtosecond pulsed laser (frequency was doubled using a barium borate crystal) with a pulse width of ~150 fs and a power density of 70 W/cm 2 (with 76 MHz repetition rate). A Coherent Verdi-V18 diode-pumped solid-state continuous wave laser was used to pump the Ti:sapphire laser. Emission of the sample was detected by a monochromator attached to a UV-sensitive Hamamatsu C6860 streak camera with a temporal resolution of 2 ps. The samples were mounted in a variable temperature open-helium cryostat for measurements between 2 and 300 K.
Results and Discussion
Structural characterization of GaN 2″ wafer. It is important to obtain high-quality, uniform wafers for large-size fabrication technology for use in vertical emitting devices. Raman spectroscopy and XRD measurements were used to examine the material quality and uniformity across the whole 2″ wafer. The strain in the GaN/(201) β -Ga 2 O 3 epilayer was estimated by Raman measurements. Figure 1(a) (top panel) shows the Raman map of the E 2 (high) peak position across the 2″ GaN/β -Ga 2 O 3 wafer. The E 2 (high) mode has been confirmed to be sensitive to biaxial strain in GaN epilayers 28 . The Raman maps reveal a homogeneous stress distribution over the entire wafer, with a negligible left shift of ~0.7 cm −1 observed between the center (567.3 cm −1 ) and the edge (568.0 cm −1 ) of the wafer, indicating that the stress is fully released toward the edge. The E 2 (high) peak exhibits an average value of ~567.93 cm −1 with a very small left shift (−0.07 cm −1 ) compared to that of bulk relaxed GaN (568 cm −1 ) 28 . This strain value indicates that the uniform GaN/Ga 2 O 3 wafer is nearly strain-free, whereby the presence of a very slight tensile strain suggests a low TDD (it is noteworthy that the low lattice mismatch between (201) β -Ga 2 O 3 and the GaN film (~4.7%) can effectively reduce the TDD). On the other hand, a high-quality GaN/Al 2 O 3 2″ wafer (grown with the same structure) shows a considerable left shift of ~1.4 cm −1 (Fig. 1(a), bottom panel). Furthermore, GaN grown on an AlN buffer layer showed compressive strain in the film, with the E 2 (high) peak shifted to the right by 1.04 cm −1 compared to bulk GaN 20 . The wafer uniformity of high-quality GaN grown on (201)-oriented β -Ga 2 O 3 is enhanced significantly by using a GaN buffer layer through the formation of nucleation centers, which is particularly beneficial for a good-quality two-dimensional lateral growth GaN layer 29 .
To study the reduction in TDs, AFM and XRD measurements were conducted across the wafer. XRD RC analysis of the GaN (0002) reflection peak was performed on the GaN/(201) β -Ga 2 O 3 wafer. The FWHM value showed a sharper peak of ~330 arcsec (Fig. 1(b)), revealing a better-quality GaN epilayer grown on the GaN buffer layer compared to that grown on the AlN buffer layer (~430 arcsec), as shown in Fig. 1(b). To the best of our knowledge, this is the best RC FWHM value obtained for GaN grown on a Ga 2 O 3 substrate. This FWHM of the RC confirms that the (201) orientation of β -Ga 2 O 3 substrates improves the crystal quality significantly compared to the (100) orientation (1200 arcsec) 21 . The surface morphology of the GaN epilayers was analyzed by examining AFM images across the whole wafer. Figure 1(c) illustrates that GaN grown on (201) β -Ga 2 O 3 with a GaN buffer layer shows a decrease in TDD (> 50%) compared to that with an AlN buffer layer (Fig. 1(d)) on the same substrate. The average TDD count (obtained by averaging over > 20 AFM 5 × 5 μm 2 images at different positions on the 2″ wafer) on the GaN buffer layer is found to be 1.8 (± 0.2) × 10 8 cm −2 and that on the AlN buffer layer 4.5 (± 0.2) × 10 8 cm −2 . This low TDD agrees with the Raman results, indicating that such a substrate can be used for large-scale technology based on GaN vertical light-emitting devices.
TEM analysis and low TDD. TEM and high-resolution TEM (HRTEM) must be carried out to investigate the actual TDD and analyze the dislocation mechanism. The interface between GaN/β -Ga 2 O 3 was studied by cross-sectional TEM, as shown in Fig. 2. The samples were examined by diffraction contrast, viewing close to the [1100] zone axis by tilting the sample to excite the g = (0002) and (1120) Bragg reflections. The electron diffraction pattern taken from the GaN epilayer (the top inset in Fig. 2(a)) shows a single-crystal wurtzite (0001) structure with no zinc-blende regions or misoriented grains. Diffraction patterns pertaining to the labeled regions in Fig. 2, taken near the interface (center inset), reveal a strong epitaxial relationship between the substrate and the GaN epilayer. The bottom inset of Fig. 2 shows an image taken along the GaN [1100] zone axis, parallel to the β -Ga 2 O 3 [110] direction.
Plan-view TEM analysis was performed to confirm the TDD in the epilayer by bright-field imaging in the vicinity of the [0001] zone axis. Figure 3(a) shows a plan-view TEM image taken under two-beam diffraction conditions with g = (1120), allowing accurate determination of the TDs from an average of 40 images. The TDD obtained at the centers of GaN grains (of ~1-2 μm diameter) is considerably lower (~4.8 × 10 7 cm −2 ) for growth on Ga 2 O 3 than for growth on flat Al 2 O 3 using conventional MOCVD 30 . As can be seen in Fig. 3(a), TDs appear to migrate to the grain boundaries and are depicted in the image as chains of dark spots delineating the grain boundary. The total average TDD is found to be low (~1.9 (± 0.2) × 10 8 cm −2 ). This TDD value is in good agreement with the results obtained by Raman and AFM analysis.
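As a quick sanity check on numbers like these, a dislocation density is simply a count divided by the imaged area. The sketch below (with an invented count and field of view, not the paper's data) shows the unit conversion:

```python
def dislocation_density_cm2(count, width_um, height_um):
    """Convert a dislocation count in a plan-view image to a density in cm^-2."""
    area_cm2 = (width_um * 1e-4) * (height_um * 1e-4)  # 1 um = 1e-4 cm
    return count / area_cm2

# Hypothetical: 24 dislocations counted in a 5 um x 5 um plan-view field
print(f"{dislocation_density_cm2(24, 5.0, 5.0):.2e} cm^-2")  # ~9.6e7 cm^-2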
HRTEM analysis and TD annihilation mechanism.
To investigate the TDD reduction mechanism, we carried out TEM and HRTEM analyses. The c- and a-components of the Burgers vectors of the TDs are visible in the cross-sectional images in Fig. 3(b,c), respectively. With the diffraction condition g = (0002), only screw and mixed dislocations are visible, whereas edge and mixed components are visible under the g = (1120) condition 31 . The (a):(c+a) dislocation ratio is observed to be approximately 1:2, thus differing from the 1:1 ratio typical of dislocation types in standard low-TDD GaN grown on (0001) Al 2 O 3 32 . The TEM cross-sectional images (Fig. 3(b,c)) for g = (0002) and g = (1120) show that the GaN/Ga 2 O 3 interface is abrupt and characterized by a high initial TDD, followed by a gradual reduction in density within the GaN layer (at distances < 200 nm from the substrate) as the film continues to grow. Figure 3(d) shows closed dislocation loops (indicated by yellow arrows, taken with g = (0002)). Figure 3(e,f) show HRTEM cross-sectional views with g = (0002) reflections of TD loops. Figure 3(d-f) demonstrate that the TDs bend into the basal plane, react with dislocations of opposite phase, and are eliminated by forming closed loops in the low-temperature GaN buffer layer and in the lower regions of the GaN epilayer. Therefore, the dislocations do not propagate to the upper part of the GaN epilayer. HRTEM and FFT analyses were carried out to investigate the origin of the TD annihilation mechanism. Figure 4(a) shows the HRTEM image of the overlying GaN epilayer, the low-temperature-grown undoped GaN buffer layer (9 ± 1 nm), and the interface between the buffer layer and the monoclinic β -Ga 2 O 3 substrate. The surface of the flat β -Ga 2 O 3 substrate is characterized by slightly irregular "nano-hump" features that reach < 4 nm in height at the interface. As a result of this irregular feature, dislocations grow at an angle with respect to the [0001] direction 33 , as indicated by the dotted yellow arrow in Fig. 4(a). Dislocations of this type usually bend and propagate horizontally 33 . FFT images taken from the overlying n-GaN epilayer, the buffer layer, and the interface are shown in Fig. 4(b-d), respectively. The FFT image of the interface (Fig. 4(d)) reveals complete distortion. This distortion extends to the buffer layer, as shown in the FFT image of the low-temperature GaN buffer layer (Fig. 4(c)), followed by a highly crystalline, roughly relaxed n-GaN epilayer, depicted in the FFT image of Fig. 4(b).
The distortion of the buffer layer may absorb the effect of misfit dislocations caused by lattice mismatch, leading to TD bending in the buffer layer 34 . These curved dislocations possess Burgers vectors of the same magnitude and opposite direction, form pairs (Fig. 3(e,f)), and are eliminated near the interface. As a result, TD propagation to the upper parts of the epilayer ceases. Figure 4(e) shows an HRTEM strain map along the vertical direction, produced by applying the Geometrical Phase Analysis (GPA) program to a dislocation loop within the buffer layer, confirming the TD elimination near the interface. Furthermore, we observed TD defects growing vertically along the c-axis. Some of these defects stop propagating beyond the first 300 nm above the substrate, as shown in the HRTEM image (Fig. S1, supporting information), due to strain relaxation in the n-GaN epilayer above the distorted buffer layer 34 . This low-TDD mechanism is posited to be the reason for the sharp (0002) RC peak of GaN grown on (201)-oriented β -Ga 2 O 3 with a GaN buffer layer.
Optical properties of n-GaN epilayer. A time-integrated low-temperature PL spectrum of the Si-doped
GaN/(201)-oriented β -Ga 2 O 3 epilayer (GaN buffer layer) is shown in Fig. 5(a). The spectrum displays an intense near-band-edge (NBE) emission under an excitation density of 6 W/cm 2 , centered at 3.47 eV, which has previously been attributed to band-to-band recombination and to bound and free exciton recombination 35,36 . The inset of Fig. 5(a) shows that a GaN epilayer grown on a GaN buffer layer rather than on an AlN buffer layer has a 12-fold higher PL intensity at RT with negligible yellow-band luminescence, suggesting a higher-quality GaN epilayer and low TDD when using a low-temperature GaN buffer layer. The peak at 3.47 eV (Fig. 5(a)) can be attributed to direct transitions between the conduction band and the valence band tail states, as well as to the random distribution of dopants, resulting in random fluctuations of the doping concentration on a microscopic scale 37,38 . At RT, a slight broadening of the FWHM of the NBE peak (74 meV) is observed due to the presence of the Si dopant 39 . This broadening of the FWHM is expected as a result of the high carrier concentration (at the low-energy side) introduced by Si impurities compared to undoped GaN, and it can be explained by the tailing of the density of states caused by potential fluctuations introduced by the random distribution of Si dopants 39 . In addition, residual acceptors may contribute to the NBE peak broadening in the same way by introducing potential fluctuations 40 . Intensity power dependence studies show that decreasing the excitation intensity by three orders of magnitude produces no significant changes in the observed PL spectra, indicating that no band-gap filling or saturation of the defect level occurs and suggesting that these effects do not contribute to the broadening 41 . Peaks ascribed to donor-acceptor pair recombination accompanied by longitudinal optical (LO) phonon replicas were observed in the 3.28−2.98 eV energy range, as shown in Fig. 5(a,b). The phonon replicas are separated by an energy difference of about 97 meV. Figure 5(b) shows a typical temperature-dependent PL spectrum of Si-doped GaN between 8 K and RT. As the temperature increases, the NBE peak weakens and becomes slightly red-shifted. The LO phonon replicas weaken dramatically above 150 K and almost disappear at RT. Figure 5(c) shows the TRPL spectra of the Si-doped GaN/(201)-oriented β -Ga 2 O 3 epilayer (GaN buffer layer) at 4 K and RT. The TRPL decay of the Si-doped GaN epilayer can be described by a biexponential fit. The biexponential decay process occurs in a multilevel system, which arises following the capture of carriers at deeper non-radiative centers either in the film or at the interface 42,43 . The biexponential decay can be described as 44 I(t) = A 1 exp(−t/τ 1 ) + A 2 exp(−t/τ 2 ), where A 1 and A 2 are adjustable constants, and τ 1 and τ 2 are the fast and slow decay times, respectively. The decay times τ 1 and τ 2 are measured as 51 ps and 189 ps, respectively, at 4 K, and as 71 ps and 352 ps, respectively, at RT. The radiative recombination lifetime of the donor-bound exciton in intentionally and unintentionally doped GaN is typically in the range of 30 to 100 ps 45 at RT, increasing to 530 ps at 4 K 41 . A high donor concentration (10 18 cm −3 ) in a high-quality n-GaN epilayer has previously been shown to lead to a fast decay and a broad spectrum 43 . Hence, even at low temperature, it is difficult to clearly distinguish between free and donor-bound excitons.
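As an illustrative sketch (not the authors' analysis code), a biexponential decay of this form can be fitted with SciPy; the time axis, noise level, amplitudes, and initial guesses below are assumptions chosen to mimic the 4 K lifetimes quoted above:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Biexponential decay I(t) = A1*exp(-t/tau1) + A2*exp(-t/tau2)."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic decay trace using the reported 4 K lifetimes (51 ps and 189 ps)
t = np.linspace(0, 1000, 500)                               # time axis in ps
trace = biexp(t, 1.0, 51.0, 0.4, 189.0)
trace += np.random.default_rng(0).normal(0, 0.01, t.size)   # detector noise

popt, _ = curve_fit(biexp, t, trace, p0=(1.0, 40.0, 0.5, 200.0))
print(f"tau1 = {popt[1]:.0f} ps, tau2 = {popt[3]:.0f} ps")
```

Fitting a streak-camera trace this way recovers the fast and slow components that the paper attributes to the multilevel recombination system.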
By defining the internal quantum efficiency η int (T) as the ratio of the time-integrated PL intensity at each temperature relative to that at the lowest temperature (4 K), and by assuming that the non-radiative channels are inactive at the lowest temperature (in line with the approach used in Rashba's treatment 43 ), we estimated the radiative τ rad (T) and non-radiative τ nr (T) recombination lifetimes using the following equations: 1/τ PL (T) = 1/τ rad (T) + 1/τ nr (T) and η int (T) = τ PL (T)/τ rad (T), so that τ rad (T) = τ PL (T)/η int (T) and τ nr (T) = τ PL (T)/[1 − η int (T)], where τ PL (T) is the total recombination lifetime. The radiative and non-radiative recombination lifetimes obtained are plotted in Fig. 5(d); the squares and the circles indicate the estimated τ rad and τ nr , respectively. As the temperature increases from 4 K to RT, τ rad increases super-linearly from 51 ps to 2.8 ns, as shown in Fig. 5(d). Such behavior has been reported for different Si-doping concentrations of GaN 46 . The decay time behavior of highly Si-doped GaN may be explained by the fact that the recombination is governed by bound excitons at low temperatures (< 10 K), whereas in the intermediate temperature range (20 to 100 K) the free exciton decay rate is largely weakened by the high background electron concentration (due to the presence of Si dopant atoms).
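A minimal numerical sketch of this decomposition, assuming hypothetical η int values normalized to unity at 4 K (the measured τ PL series below is illustrative, not the paper's full data set):

```python
import numpy as np

def decompose_lifetimes(tau_pl, eta_int):
    """Split the measured PL lifetime into radiative and non-radiative parts,
    using tau_rad = tau_PL/eta_int and tau_nr = tau_PL/(1 - eta_int)."""
    tau_pl = np.asarray(tau_pl, dtype=float)
    eta_int = np.asarray(eta_int, dtype=float)
    tau_rad = tau_pl / eta_int
    with np.errstate(divide="ignore"):       # eta_int = 1 gives tau_nr = inf
        tau_nr = tau_pl / (1.0 - eta_int)
    return tau_rad, tau_nr

# Hypothetical temperature series; eta_int = 1 at 4 K by construction
tau_pl = [51e-12, 60e-12, 68e-12, 71e-12]    # seconds
eta    = [1.00, 0.60, 0.20, 0.025]
tau_rad, tau_nr = decompose_lifetimes(tau_pl, eta)
print(tau_rad)   # grows toward room temperature, as in Fig. 5(d)
print(tau_nr)    # inf at 4 K, where non-radiative channels are assumed inactive
```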
Since the sample is heavily doped, the effect of the free excitons is weakened, and donor-to-band transitions contribute significantly to the recombination events at intermediate and higher temperatures (~100 to 300 K). However, for undoped and moderately doped GaN epilayers, the radiative decay time is considerably enhanced by the free exciton transition 43 . As the temperature increases, the non-radiative lifetime decreases from 300 to 72 ps, reflecting the thermal activation of non-radiative recombination processes (Fig. 5(d)) 43 .
The quality of InGaN grown on the n-GaN epilayer. The growth of In x Ga 1−x N epilayers by MOCVD is influenced by the surface morphology of the underlying n-GaN layer, the growth mode, indium incorporation, and carrier gas composition 47 . It is therefore necessary to demonstrate that the Ga 2 O 3 substrate can produce In x Ga 1−x N materials of high structural and optical quality, grown on this n-GaN/(201)-oriented β -Ga 2 O 3 (using a GaN buffer layer), that can be used in the development of single and multiple quantum well vertical devices. We investigated the morphology of the In x Ga 1−x N samples by AFM (Fig. 6(a-c)), whereby the surface roughness can be represented by the root-mean-square (RMS) average 48 . At least one pit (dislocation) located in the center of each spiral is observed for all InGaN epilayers, as shown in Fig. 6(a-c) 49 . Furthermore, the PL emissions of all In x Ga 1−x N films grown on the (201) β -Ga 2 O 3 substrate showed high peak intensity compared to those grown on Al 2 O 3 substrates (Fig. 6(e)), indicating that the β -Ga 2 O 3 substrate is a potential candidate for growing good-quality InGaN materials. The blueshift of the NBE peak (Fig. 6(e)) of the InGaN sample grown on (201) β -Ga 2 O 3 compared to that grown on Al 2 O 3 can be due to a different degree of strain and InN compositional fluctuation.
Conclusion
We have reported high optical and structural quality In x Ga 1−x N epilayers (0 ≤ x ≤ 0.23) grown on (201) β -Ga 2 O 3 substrate using a low-temperature undoped GaN buffer layer. β -Ga 2 O 3 is a potential substrate material for III-nitrides, combining the transparency of sapphire and the conductivity of SiC. Raman mapping and XRD RC show a high-quality, uniform, nearly strain-free 2″ wafer grown on (201) β -Ga 2 O 3 substrate using a GaN buffer layer, confirming that this substrate can be used for large-scale technology for nitride-based vertical devices. TEM analysis confirms the low TDD of the single-crystalline epilayers. We found that TD annihilation occurred near the interface between the substrate and the buffer layer, as a result of TDs bending into the basal plane and reacting with dislocations of opposite phase before eliminating each other. The TD bending can be ascribed to the combination of the nature of the (201) β -Ga 2 O 3 surface (nano-humps) and the buffer layer distortion, followed by a roughly strain-free single-crystal Si-doped GaN epilayer, preventing these TDs from continuing to propagate into this epilayer. The PL measurements revealed a high-intensity NBE emission peak with a weak yellow band, whereas TRPL showed a low non-radiative contribution to the NBE emission. This work demonstrates that improved-quality, stable III-nitride crystals can be grown on (201) β -Ga 2 O 3 substrate, relative to those grown on (100)-oriented β -Ga 2 O 3 . We showed that conductive β -Ga 2 O 3 promises better current distribution, thermal management and light extraction compared to sapphire, testifying to its suitability as a substrate for manufacturing large-scale, high-efficiency vertical devices, which increases device lifetime and simplifies the fabrication process compared to lateral devices. The dislocation density in nitride films grown on (201)-oriented β -Ga 2 O 3 may be reduced further as crystal growth continues to be optimized, either by lateral overgrowth or by substrate patterning.
PORTABLE IMAGERY QUALITY ASSESSMENT TEST FIELD FOR UAV SENSORS
Nowadays the imagery data acquired from UAV sensors are the main source of data used in various remote sensing applications, photogrammetry projects and imagery intelligence (IMINT), as well as in other tasks such as decision support. Quality assessment of such imagery is therefore an important task. The research team from the Military University of Technology, Faculty of Civil Engineering and Geodesy, Geodesy Institute, Department of Remote Sensing and Photogrammetry has designed and prepared a special test field, the Portable Imagery Quality Assessment Test Field (PIQuAT), that provides quality assessment in field conditions of images obtained with sensors mounted on UAVs. The PIQuAT consists of 6 individual segments that, when combined, allow the radiometric, spectral and spatial resolution of images acquired from UAVs to be determined. All segments of the PIQuAT can be used together in various configurations or independently. All elements of the Portable Imagery Quality Assessment Test Field were tested in laboratory conditions in terms of their radiometry and spectral reflectance characteristics.
INTRODUCTION
Nowadays there is a great increase in the demand for geospatial information, especially remote sensing data obtained in different ranges of the electromagnetic spectrum with different kinds of sensors (e.g. multispectral, hyperspectral, etc.), both for military and civilian applications. Huge amounts of remote sensing data, especially imagery data, are acquired all over the world at different heights (i.e. from the ground, from satellites, aircraft or unmanned airborne platforms) and in different ranges of the electromagnetic spectrum with different types of sensors (i.e. passive and active), and they are reliable material for carrying out various types of spatial and spectral analyses. In the last few years a significant increase can be observed in the availability of different imagery data, especially data obtained with relatively economical and optimal sensors placed on Unmanned Aerial Vehicles (UAV), which can be used both for photogrammetry and remote sensing (Colomina & Molina, 2014; Dąbrowski et al., 2014). Unmanned Aerial Vehicles have great potential for various remote sensing applications thanks to several advantages of UAVs over satellites and aircraft. The most important advantage of UAVs is the possibility of their quick and repeated deployment. Moreover, they are less costly and much safer than standard aircraft. UAVs can fly at various heights and are flexible in terms of mission planning. In addition, it is possible to obtain imagery at sub-decimetre resolution with sensors mounted on them (the spatial resolution of sensors mounted on UAVs is greater than that of satellite sensors, i.e. several centimetres instead of hundreds of metres), and unlike satellite and aerial observations they do not have observation gaps due to high cloud coverage or other atmospheric phenomena, and their coverage is not limited over certain regions by an orbit around the Earth (Hein & Bento, 2008; Bending et al., 2012).
The increased availability of UAVs in recent years and all the mentioned advantages of unmanned aerial systems have resulted in their frequent adoption for a growing range of remote sensing tasks (environmental and anthropogenic) (Gupta et al., 2013), which among others include fine-scale topographic mapping, homeland security (Henriques, 2014), urban mapping and monitoring (Fernandez Galarreta et al., 2015), vegetation mapping (Berni et al., 2009; Salami et al., 2014; von Bueren et al., 2015; Feng et al., 2015), precision agriculture (Honkavaara et al., 2013), wetlands monitoring, wildfire research (i.e. forest fire monitoring) (Casbeer et al., 2005; Ollero et al., 2006; Merino & Ollero, 2010), river detection and tracking, and disaster management and surveillance for emergencies (Ezequiel et al., 2014). Furthermore, UAVs are used for Earth science research, humanitarian observations, traffic monitoring, inspection of industrial facilities such as gas pipelines, volcanic gas sampling, and many other environmental tasks (e.g. aquatic ecosystem monitoring (Flynn & Chapra, 2014), coastal management (Mancini et al., 2013), ice cover monitoring (Lešinskis et al., 2012), monitoring of wildlife (Jones IV et al., 2006), monitoring of climate change and observing weather phenomena, etc.), as well as for producing digital elevation models of terrain (Dowling & Gallant, 2013; Ruiz et al., 2013). The implementation of all these applications with the use of UAVs is possible thanks to the various sensors, working in different ranges of the electromagnetic spectrum, that can be mounted on unmanned aerial vehicles. Nowadays, depending on the application, it is possible to use RGB-IR digital cameras, video cameras, multispectral and hyperspectral sensors, thermal cameras or even a LIDAR sensor. The imagery data acquired with UAV sensors are the main source of data used in various remote sensing applications, photogrammetry projects and imagery intelligence (IMINT). In the case of imaging sensors mounted on different kinds of UAVs, the determination of the interpretational potential of the acquired imagery is one of the most important steps in the chain of data acquisition, data post-processing and image analysis for remote sensing and photogrammetric purposes. Field testing and calibration of digital imaging sensors mounted on UAVs are essential to assess their quality and characterize their performance; therefore the spatial, radiometric and spectral properties of digital imaging systems require not only calibration but also testing before operational use. UAV sensors can be and are tested in laboratory conditions, but owing to the varying flight parameters of UAVs, the exposure parameters of the imaging sensor, light conditions and many other factors that can occur during a UAV flight, it is important to maintain control over the spectral, radiometric and spatial parameters of the sensor in use. Because of changing external conditions, it is important to assess image quality and to test the imaging system in terms of radiometric, spectral and spatial parameters in field conditions before an operational flight. To provide such an assessment it is important to have a field test that is portable and applicable to imaging sensors in many field conditions.
IMAGE QUALITY ASSESSMENT
Digital images obtained with sensors mounted on UAVs are subject to a variety of distortions during acquisition, caused for example by external conditions (e.g. amount of light, exposure time, wind speed, etc.). Therefore the assessment of their quality is very important. Until recently the only method of quantifying visual image quality was subjective evaluation (Zhou et al., 2004). Image quality can be defined as a subjective impression ranging from poor to excellent. In practice, however, subjective evaluation alone is usually too inconvenient, time-consuming and expensive, especially for multispectral and hyperspectral images. Human perception of good image quality is based on real-world experience of seeing colours, intensities and textures and distinguishing different objects. Image quality as a subjective impression ranging from poor to excellent is a somewhat learned ability, accomplished by the brain; the perceptual quality of the same scene can therefore vary between individuals and over time for the same individual. Because large variations exist in observers' judgments as to the correct rank ordering of image quality from poor to best, it is impossible to place image quality on an absolute scale (Holst, 1998). There are many formulas for assessing image quality. One possibility is to evaluate the spectral, radiometric and spatial resolution of the image. There are many tests that provide assessment of the image quality of aerial sensors in field conditions. However, due to their dimensions and construction they are permanent field tests that cannot be used for evaluating the quality of images obtained with sensors mounted on UAVs. All such tests allow image quality to be checked in terms of spectral, spatial and radiometric resolution. Usually they consist of a Siemens star and/or a sparse resolution bar target, a greyscale, and different reflectance targets, e.g. Honkavaara et al. (2006), Honkavaara et al. (2008), Honkavaara et al. (2010), Markelin et al. (2010). In response to the lack of a portable test field for image quality assessment of sensors mounted on UAVs that could be used in different conditions and for a great variety of sensors, the research team from the Military University of Technology, Faculty of Civil Engineering and Geodesy, Geodesy Institute, Department of Remote Sensing and Photogrammetry has designed and prepared a special test field, the Portable Imagery Quality Assessment Test Field (PIQuAT), to fill the gap in this area. The PIQuAT is a portable test field, dedicated to different sensors mounted on UAVs, that allows evaluation of image quality. The PIQuAT, designed and built by the research team from MUT, allows the radiometric, spectral and spatial resolution of images acquired from UAVs to be determined.
THE PIQUAT
The Portable Imagery Quality Assessment Test Field (PIQuAT) is a portable test field, dedicated to different sensors mounted on UAVs, that allows evaluation of image quality. It consists of 6 individual parts made from wooden plates, 70 x 70 cm each, which when combined allow the radiometric, spectral and spatial resolution of images acquired from UAVs to be determined. All parts of the PIQuAT were checked under laboratory conditions. All segments of the PIQuAT can be used together in various configurations or independently.
Spatial resolution
The spatial resolution describes the ability of a sensor to identify the smallest detail of a pattern in an image. It is the distance between distinguishable patterns or objects in an image that can be separated from each other, and it is often expressed in metres. The spatial resolution of digital sensors depends primarily on their Instantaneous Field of View (IFOV) and the flight height (Levin, 1999). The spatial resolution and spatial attributes of a sensor can be described by the GRD (Ground Resolved Distance) and GSD (Ground Sampling Distance). The first segment of the PIQuAT serves to determine the spatial resolution of UAV sensors. It consists of a quarter of a 32-sector Siemens star (radius 70 cm) (Figure 1, Figure 2), which allows the spatial resolution of the sensor to be determined.
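For orientation, the nominal GSD of a nadir-looking frame camera follows directly from the pixel pitch, focal length and flying height. The sketch below uses assumed, illustrative camera parameters, not those of any sensor tested with the PIQuAT:

```python
def ground_sampling_distance(pixel_pitch_m, focal_length_m, flight_height_m):
    """GSD = pixel pitch * flying height / focal length (nadir frame camera)."""
    return pixel_pitch_m * flight_height_m / focal_length_m

# Hypothetical UAV camera: 4.5 um pixels, 16 mm lens, flown 100 m above ground
gsd = ground_sampling_distance(4.5e-6, 16e-3, 100.0)
print(f"GSD = {gsd * 100:.1f} cm/pixel")   # ~2.8 cm
```

The Siemens star then reveals the effectively resolved detail (GRD), which in practice is coarser than this nominal GSD because of optics, motion blur and atmospheric effects.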
Radiometric resolution
The radiometric characteristics describe the actual information content of an image.Every time an image is acquired on film or by a detector, its sensitivity to the magnitude of the electromagnetic energy determines the radiometric resolution.
The radiometric resolution can be defined as the sensitivity of a sensor to incoming flux (Levin, 1999;Orych et al., 2014).
The second segment of the PIQuAT is built from three parts that together form a 12-level greyscale board (from black, 3% reflectance, to white, 85% reflectance). This segment should be used for the evaluation of radiometric resolution (Figure 3). Thanks to the 12 levels of grey it is possible to check the radiometric capacity of the sensor's array.
Figure 3. The project of the segment no 2: 12-level greyscale
All shades of grey presented in Figure 4 were prepared by mixing black and white paint in the proportions shown (Figure 3). To avoid gloss, only matt paint was used. The elements of the second segment (Figure 4) of the PIQuAT were measured under laboratory conditions with the ASD FieldSpec 4 Wide-Res spectroradiometer. Spectral reflectance coefficients were obtained for all 12 elements, so this segment can also be used for assessing the spectral resolution of the investigated sensor (Figure 5).
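A minimal sketch of how such a greyscale can be used in practice: regress the mean digital number (DN) measured over each patch against the patch's known reflectance to check the linearity of the sensor's radiometric response. All values below are illustrative assumptions, not the PIQuAT laboratory measurements:

```python
import numpy as np

# Hypothetical known reflectances of the 12 grey steps (3% ... 85%)
reflectance = np.linspace(0.03, 0.85, 12)
# Hypothetical mean DNs measured over each patch in a UAV image
dn = 12.0 + 270.0 * reflectance + np.random.default_rng(1).normal(0, 2.0, 12)

# Linear radiometric response model: DN = gain * reflectance + offset
gain, offset = np.polyfit(reflectance, dn, 1)
residuals = dn - (gain * reflectance + offset)
r2 = 1 - residuals.var() / dn.var()
print(f"gain={gain:.1f}, offset={offset:.1f}, R^2={r2:.4f}")
```

A high R^2 indicates a linear response across the 12 steps, while systematic residuals at the dark or bright end would reveal clipping or a non-linear tone curve.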
Spectral resolution
Spectral resolution is the sensitivity of a sensor to a specific wavelength range. Different materials reflect and absorb differently at different wavelengths. The reflectance spectrum of a material is a plot of the fraction of radiation reflected as a function of the incident wavelength, and serves as a unique signature for the material. In principle, a material can be identified from its spectral reflectance signature if the sensing system has sufficient spectral resolution to distinguish its spectrum from those of other materials (Levin, 1999). The last segment of the PIQuAT was designed on the basis of this concept. The third segment is built from two elements. The first element is covered with four different types of materials: wood, red PVC, an aluminium plate and green textile (Figures 6-7). Such a diversity of materials allows the ability of the sensor to distinguish different materials on the basis of their spectral reflectance coefficients to be evaluated. Spectral reflectance coefficients were obtained for all elements of this segment under laboratory conditions. As shown in Figure 10, the spectral characteristics of the red and green paints are quite similar in the visible spectrum range; however, it is possible to distinguish them in the infrared spectral range.
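The sketch below illustrates this idea with the spectral angle between two reflectance spectra. The wavelength grid and paint spectra are hypothetical stand-ins for the laboratory measurements described above, chosen so that the paints agree in the visible bands but diverge in the near-infrared.

```python
import numpy as np

def spectral_angle(s1, s2):
    """Spectral angle (radians) between two spectra on the same wavelength
    grid; a smaller angle means the materials are harder to tell apart."""
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical spectra: identical in the visible bands, diverging in the NIR.
wavelengths = np.array([450, 550, 650, 850])      # nm, illustrative grid
paint_a = np.array([0.08, 0.10, 0.42, 0.55])      # e.g. one red paint
paint_b = np.array([0.08, 0.10, 0.42, 0.25])      # same look in VIS, darker in NIR

vis = slice(0, 3)                                 # 450-650 nm only
print(spectral_angle(paint_a[vis], paint_b[vis])) # 0.0: indistinguishable in VIS
print(spectral_angle(paint_a, paint_b))           # > 0: separable with a NIR band
```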
CONCLUSIONS
The Portable Imagery Quality Assessment Test Field (PIQuAT) provides quality assessment, in field conditions, of images obtained with sensors mounted on UAVs. Thanks to its construction (separate segments, wooden plates, 70 x 70 cm dimensions), the PIQuAT can be transported with UAVs and set up in any field conditions. All segments can be mounted separately or combined, depending on the field conditions. The three segments, which provide evaluation of three types of resolution (i.e. spatial, spectral and radiometric), make the test field well suited for image quality assessment. Moreover, the portable construction of the PIQuAT allows it to be used without any lasting interference with the area imaged from the UAV system. The PIQuAT therefore facilitates testing, in field conditions, of different UAV sensors used for remote sensing or IMINT purposes.
Figure 1. The project of segment no. 1: the quarter of a Siemens star (diameter 70 cm)

Figure 6. The project of segment no. 3: various materials

Figure 8. The project of segment no. 3: different paints of the same colour

Figure 10. Segment no. 3: spectral reflectance coefficients of various paints
Chloroquine improves left ventricle diastolic function in streptozotocin-induced diabetic mice
Diabetes is a potent risk factor for heart failure with preserved ejection fraction (HFpEF). Autophagy can be activated under pathological conditions, including diabetic cardiomyopathy. The therapeutic effects of chloroquine (CQ), an autophagy inhibitor, on left ventricle function in streptozotocin (STZ)-induced diabetic mice were investigated. The cardiac function, light chain 3 (LC3)-II/LC3-I ratio, p62, beclin 1, reactive oxygen species, apoptosis, and fibrosis were measured 14 days after CQ (ip 60 mg/kg/d) administration. In STZ-induced mice, cardiac diastolic function was decreased significantly with normal ejection fraction. CQ significantly ameliorated cardiac diastolic function in diabetic mice with HFpEF. In addition, CQ decreased the autophagolysosomes, cardiomyocyte apoptosis, and cardiac fibrosis but increased LC3-II and p62 expressions. These results suggested that CQ improved the cardiac diastolic function by inhibiting autophagy in STZ-induced HFpEF mice. Autophagic inhibitor CQ might be a potential therapeutic agent for HFpEF.
Introduction
Diabetes mellitus (DM) is a potent risk factor for heart failure (HF). DM contributes to the pathophysiologic process of HF with reduced or preserved ejection fraction (EF). 1 The prevalence of heart failure with preserved ejection fraction (HFpEF) has increased in the past 2 decades, and DM is frequently associated with HFpEF. 2 Population-based epidemiological studies reported that almost half of the patients with congestive HF have preserved EF and 30%-40% of these HFpEF patients have DM. 3 HFpEF associated with DM has a long period of irreversible cardiac damage; therefore, early recognition and treatment of the preclinical cardiac abnormalities are important. 4 Recently, several studies have established the key role of autophagy in the development of diabetic cardiomyopathy. In hearts from streptozotocin (STZ)-induced type 1 diabetic mice, diastolic function was impaired, though autophagic activity was significantly increased, as evidenced by increases in microtubule-associated protein 1 light chain 3 (LC3) and LC3-II/-I ratios and SQSTM1/p62 (sequestosome-1) and by the abundance of autophagic vacuoles and lysosomes detected electron microscopically. 5 Mellor et al 6 reported that increased myocardial autophagic flux in fructose diet-induced type 2 diabetic mice resulted in pathological remodeling of the heart. In cultured neonatal rat cardiomyocytes, the suppression of autophagy by 3-methyladenine and/or siRNA specific for beclin 1 or Atg7 attenuated high glucose-induced cardiomyocyte injury. 7 Xu et al 8 demonstrated that inhibition of autophagy was cardioprotective in type 1 diabetes, which was linked to activation of noncanonical autophagy and improvement in mitophagy. We hypothesized that inhibition of autophagy would ameliorate cardiac diastolic function in diabetic mice. Chloroquine (CQ), an autophagy inhibitor, has been used for the treatment of various diseases for a long time. CQ inhibits lysosomal acidification and therefore prevents autophagy by blocking autophagosome fusion and degradation. It has been reported that the administration of CQ reversed cardiac dysfunction and atrophy via the suppression of autophagic activity. 9 However, other results demonstrated that CQ induces mitochondrial dysfunction in cardiomyocytes and significantly impairs mitochondrial antioxidants in pressure overload hypertrophy. 10 In the present study, we investigated the effect of the autophagic inhibitor CQ on cardiac function in STZ-induced diabetic mice with HFpEF.
Materials and methods

Experimental animals and protocol
All animal procedures used in this study were performed in accordance with the guidelines of the National Institutes of Health and approved by the Animal Care and Use Committee of Guangzhou Medical University (No 2010-225). Specific pathogen-free male C57BL mice (age 6 weeks, weighing 16-18 g, Guangdong Experimental Animal Center) were randomly divided into three groups (n=8): 1) control; 2) STZ; and 3) CQ. Hyperglycemia of the mice in the STZ and CQ groups was induced with STZ (ip 60 mg/kg/d, prepared daily in citrate buffer pH 4.5 for maximal stability) for 6 days. Six days after STZ injection, hyperglycemia was confirmed by fasting blood glucose measurement using tail-vein blood with the FreeStyle Flash Blood Glucose Monitoring System (Ultra; LifeScan, New Brunswick, NJ, USA). Mice with blood glucose concentrations ≥16.7 mmol/L were used for the study. 11 At the end of 9 weeks, the mice in the CQ group were treated with CQ (ip 60 mg/kg/d; Sigma-Aldrich Co., St Louis, MO, USA) for 14 days. Two weeks after treatment, the animals underwent echocardiography followed by heart tissue experiments.

Echocardiography

Functional and dimensional parameters were measured at the end of 11 weeks after STZ-induced diabetes using a Vevo 2100 echocardiography system (VisualSonics, Toronto, Canada). Anesthesia was induced by 1.5% isoflurane in 100% oxygen. Following anesthetic induction, the mouse was placed on a heated platform for echocardiography. The body temperature was maintained at 37.0°C, and anesthesia was maintained with 0.5%-1.0% isoflurane in 100% oxygen. The isoflurane concentration was regulated according to the heart rate in order to ensure an adequate depth of anesthesia. Standard imaging planes, M-mode, color-mode, Doppler, and functional calculations were acquired according to the guidelines of the American Society of Echocardiography. The parasternal short-axis view of the left ventricle (LV) was used to guide calculations of percentage EF, percentage fractional shortening (FS), and ventricular dimensions and volumes. Moreover, passive LV filling peak velocity (E, mm/s) and atrial contraction flow peak velocity (A, mm/s) were acquired from images of mitral valve Doppler flow in the apical four-chamber view. Considering that heart rate (HR) positively correlates with systolic function, the HRs of the animals during the echocardiographic study were maintained in the range of 500-550 beats/min for M-mode, 400-450 beats/min for Doppler studies, and 450-500 beats/min for B-mode.
Preparation for heart tissue samples
At the end of 11 weeks, the mice were weighed and anesthetized with isoflurane and sacrificed by cervical dislocation. The hearts were rapidly excised and placed into a dish containing physiological saline. All four chambers were removed and weighed separately. The LV was snap frozen in liquid nitrogen for subsequent analysis.
Reactive oxygen species detection in the myocardium
Heart tissue sections were harvested and directly embedded in optimal cutting temperature compound. Superoxide production in the heart was detected using dihydroethidium (DHE) staining (Sigma-Aldrich Co.). Frozen heart sections (10 µm) were incubated with 10 µM DHE for 45 minutes at 37°C in a humidified chamber protected from the light. The average fluorescence intensity of the nuclei was then analyzed using Image-Pro Plus software (Media Cybernetics, Inc., Silver Spring, MD, USA).
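The study quantified DHE fluorescence with Image-Pro Plus; the snippet below is a hypothetical open-source analogue of that measurement step, assuming the red DHE channel has already been extracted as a 2-D float array.

```python
import numpy as np
from skimage.filters import threshold_otsu

def mean_dhe_intensity(dhe_channel):
    """Segment DHE-positive nuclei by Otsu thresholding and return the mean
    fluorescence intensity inside the segmented regions, mimicking the
    average-nuclear-intensity readout described above."""
    mask = dhe_channel > threshold_otsu(dhe_channel)
    return float(dhe_channel[mask].mean()) if mask.any() else 0.0
```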
Assessment of apoptosis
Apoptosis was assessed using terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) according to the manufacturer's instructions. Briefly, 5 µm thick paraffin-embedded sections were prepared from the hearts of each group and deparaffinized sequentially. The sections were stained with TUNEL reagents, and the nuclei were stained with 4′,6-diamidino-2-phenylindole. The samples were analyzed using a fluorescence microscope (TE2000; Nikon Instruments, Melville, NY, USA). The number of TUNEL-positive cardiomyocyte nuclei and the total number of cardiomyocyte nuclei in each field were counted. The ratio of apoptotic cardiomyocytes was calculated by dividing the number of TUNEL-positive cardiomyocyte nuclei by the number of total cardiomyocyte nuclei.
Masson's trichrome staining for fibrosis
The hearts were removed, perfusion fixed with 10% neutral buffered formalin overnight, and embedded in paraffin. Sections were cut at 5 µm thickness and mounted onto slides. Masson's trichrome staining was performed for fibrosis analysis. The tissue sections were examined under a light microscope (EVOS fl; AMG, Mill Creek, WA). The relative fibrosis area was determined with Image-Pro Plus software (Version 6.0), and the mean values of relative fibrosis area were obtained by one investigator blinded to the groups.
Statistical analyses
Statistical analyses were performed in GraphPad Prism (Version 6.0; GraphPad Software, Inc., La Jolla, CA, USA). All data were expressed as mean ± standard error of the mean. A one-way analysis of variance with a Tukey post hoc analysis, which calculates P-values corrected for multiple comparisons, was performed to assess differences between the three groups. A P-value of <0.05 was considered statistically significant.
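A minimal sketch of this analysis in Python, assuming three groups of n = 8 each; the synthetic E/A values are illustrative, not study data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_three_groups(control, stz, cq, alpha=0.05):
    """One-way ANOVA across the three groups, followed by Tukey's HSD with
    multiplicity-corrected P-values, as described above."""
    f_stat, p_val = stats.f_oneway(control, stz, cq)
    values = np.concatenate([control, stz, cq])
    labels = ["control"] * len(control) + ["STZ"] * len(stz) + ["CQ"] * len(cq)
    return f_stat, p_val, pairwise_tukeyhsd(values, labels, alpha=alpha)

# Synthetic example (illustrative numbers only):
rng = np.random.default_rng(0)
f, p, tukey = compare_three_groups(rng.normal(1.5, 0.1, 8),
                                   rng.normal(0.9, 0.1, 8),
                                   rng.normal(1.3, 0.1, 8))
print(f, p)
print(tukey)
```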
Results

Metabolic and morphometric profiles
Fasting blood glucose levels were significantly increased in STZ-induced mice compared with the control group at the age of 11 weeks (P<0.05). The blood glucose levels of STZ-induced mice did not differ significantly from those of the CQ-treated group (P>0.05; Figure 1A). The total body weight and heart weight were significantly decreased in STZ-induced mice and the CQ group compared with controls (P<0.05). However, the body weight and heart weight showed no significant difference between the CQ and STZ groups (P>0.05; Figure 1B and C). The heart weight to body weight ratio was significantly decreased in the STZ and CQ groups (P<0.05; Figure 1D). The LV weight and the LV weight to body weight ratio were significantly decreased in the STZ and CQ groups (P<0.05; Figure 1E and F).
Systolic function and diastolic function of the heart in vivo
In STZ-induced diabetic mice and CQ-treated mice, parameters of cardiac systolic function (Figure 2A), including HR (Figure 2B), FS (Figure 2C), and EF (Figure 2D), were similar to those in the control group (P>0.05). Using the pulse-wave Doppler technique, echocardiography revealed that diastolic cardiac function was significantly impaired in the STZ-induced diabetic mice (Figure 3A). The transmitral filling pattern showed an inverted E/A ratio (Figure 3B) with prolongation of E-wave deceleration time (Figure 3C) in the STZ-induced diabetic mice compared with the control group (P<0.05). CQ treatment significantly improved the E/A ratio and E-wave deceleration time (P<0.05).
Expression of LC3, p62, and beclin 1 in myocardium

LC3, p62/SQSTM1, and beclin 1 (marker molecules of autophagic activities) in the hearts are shown in Figure 4A. The LC3-II/LC3-I ratio was increased significantly in the STZ-induced mice. CQ treatment further increased LC3-II expression (Figure 4B) but decreased the LC3-II/LC3-I ratio with the increased LC3-I (Figure 4C). Compared with control, the level of p62 was increased significantly in the STZ-induced diabetic mice. The level of p62 protein was further increased significantly in the CQ-treated mice (Figure 4D). The level of beclin 1 was significantly increased in the STZ-induced diabetic mice compared with the control group. The level of beclin 1 in the CQ-treated group did not differ significantly from that of the STZ-induced mice (P>0.05; Figure 4E).
Electron microscopy
Representative electron microscopy images are shown in Figure 5. Electron microscopy revealed that formation of the autophagosomes and the autophagolysosomes was enhanced in the hearts of STZ-induced mice. The electrographic assay showed that the autophagolysosomes were significantly decreased in the hearts of the CQ treatment group compared with the STZ-induced group (P<0.05).
Reactive oxygen species levels in myocardium
Myocardial reactive oxygen species (ROS) levels were quantified by DHE staining. The ROS levels were significantly increased in the STZ-induced diabetic mice (P<0.01). Treatment with CQ produced no significant change in the ROS levels as compared with the untreated diabetic mice (P>0.05; Figure 6).
Apoptosis in myocardium
The paraffin-embedded sections of hearts from each group were stained by TUNEL to determine cardiomyocyte apoptosis. Apoptotic cardiomyocytes were observed in STZ-induced mice, and CQ treatment significantly reduced the STZ-induced apoptosis (P<0.01; Figure 7).
Fibrosis in myocardium
Fibrosis is an integral feature of diabetic cardiomyopathy. We performed Masson's staining on cardiac tissue sections to determine the development of fibrosis in mice of each group. Dramatic perivascular fibrosis was observed in STZ-induced mice, which was attenuated significantly by CQ treatment (P<0.05; Figure 8).
Discussion
In this study, the HFpEF model was established with STZ-induced diabetic mice. Diastolic dysfunction was found in STZ-induced diabetic mice, showing a reversed E/A ratio and increased mitral E-wave deceleration time. Additionally, echocardiography M-mode demonstrated normal EF, FS, and stroke volume. Therefore, the STZ-induced mouse model showed typical features of HF with normal EF. Echocardiography in type 1 diabetes mellitus patients without known coronary artery disease revealed impaired diastolic function, with a reduction in early filling and an increase in atrial filling. 12,13 Diabetic cardiomyopathy could progress to irreversible cardiac damage; therefore, early recognition and treatment of the preclinical cardiac abnormalities are important. 4 The present study showed features of metabolic syndrome, with decreased body weight and increased blood glucose in STZ-induced diabetic mice. Treatment with CQ for 14 days did not lower the plasma glucose level significantly (Figure 1). The animal model provided evidence of diastolic dysfunction tested by echocardiography. The LV mitral valve blood flow showed faster relaxation, and the E/A ratio returned to the normal level in the CQ group, indicating that CQ treatment improved the diastolic dysfunction in the STZ-induced diabetic mice (Figures 2 and 3). These findings suggested that CQ, an autophagy inhibitor, might be a useful therapeutic agent for the treatment of diabetic diastolic dysfunction.
Autophagy refers to the homeostatic cellular process of sequestering organelles and long-lived proteins in a double-membrane vesicle inside the cell (the autophagosome), where the contents are subsequently delivered to the lysosome for degradation. 14 Under basal conditions, autophagy occurs in a healthy heart; 15 however, autophagy can be activated under pathological conditions, including HF and cardiac hypertrophy. 16,17 Overactivated autophagy may affect the ultrastructure of the sarcomere and cause mitochondrial structural abnormalities. 18 A previous study showed that overactivated autophagy may harm cardiac function by affecting the titin/protein ratio. 19 Insulin acts through the mTOR pathway to inhibit autophagy. Autophagy in the heart is enhanced in type 1 diabetes but suppressed in type 2 diabetes. This difference provides important insight into the pathophysiology of diabetic cardiomyopathy, which is essential for the development of new treatment strategies. 5,20 The present study demonstrated that autophagy was enhanced in the STZ-induced diabetic mouse model (Figures 4 and 5). CQ inhibits autophagy by affecting lysosome acidification. 21 CQ alters the lysosomal pH, and the resulting lysosomal neutralization inhibits lysosome activities; it can therefore be used in assays of short-term autophagy flux. 22 In addition, CQ decreased the LC3-II/LC3-I protein ratio and the number of undigested autophagosomes observed by transmission electron microscopy in STZ-induced diabetic mice (Figure 5). The level of LC3-II correlates with the extent of autophagosome formation. CQ accumulates LC3-II, a key step in autophagosome formation, by preventing the degradation of LC3-II-containing autolysosomes. 23 The adaptor protein p62 (sequestosome-1) can bind directly to LC3 to facilitate the degradation of ubiquitinated protein aggregates by autophagy. 24 The accumulation of p62 was associated with decreased autophagy by CQ (Figure 4).

Figure 4. Effects of CQ on the expression of LC3, p62, and beclin 1 in the heart of STZ-induced mice. Notes: The levels of LC3-II and p62 were significantly increased in the STZ-induced mice, and CQ treatment further increased their expression. CQ decreased the LC3-II/LC3-I ratio with the increased LC3-I. The expression of beclin 1 in STZ-induced mice and CQ-treated mice was significantly increased. Representative images of Western blot for LC3 protein and p62 protein (A), summary data of LC3-II level (B), summary data of LC3-II/LC3-I ratio (C), summary data of p62 level (D), and summary data of beclin 1 (E). *P<0.05 vs control group, **P<0.01 vs control group, ***P<0.05 vs STZ group, and ****P<0.01 vs STZ group.

The subsequent generation of ROS and the accompanying oxidative stress in diabetes are hallmarks of the molecular mechanisms underlying diabetic cardiovascular disease. 25 In diabetic cardiomyopathy, the production of ROS induces inflammation, endothelial dysfunction, cell apoptosis, and myocardial remodeling. 26 In the present study, the effects of CQ on oxidative stress in STZ-induced mice were analyzed. The results suggested that the autophagy inhibitor CQ was not able to decrease the ROS level in the diabetic mice, indicating that CQ does not act as an antioxidant directly. Though autophagy is generally viewed as a survival mechanism, excessive autophagy has been associated with apoptosis and fibrosis. 27 In alveolar macrophages, lipopolysaccharide induced autophagy and apoptosis, and blockade of the formation of autophagosomes inhibited apoptosis via the intrinsic apoptotic pathway. 28 The apoptosis and fibrosis of the myocardium were ameliorated in the CQ-treated diabetic mice. These results showed that inhibition of excessive autophagy in STZ-induced diabetic mice was correlated with decreased apoptosis and fibrosis.

Conclusion

The present investigation demonstrates that overactivated autophagy in STZ-induced type 1 DM mice results in cardiac diastolic dysfunction. CQ improved LV diastolic function and preserved systolic contractility in STZ-induced type 1 diabetic mice by reducing excessive autophagy in the LV.
Optimization of Complex Cancer Morphology Detection Using the SIVQ Pattern Recognition Algorithm
For personalization of medicine, increasingly clinical and demographic data are integrated into nomograms for prognostic use, while molecular biomarkers are being developed to add independent diagnostic, prognostic, or management information. In a number of cases in surgical pathology, morphometric quantitation is already performed manually or semi-quantitatively, with this effort contributing to diagnostic workup. Digital whole slide imaging, coupled with emerging image analysis algorithms, offers great promise as an adjunctive tool for the surgical pathologist in areas of screening, quality assurance, consistency, and quantitation. We have recently reported such an algorithm, SIVQ (Spatially Invariant Vector Quantization), which avails itself of the geometric advantages of ring vectors for pattern matching, and have proposed a number of potential applications. One key test, however, remains the need for demonstration and optimization of SIVQ for discrimination between foreground (neoplasm: malignant epithelium) and background (normal parenchyma, stroma, vessels, inflammatory cells). Especially important is the determination of the relative contribution of each key SIVQ matching parameter to the algorithm's overall detection performance. Herein, by combinatorial testing of SIVQ ring size, sub-ring number, and inter-ring wobble parameters in the setting of a morphologically complex bladder cancer use case, urothelial carcinoma, we ascertain the relative contributions of each of these parameters towards overall detection optimization, providing an exemplar by which this algorithm and future histology-oriented pattern matching tools may be validated and, subsequently, implemented broadly in other appropriate microscopic classification settings.
Introduction
For personalized medicine, clinical and demographic data are being integrated into nomograms for prognostic use, while molecular biomarkers are being developed to add independent diagnostic, prognostic, or management value [1]. In the setting of diagnostic surgical pathology of several diseases, morphometric quantitation of key histologic characteristics is performed manually or semi-quantitatively on H&E sections, though such efforts remain plagued by significant inter- and often intra-observer variability [2][3][4][5]. To address these concerns, several adjuncts, availing themselves of advancing digital slide technologies, have emerged to provide greater standardization and precision, including implementations reaching FDA approval for clinical use (such as the Ventana VIAS Image Analysis System™, Aperio's Digital IHC solution™, Bioimagene's PATHIAM™ system with iSCAN™, and Bioview's Duet™). However, with the increasing acceptance of digital slide workflow, and improvement in image quality and scanning speed, the application of H&E image analysis algorithms may offer important opportunities for screening, quantitation, and standardization of practice, while spurring the generation and testing of novel hypotheses regarding relationships between histologic features and disease classification and risk stratification.
Image processing techniques, such as pattern recognition algorithms, offer the potential to rigorously and consistently identify and quantify morphologic and architectural features unique to a diagnosis [6][7][8][9][10]. However, the technical demands of image sampling across the x and y translational degrees of freedom, 360-degree rotation, and mirror symmetry, in conjunction with the morphologic heterogeneity of tissue (due to biological heterogeneity and tissue processing artifacts), yield astronomical combinations of image features, prohibiting computational analysis efficient enough to have real-time utility in a complex diagnostic setting.
One example of a disease process that is characterized by a constellation of highly variable morphologic features is urothelial carcinoma (UC), the most common type of urinary bladder cancer in Western populations. With numerous described histologic variants characterized by important prognostic and therapeutic implications [11], UC constitutes an epitome of morphologic heterogeneity. The micropapillary variant of UC (MPUC) [12], in particular, demonstrates a wide spectrum of architectural and cytologic features presenting a complex diagnostic problem for expert observers [5], while its natural history of early and extensive invasion often predicates aggressive, early surgical management [13].
We have previously shown that the use of ring vectors can address many of these limitations through the intrinsic incorporation of x and y translational, rotational, and mirror symmetry within the vector itself (together with an inter-ring "wobble" feature), as implemented in our high-throughput pattern recognition image analysis algorithm, Spatially Invariant Vector Quantization (SIVQ) [14]. Briefly, SIVQ is a pattern recognition algorithm that can identify morphologic and architectural features on H&E and special stained digital slides [14,15]. Initially, one (or a small number of) distinct predicate image candidate feature(s) is identified by the user. This feature in turn is utilized to perform an exhaustive search of the entire surface area of the digital slide, resulting in the generation of a statistical probability heatmap of the quality of matches.
SIVQ is unique in that it uses a set of rings instead of a block. A ring is the only geometric structure in two-dimensional space, besides a point, that has continuous symmetry. Thus, using a series of concentric rings, one can convert this two-dimensional orientation problem into a linear pattern matching problem over rotational samplings: each ring matches a series of points on a circle, rotates through the complete 360 degrees, and then moves to the next x, y coordinate. Video demonstrations of multiple uses of the SIVQ algorithm can be viewed on our institution's YouTube channel [16]. Most recently, we have added tools to automatically generate the optimal vector and integrated it into MATLAB™ [17], as well as demonstrated its potential as a tool for user-directed automated tissue microdissection for molecular studies [15].
However, as we have reported before [14], the SIVQ algorithm requires the input of multiple key parameters, including ring vector selection, ring diameter, number of sub-rings, and inter-ring wobble. Briefly, a predicate ring vector is created by adjusting the size and number of sub-rings and selecting the feature of interest, capturing its pixel values (described further below). This series of concentric rings is overlaid onto an x, y pixel coordinate, as shown in Fig. 1. The example contains 10 rings, where the number and spacing of the dots along each ring correspond to sampling at twice the Nyquist frequency, ensuring complete sampling. The pixel values surrounding each dot in the ring are assessed via bicubic interpolation and compared to the predicate ring vector's pixel values. The rings are then systematically rotated, and the pixel values are calculated and compared at each point, including the mirror symmetry obtained by flipping the ring.
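The following is a schematic sketch of that matching step, not the authors' implementation: it samples one ring of a grayscale image with cubic-spline interpolation (standing in for the bicubic sampling described) and scores a candidate location by the best match over all rotations and the mirrored ring. The function names and the error metric are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def sample_ring(image, cx, cy, radius, n_points):
    """Sample a grayscale image at n_points equally spaced around a ring
    centered on (cx, cy), using cubic-spline interpolation."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    ys = cy + radius * np.sin(theta)
    xs = cx + radius * np.cos(theta)
    return map_coordinates(image, [ys, xs], order=3, mode="nearest")

def ring_match_score(image, cx, cy, predicate, radius):
    """Smallest mean absolute difference between the predicate ring and the
    candidate ring over every rotation and the mirror-flipped ring; lower
    scores indicate better matches."""
    candidate = sample_ring(image, cx, cy, radius, len(predicate))
    best = np.inf
    for probe in (candidate, candidate[::-1]):     # original and mirrored ring
        for shift in range(len(probe)):            # step through 360 degrees
            best = min(best, float(np.mean(np.abs(np.roll(probe, shift) - predicate))))
    return best
```

Repeating this score over every x, y coordinate (and over each sub-ring, with its allowed wobble offset) would produce the per-pixel match heatmap described above.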
Anecdotally, we have observed that the pattern recognition performance of the algorithm, in multiple "proof of principle" use cases, is highly variable as a function of ring size, sub-ring number and inter-ring wobble (unpublished observations). Importantly, in no use case have we systematically evaluated the relative contribution of each of these parameters to the performance of SIVQ, leaving open the question of how to implement and optimize its performance for future application.
Thus, proceeding from a diagnostically challenging and clinically critical [5] use case of MPUC, herein we ascertain the relative contributions of SIVQ algorithm parameters to the detection of this complex and heterogeneous cancer morphology. Not only do these findings support the argument that SIVQ has the potential to provide a useful adjunctive tool to the surgical pathologist by demonstrating excellent performance when optimized for the detection of MPUC, but more importantly, the use case provides an exemplar for the optimization of SIVQ ring parameters that is broadly applicable to future implementations.

Fig. 1. For demonstration purposes, a representative example ring vector, sized 10 pixels in diameter and with 9 sub-rings, was superimposed on a pixel-level image. This image shows the relationship between the number of points on the ring that are sampled for analysis, as well as the orientation, sizes, and maximal number of sub-rings. For adequate sampling, Nyquist frequency theory dictates that sampling should occur at twice the maximum frequency, thus requiring sampling at every half pixel. For clarity, in this image the number of sampling points was determined by the size of the outer ring and held constant for the inner rings, though in general SIVQ recalculates the number of sampling points according to Nyquist frequency theory at each ring size.
Digital slides
A de-identified hematoxylin and eosin (H&E) stained section of a transurethral resection specimen of MPUC was scanned into a digital format using the Aperio XT TM whole slide imaging scanner at 40×. The digital slide has been made publicly available on our WSI repository website (www.WSIrepository.org).
Using Microsoft Paint, the surface area of the tumor was painted in consensus by pathologists (SCS, JH, SAT) and a genitourinary (GU) pathology expert (LPK) to generate a ground "truth" map (Fig. 2B).
SIVQ
The use of SIVQ image analysis has been previously described elsewhere by Hipp & Cheng et al. [14,15]. An additional feature, referred to as "wobble," was added to the algorithm that allows the rings to rotate independently of each other. The degree to which an inner ring may rotate relative to the best match of the ring directly outside it is defined by the user and ranges from 1-180°.

Fig. 2. A 4× field of MPUC and the "ground truth" map used in the analyses. A) A representative field from a digital H&E-stained slide of micropapillary urothelial carcinoma (MPUC) was chosen for analysis by SIVQ. Small, irregular, and infiltrative nests of malignant urothelial cells show areas of identifiable "inverse polarization" of peripherally oriented nuclei and prominent retraction artifact. B) The field of MPUC shown in A. was painted by the authors for use as the "ground truth" map for ROC and AUC analyses.
To improve the speed of the algorithm, alternating rings up to the total number of sub-rings were used for these analyses. For example, if a ring with a diameter of 11 pixels had 10 sub-rings, then ring numbers 11, 9, 7, 5, 3, and 1 were analyzed.
Statistical comparisons of SIVQ-ROC and AUC
The utility of the "wobble" parameter was tested directly across various numbers of sub-rings (0, 2, 6, 12) and various angles of "wobble" (1-180 degrees) by analysis of the area under the curve (AUC). Briefly, all areas of malignancy in both fields of interest were identified and painted in consensus by pathologists (SCS, JH, SAT) in consultation with a genitourinary pathology expert (LPK). This image was used as gold standard for calculation of the AUC by comparing distributions of SIVQ scores for pixels within and outside of these curves.
The same vector was selected for each set of experiments, with the various parameters being systematically held as constants or varied. The output of the SIVQ analysis resulted in numeric values at each pixel location, which were then converted to grayscale values of 0-255. The area corresponding to these pixel locations was dilated to the size of the ring vector that was used for the analysis. These values were compared to a ground truth map to calculate a ROC curve and its AUC.
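A minimal sketch of that last step, assuming the per-pixel SIVQ scores have been rescaled so that larger values indicate stronger matches (invert them first if the convention is reversed), and that the painted ground-truth mask is a binary array of the same shape:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def sivq_roc_auc(score_map, truth_mask):
    """ROC curve and AUC of per-pixel SIVQ match scores (0-255 grayscale)
    against the painted ground-truth tumor mask (1 = tumor, 0 = background)."""
    scores = score_map.ravel().astype(float)
    labels = truth_mask.ravel().astype(int)
    fpr, tpr, thresholds = roc_curve(labels, scores)
    return roc_auc_score(labels, scores), fpr, tpr
```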
Histopathologic interpretation
SIVQ allows image analysis at multiple length scales [14]. As screening histologic sections for UC, including its variant MPUC, is undertaken at low power [5], we chose a length scale of 4× magnification at which to select vectors. Figure 2A shows the representative field of a digital slide of MPUC, stained with standard H&E, which we selected for testing of SIVQ. Small, irregular, and infiltrative nests of malignant urothelial cells show areas of identifiable "inverse polarization" of peripherally oriented nuclei and prominent retraction artifact (Fig. 2A). However, the morphologic heterogeneity of this tumor is demonstrated by areas of multiple nests within the same lacuna, and by epithelial ring forms, as reported recently. Figure 2B shows the "ground truth" map of UC, painted in consensus by the authors.
Effect of the number sub-rings (0 vs 2 vs 6 vs 12)
The total number of sub-rings can vary from 0 (minimum) to the ring diameter minus 1 (maximum, in pixels). Conceptually, increasing the number of sub-rings increases the specificity while decreasing the sensitivity, because more rings require more pixel values within the feature to be matched. At 4× magnification, using a standard ring vector size of 13 pixels in diameter, SIVQ analysis was performed with the following numbers of sub-rings: 0, 2, 6, and 12 (while skipping every other ring). The SIVQ results were then compared to the ground "truth" map and ROC curves were generated (Fig. 3A). Figure 3A shows incremental improvement in performance as one moves from the lowest to the greatest number of sub-rings. The AUC was also calculated, demonstrating maximal performance with 12 sub-rings (0.85, Fig. 3B).
Large versus small ring vectors
SIVQ works optimally when an archetypal image feature, specific to and pervasively characteristic of the disease process of interest, is identified as its exemplar for pattern recognition. For example, a small ring vector captures a cellular feature particular to an individual cell. However, when a small ring vector (3-5 pixels in diameter) is selected for an individual UC cell, the inflammatory infiltrate and stromal cells are also identified due to their similar nuclear/cytoplasmic textures (Supplementary Figure 1). In contrast, selection of a larger vector (11-13 pixels in diameter) identified from within one of the micropapillary nests captures the circular distribution of the tumor nuclei and adjacent cytoplasm (Supplementary Figure 1). Use of this larger vector identified fewer background cells (inflammatory and stromal cells) with many more hits specific to nests.
To characterize the effect of ring size and identify the optimal vector, a point within the center of a nest was selected, and every odd-numbered ring vector from 3-25 pixels in diameter was collected (while holding the number of sub-rings constant at its maximal value) and used to search the field of view (Fig. 2A). The results are displayed in Fig. 4A and B. Proceeding from 3- to 11-pixel ring diameters, we identified an improvement in the ROC curves up to a size of 11 pixels in diameter; beyond this, performance degraded up to 25-pixel diameters. The AUC confirmed that the best performing vector size was 11, with a value of 0.84, and the corresponding heat map is shown in Fig. 4B.
Effect of the "wobble" angle
We hypothesized that increasing the angle of "wobble" between the rings would identify more micropapillary nests by increasing the number of potential matches to the heterogeneous cellular/nuclear morphologies and distributions. We analyzed every other ring size from 3 to 23 pixels in diameter; however, we added an extra dimension of "wobble": no wobble (1 degree), 60 degrees, 120 degrees, and maximal wobble (180 degrees). Three-dimensional ROCs are shown in Fig. 5. The wobble angle produced a minor improvement in the ROC curves for the large rings between 0 and 120 degrees of "wobble".
Based on the data above, we further investigated the effect of "wobble" at the lower degrees of freedom. Using a ring vector of 11 pixels in diameter with 10 subrings we analyzed the effect of the "wobble" angles of 1, 3, 6, 12, 22, 45, 60, 120, and 180 degrees. Increasing the angle from 1 to 120 degrees improved the AUC by approximately 0.006 (Fig. 6).
Effect of a benign mimic of urothelial carcinoma on the ROC and AUC
To further assess the ability of ring vectors to separate MPUC from background, we employed the same ring vectors from Fig. 4 on an additional field of view, which contained reactive urothelium (a benign mimic of in situ UC), a slightly different stromal texture, and dilated blood vessels, and which served as a negative control (Fig. 7). This field of view, which contained no neoplasm, was designated as ground truth "negative," and the results were combined with the results from the ground "truth" of Fig. 2. The ROC curves that contain both fields of view are shown in Fig. 8A. There was also an improvement in the maximal AUC at ring vector size 11, to 0.88.
Discussion
To identify a particular morphologic or architectural feature for SIVQ, there are critical ring vector parameters one has to optimize, including the ring diameter, the number of sub-rings (which rotate along with their outer ring), and the inter-ring wobble. Thus, proceeding from a diagnostically challenging and clinically critical [5] use case of MPUC, herein we ascertain the relative contributions of each of these parameters towards overall detection optimization. We believe these findings provide an exemplar by which this algorithm and future histology-oriented pattern matching tools may be optimized, validated, and, it is our hope, subsequently implemented broadly as adjunctive tools in other appropriate microscopic classification settings.

Fig. 8. Using the same ring vectors as in Fig. 4, an additional field of view (Fig. 7) was added to the analysis and new ROC curves and AUCs were calculated. There was a very similar pattern, with an improvement in performance from 3-11 followed by a degradation from 11-25 (left panel). The right panel shows an improvement in the maximal AUC, 0.88.
MPUC presents a complex diagnostic problem for expert observers [5], and its natural history of early invasion of the muscularis propria of the bladder with frequent lymphovascular invasion and metastatic nodal colonization often predicates aggressive, early surgical management [12,13]. Thus, the creation of computer aided detection tools to assist in its identification would be of material benefit to patients and exceed any additional time or costs associated with a digital workflow.
To summarize the findings reported above, ring vectors function like a "safe combination," each one rotating autonomously until the correct "match". This results in highly specific, homogeneous feature identification. However, to relax this behavior and create an increased degree of freedom, the wobble of the rings can be adjusted to allow for a defined degree of independent rotation. Adjusting the diameter of the rings to capture the optimal image predicate, accounting for the size and variability of the feature of interest (such as cellular or nuclear size), is of critical importance. Once the optimal ring size is selected, increasing the number of sub-rings greatly improves the specificity and sensitivity. While in this use case adjusting the "wobble" angle had little impact on the results while making the analysis more computationally expensive, we do note a small improvement in performance and cannot exclude that in other use cases wobble might have a greater effect.
Since this is the first image analysis report regarding MPUC, and since computer-aided detection of MPUC might have a significant impact on patient care, we have made all of our primary data reported here publicly available on our WSI repository site (www.WSIrepository.org). There, others will have access to review the digital slide, the TIFF images used for the analysis, and the ground truth maps. It is tempting to speculate that a computer-aided detection solution for MPUC is an achievable goal, and we expect that it would entail a multi-algorithm approach. Such efforts are ongoing in our group, and we hope other investigators can build upon this foundation.
In conclusion, not only do the findings reported here extend the case that SIVQ has the potential to provide a useful adjunctive tool for the surgical pathologist by demonstrating excellent performance when optimized for detection of MPUC, but more importantly, this challenging use case provides an exemplar for optimization of SIVQ ring parameters in a manner that is broadly applicable to other malignancies.
Emerging priorities and concerns in the wake of the COVID-19 pandemic: qualitative and quantitative findings from a United States national survey
Purpose The present study examines how the coronavirus disease 2019 (COVID-19) experience affected values and priorities. Methods This cross-sectional study collected data between January and April 2023, from 1,197 individuals who are chronically ill or part of a general population sample. Using open-ended prompts and closed-ended questions, we investigated individuals’ perceptions about COVID-19-induced changes in what quality of life means to them, what and who are important, life focus, and changes in norms and stressors. Data analyses included content and psychometric analysis, leading to latent profile analysis (LPA) to characterize distinct groups, and analysis of variance and chi-squared to compare profile groups’ demographic characteristics. Results About 75% of the study sample noted changes in values and/or priorities, particularly in the greater prominence of family and friends. LPA yielded a four-profile model that fit the data well. Profile 1 (Index group; 64% of the sample) had relatively average scores on all indicators. Profile 2 (COVID-Specific Health & Resignation to Isolation Attributable to COVID-19; 5%) represented COVID-19-specific preventive health behaviors along with noting the requisite isolation and disengagement entailed in the social distancing necessary for COVID-19 prevention. Profile 3 (High Stress, Low Trust; 25%) represented high multi-domain stress, with the most elevated scores both on focusing on being true to themselves and perceiving people to be increasingly uncivil. Profile 4 (Active in the World, Low Trust; 6%) was focused on returning to work and finding greater meaning in their activities. These groups differed on race, marital status, difficulty paying bills, employment status, number of times they reported having had COVID-19, number of COVID-19 boosters received, whether they had Long COVID, age, BMI, and number of comorbidities. Conclusion Three years after the beginning of the worldwide COVID-19 pandemic, its subjective impact is notable on most study participants’ conceptualization of quality of life, priorities, perspectives on social norms, and perceived stressors. The four profile groups reflected distinct ways of dealing with the long-term effects of COVID-19.
Introduction
It is generally acknowledged that societal stressors impact the immediate health-related quality of life (QOL) and well-being of affected individuals, and recent research has documented impacts on longer-term outcomes. For example, the extreme economic challenges of the Great Depression led to immediate health effects due to food and housing insecurity, and extreme mental health problems including anxiety, depression, and even suicidality (1,2). Longer-term impacts reflected life-long Depression-induced values and priorities, such as hoarding (3), distrust of financial institutions (4), and extreme frugality (5). Accelerated age-related physiological damage among surviving cohort members has also been reported (6), as well as faster epigenetic aging among children who were in utero during the Great Depression (7). Other examples of societal stressors that had large, short- and long-term effects on QOL and broader psychosocial outcomes include natural disasters, such as Hurricane Katrina (8) or the California wildfires (9), human-induced suffering, such as the Holocaust (10), 9/11 (11), and mass shootings (12), and infectious-disease pandemics, such as the 1918 Spanish flu (13), the avian flu (14), and human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS) (15).

The coronavirus disease 2019 (COVID-19) pandemic has some similarities to these earlier crises but is distinct in several important ways. Similar to the 1918 Spanish flu, COVID-19 was strikingly global, fast-moving, and lethal (16), although the mortality attributed to the Spanish flu was 50-100 million deaths globally and 675,000 in the United States over 10 months (0.64% of the total population), whereas the mortality attributed to COVID-19 is currently estimated to be 15-16 million people worldwide in the two years of 2020 and 2021 [0.02% of the total population (17-21)]. However, whereas the Spanish flu virus disappeared before an effective vaccine could be developed (22), the coronavirus continued its devastation for a full year until effective vaccines became available (23,24), and these vaccines continued to evolve as the virus mutated. By early 2023, many people around the world were vaccinated and the death toll was greatly reduced, to about 30% of the 2022 rate in Western countries (25)(26)(27). Thus, vaccine availability differentiated COVID-19 from this earlier pandemic and transformed the viral illness from primarily deadly to a potentially disabling but less frequently fatal condition.

In the United States, a rapidly changing understanding of the coronavirus, as well as misinformation transmitted by political (28,29) and public-health leaders (30) and the media (31,32), had great costs (33). Instead of a clear message about best practices for self-protection and risk factors for transmission, such information became politicized (34). Social distancing was instituted, a measure that had benefits in reducing exposure but also costs, such as adverse effects on mental health, glycemic control in diabetes, and other health problems (35,36). Social distancing became an emblem of what political message an individual believed, rather than a clear directive to all in the interest of the greater good (28,32,37). Social upheaval, social divisiveness, and mistrust were part of the experience of the pandemic, fueled by social media (38).

The initial trauma of COVID-19 was documented in global research done on the deeply distressing and disturbing experience early in the pandemic (39)(40)(41). This research complemented the growing and substantial evidence base related to the clinical science of viral transmission and containment (42)(43)(44)(45). It has documented beyond doubt the lasting imprint of sociodemographic and racial inequities on the experience of COVID-19, exacerbating the physical, psychological and social impacts of COVID-19 among people of color (46)(47)(48).

A small subset of this early research also noted important changes and changeability in values and priorities. For example, researchers in Australia documented that "conservation values" that emphasized order and stability became more important early in the pandemic, but these same values became less important by late 2020 (49). While COVID-19-driven "cocooning" led to reduced reported enjoyment and increased loneliness among older adults in Ireland, their priorities shifted to concern about "protecting the development of children" and "enjoying life as much as possible" (50). In the United Kingdom, taking responsibility and being concerned about security were core values among those compliant with COVID-19 behavioral guidelines, and perceiving that others shared these values elicited a sense of connectedness to them (51). Similarly, in Poland, people reported increased valuing of self-direction, security, conformity, humility, and caring, and a reduced emphasis on hedonism early in the lockdown period of the pandemic (52). In the state of Vermont, United States, people reported a growing connection with nature and its value in helping them cope, inspiring them, and providing access to enjoyable activities despite COVID-19-imposed limitations (53).

In addition to early changes in values and priorities, researchers noted changes in social norms early in the pandemic. In Spain, individuals most closely connected with a community were more willing to sacrifice for others, and this altruism was motivated more by social norms than by a perceived threat (54). Social norms were also found to drive preventive health behaviors in Germany and Japan (55) and in a global study of 115 countries (56).

There is, however, a paucity of research on changes in values, priorities, and social norms at this later stage of the pandemic, when COVID-19 is entrenched in our collective reality. COVID-19 is no longer treated as a public health crisis by leaders. The current public health protocol is routine booster vaccines and vigilance to the onset of symptoms. For many, when infected with COVID-19, it resembles a normal flu, not a life-threatening event. For others, however, an initial COVID-19 infection may lead to Long COVID (57), a multisystemic condition impacting multiple organ systems. At this later stage, the memory of social-distancing and preventive health behaviors is still relatively fresh, but life has largely resumed to pre-COVID-19 normal for many people. However, recent U.S. polling data from November 2023 suggest that about 50% of adults are taking at least one or more of the following precautions: avoiding large gatherings, avoiding travel, avoiding dining indoors at restaurants, wearing masks in crowds, and testing for COVID-19 before visiting with family and friends (58). Notably, only 39% of White participants reported taking precautions compared to 72% of Black and 68% of Hispanic participants. It is also important to note that this push to "return to normal" has in many respects further marginalized disabled and immunocompromised people (59).
In sum, the experiences of the pandemic may have had broad ramifications, for example in people's trust in science and public health, social activity and willingness to affiliate, and sense of equity. The present study thus aimed to understand how the COVID-19 experience led to perceived changes in values, priorities, social norms, and stressors. Using data from both open-ended prompts and closed-ended questions, it utilized thematic analysis, data-reduction tools, and mixed-methods analyses to investigate individuals' perceptions about COVID-19-induced changes in what quality of life means to them, what and who are important in their lives, their life focus, and changes in norms and stressors.
Methods

Sample and design
This study utilized cross-sectional data from the fourth and final data collection of a quasi-experimental, longitudinal study of the psychosocial impact of the COVID-19 pandemic. The data were collected via an online structured questionnaire between January 19 and April 12, 2023. Study participants were recruited via Rare Patient Voice (www.rarepatientvoice.com) and Ipsos Insight (www.ipsos.com) to yield a general-population sample of United States adults who were heterogeneous in terms of health and nationally representative in terms of age distribution, gender, region, and income. Both Rare Patient Voice and Ipsos Insight are for-profit, panel-research organizations that facilitate study recruitment by emailing their panel members, confirming compliance with study eligibility criteria, and providing links to the baseline survey outreach done by DeltaQuest Foundation, a not-for-profit medical research organization. Participants were not paid monetarily for their participation.
Criteria for eligibility were age 18 or older, able to complete an online questionnaire, and able to provide informed consent. The survey was administered through the secure Alchemer engine, which is compliant with the United States Health Insurance Portability and Accountability Act. Alchemer is a for-profit organization. The protocol was reviewed and approved by the WCG Independent Review Board (#2021164), and all participants provided informed consent prior to beginning the survey.
Measures
Perceived changes in values and priorities were assessed using four open-ended questions regarding participants' appraisal of changes in perspective, seven closed-ended questions on changes in priorities, seven on perceived changes in social norms, and 22 life-stress items adapted from the Urban Life Stressors Scale (60,61). The open-ended questions asked: "When you think back on your experiences with the COVID pandemic, how have you changed how you think about… (1) the meaning of quality of life; (2) what is important in your life; (3) who is important in your life; and (4) what you want to focus on or spend your life energy on?" The closed-ended questions on changes in priorities over the COVID-19 pandemic queried job (3 items), relationships (2 items), and having alone (1 item) or free (1 item) time. The social-norms questions queried confidence in public health strategies for preventing the spread of COVID-19 (e.g., mask-wearing, vaccination), incivility (e.g., impoliteness, anger), and leadership/media propriety (e.g., truth-telling, protecting the best interests of the general public and vulnerable populations). One additional item, "the COVID-19 pandemic," was added to the usual items from the Urban Life Stressors Scale for the current study. The Supplementary Text provides the full text of these open- and closed-ended questions.
Demographic characteristics
Demographic characteristics included age, gender, with whom they live, cohabitation/marital status, race, ethnicity, education, region, height and weight [to compute body mass index (BMI)], reported difficulty paying bills, employment status, smoking status, years since chronic illness/comorbidity diagnosis if applicable, number of comorbidities, whether/how many times the individual had COVID-19, COVID-19 vaccination history, whether they believed they had Long COVID, and whether they received assistance completing the survey.
Data analysis
Descriptive statistics were used to describe the study sample characteristics.
Qualitative analysis
The open-ended data were coded into themes by six trained raters (CS, AW, TS, BC, EA, and RS), according to an existing framework from two decades of appraisal research (62). This existing framework provided a standardized protocol and comprehensive codebook originally derived using both deductive and inductive approaches in an extensive sorting procedure (63). An initial review of the open-text responses to the four prompts led us to utilize themes from past work on QOL Meaning for the QOL Meaning prompt, and on Goal Delineation themes for the Who, What, and Focus prompts. From this starting point, themes were iteratively refined based on emergent themes in the data, yielding a set of 22 themes used for the QOL Meaning prompt, and 55 themes for the Who, What, and Focus prompts. Themes in the current data were coded as "1" or "0" depending on whether they were reflected or not, respectively, in the individual's written text. For each prompt, a theme of "No Direct Answer" was used if the respondent did not provide an answer (i.e., left blank) or answered a different question than the one that was asked. For example, in response to the question "…how have you changed how you think about the meaning of QOL?," a non-blank No Direct Answer was "My quality of life is ok" or "Yes." Each text entry could be coded for as many themes as were reflected in the set for the corresponding prompt; therefore, one entry could elicit one theme or more than one depending on its wording. For example, in response to the What's Important prompt, one individual had written "I have tried to slow down more and not focus on work so much. I also realized how much I enjoyed my time outdoors and take more intentional walks," which was coded as reflecting Creating Moments & Memories, Epiphanic Clarity, Health & Wellness, Prioritization, and Work & Unemployment. In contrast, another individual's What's Important response was "Cannot take things for granted," which was coded with the single theme of Epiphanic Clarity. Responses stating that there was no change in a particular perspective were coded as "No Change." Training took place in four 1.5-hour sessions to understand the protocol and to utilize fully, and expand as needed, the codebook. Raters coded an initial set of 10 participants' data (from all four prompts), followed by a discussion of differences across raters. Incorporating exchanged feedback, they then coded the next 10 participants' data (again all prompts) for three more rounds, at which point comparison and discussion revealed almost no differences across raters. Raters then coded data from 41 responses (all four prompts), from which inter-rater reliability per prompt was computed in two ways on the 246 test responses (6 raters × 41 participant entries).
Inter-rater reliability
Fleiss's kappa (64) assessed the degree of agreement over and above what would be expected by chance. This variant on the more familiar Cohen's kappa (65) is used in cases of more than two raters. While there are no generally accepted guidelines for a desirable level of either form of kappa, some healthcare researchers have proposed values from 0.41-0.60 as "moderate," 0.61-0.80 as "good," and 0.81-1.00 as "very good" (66, 67). Once the reliability analysis suggested sufficient consistency across raters to proceed, the remaining entries were randomly divided among raters to complete the coding of the open-text data.
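For readers who want to reproduce this kind of reliability check, a minimal sketch in Python is shown below. The `codes` matrix is a hypothetical placeholder (randomly generated here) standing in for the actual rater-by-response codes for a single theme; the Fleiss's kappa implementation comes from statsmodels.

```python
# Minimal sketch of a Fleiss's kappa computation for one coded theme.
# Assumes binary presence/absence codes from 6 raters on 41 responses;
# the data here are random placeholders, not the study's actual codes.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
codes = rng.integers(0, 2, size=(41, 6))  # rows: responses, cols: raters

# aggregate_raters converts rater-level codes into per-response category
# counts, which is the table format fleiss_kappa expects.
table, _ = aggregate_raters(codes)
kappa = fleiss_kappa(table, method="fleiss")

# Interpret against the benchmarks cited in the text.
if kappa > 0.80:
    label = "very good"
elif kappa > 0.60:
    label = "good"
elif kappa > 0.40:
    label = "moderate"
else:
    label = "below moderate"
print(f"Fleiss's kappa = {kappa:.3f} ({label})")
```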
Selection bias
To address possible selection biases associated with remaining in the longitudinal sample versus being lost to follow-up from the study baseline, chi-squared tests of independence or analysis of variance (ANOVA) tests were computed comparing the retention and attrition samples (i.e., the current study sample at this final follow-up versus the sample who participated in the study at baseline but not in this fourth and final follow-up) on their baseline demographic characteristics. Cohen's criteria (68) for small, medium, and large effect sizes facilitated interpretation of results.
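As an illustration only, one such retention-versus-attrition comparison might look like the sketch below, which reports Cramer's V as the effect size (conventionally around 0.1, 0.3, and 0.5 for small, medium, and large effects when the smaller table dimension is 2). The counts are made up; they are not the study's data.

```python
# Sketch of one retention-vs-attrition comparison on a categorical baseline
# characteristic, reporting Cramer's V as the effect size.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: retained vs lost to follow-up; columns: difficulty paying bills
# (no / yes), used here as a hypothetical example characteristic.
table = np.array([[900,  297],
                  [2300, 1260]])

chi2, p, dof, _ = chi2_contingency(table)
n = table.sum()
cramers_v = np.sqrt(chi2 / (n * (min(table.shape) - 1)))
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.4f}, Cramer's V = {cramers_v:.3f}")
```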
Data reduction
To reduce the number of variables used in subsequent inferential analyses, exploratory principal axis factor analysis with varimax rotation was used on the closed-ended questions, separately for Changes in Priorities, Social Norms, and Life Stress. Extracted factors had to have eigenvalues greater than 1.0, and a cut-point of 0.50 was used for including an item in a factor score, corresponding to medium loadings (69). Internal consistency reliability on the extracted factors was computed using Cronbach's alpha reliability coefficient. Methodologists generally recommend a minimum internal consistency reliability [i.e., alpha (α) reliability] coefficient between 0.65 and 0.8, and α coefficients below 0.5 are generally considered unacceptable (70, 71). Using only themes with at least 25 participant endorsements within a prompt (i.e., 2% of the sample), principal components analysis (PCA) with varimax rotation was used to reduce the number of variables into composite scores for the coded open-text data, separately by prompt (i.e., QOL Meaning, What's Important, Who's Important, and Focus On). The "No Direct Answer" and "No Change" themes were excluded from the PCA. Extracted components had to have eigenvalues greater than 1.0. These two data-reduction techniques reduced the number of variables from 34 closed-ended items to 9 factors, and 46 distinct themes to 21 composites (30 in total), of which one did not load on any profile and thus was not retained in the variables used in the final Latent Profile Analysis (LPA) solution (29 in total). The 29 factor and composite scores were transformed to be on a T-score metric, with a mean of 50 and standard deviation of 10, for ease of comparability and interpretability.
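The factor analyses were run in SPSS; purely as an illustration, a rough Python analogue of the extraction, reliability, and T-score steps might look like the sketch below. The `items` data frame is simulated with a planted two-factor structure, and the `factor_analyzer` package stands in for SPSS.

```python
# Minimal sketch of the data-reduction steps, assuming the closed-ended
# items live in a DataFrame `items` (simulated here; the real analysis
# used the study's survey items).
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Cronbach's alpha over the items (columns) of df."""
    k = df.shape[1]
    item_var_sum = df.var(axis=0, ddof=1).sum()
    total_var = df.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(1197, 2))            # two latent dimensions
weights = np.array([[1, 1, 1, 0, 0, 0, 0],     # items 1-3 load on factor 1
                    [0, 0, 0, 0, 1, 1, 1]])    # items 5-7 load on factor 2
items = pd.DataFrame(latent @ weights + rng.normal(scale=0.8, size=(1197, 7)),
                     columns=[f"item{i}" for i in range(1, 8)])

# Principal-axis factoring; retain factors with eigenvalue > 1 (Kaiser rule).
fa = FactorAnalyzer(rotation=None, method="principal")
fa.fit(items)
eigenvalues, _ = fa.get_eigenvalues()
n_factors = int((eigenvalues > 1.0).sum())

# Re-fit with varimax rotation; score factors from items loading >= 0.50,
# then rescale each factor score to a T-score metric (mean 50, SD 10).
fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)

for f in range(n_factors):
    members = loadings.index[loadings[f].abs() >= 0.50]
    if len(members) > 1:
        score = items[members].mean(axis=1)
        t_score = 50 + 10 * (score - score.mean()) / score.std()
        print(f"Factor {f}: items={list(members)}, "
              f"alpha={cronbach_alpha(items[members]):.2f}, "
              f"T-score mean={t_score.mean():.1f}")
```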
LPA (72) is a person-centered method, rather than a variable-centered method. Accordingly, LPA was then used to identify subsets of persons with shared characteristics (i.e., response shift effects) using the 29 indicators with sufficient prevalence. We tested models of one through five profiles and selected the best-fitting model based on the lowest Akaike information criterion (AIC) and Bayesian information criterion (BIC) statistics, the highest entropy statistic, and the Lo-Mendell-Rubin adjusted likelihood ratio test (LRT) results. Mplus was used to estimate the most likely profile for each person.
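The LPA itself was fit in Mplus; as a hedged illustration only, an approximate analogue in Python uses a Gaussian mixture with diagonal covariances (the standard LPA assumption of conditionally independent continuous indicators) and compares one- through five-profile solutions on AIC, BIC, and a relative-entropy statistic. The indicator matrix `X` is simulated here.

```python
# Approximate analogue of the LPA model comparison. This is illustrative
# only; X stands in for the 1,197 x 29 matrix of T-scored indicators.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
X = rng.normal(loc=50.0, scale=10.0, size=(1197, 29))  # simulated indicators

def relative_entropy(model, X):
    """LPA-style entropy in [0, 1]; higher means cleaner classification."""
    p = model.predict_proba(X)
    raw = -np.sum(p * np.log(np.clip(p, 1e-12, None)))
    return 1.0 - raw / (X.shape[0] * np.log(model.n_components))

for k in range(1, 6):  # one- through five-profile solutions
    gm = GaussianMixture(n_components=k, covariance_type="diag",
                         n_init=10, random_state=0).fit(X)
    ent = relative_entropy(gm, X) if k > 1 else float("nan")
    print(f"{k} profiles: AIC={gm.aic(X):.0f}  BIC={gm.bic(X):.0f}  "
          f"entropy={ent:.2f}")

# Most likely profile membership under the chosen (here: four-profile) model.
best = GaussianMixture(n_components=4, covariance_type="diag",
                       n_init=10, random_state=0).fit(X)
profiles = best.predict(X)
```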
Inferential analysis
Given the high classification accuracy (i.e., entropy) of the final LPA model, we then used the resulting profiles to examine bivariate relationships between the profiles and the set of demographic variables described above, using chi-squared analyses for categorical variables and univariate ANOVA models for continuous variables. Rather than relying on p-values, which would have been unduly affected by the relatively large sample size of the present study, we focused on effect sizes (ES) to facilitate interpretation, using Cohen's cut-offs for explained variance (eta²) (68). Table values were conditionally formatted to highlight small, medium, and large magnitudes of the eta² estimates (i.e., 0.01, 0.06, and 0.14, respectively), with more saturation reflecting larger ES. Statistical analyses were implemented using IBM SPSS version 29 (73), Mplus version 8.8 (74), and Microsoft Excel.
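To make the effect-size emphasis concrete, the sketch below computes eta² from a one-way ANOVA and benchmarks it against Cohen's cut-offs; the four simulated groups are hypothetical stand-ins for the four latent profiles, not the study data.

```python
# Illustration of eta-squared from a one-way ANOVA, benchmarked against
# Cohen's cut-offs (0.01 / 0.06 / 0.14 for small / medium / large).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(4)
groups = [rng.normal(50 + shift, 10, size=300) for shift in (0, 1, 3, 5)]

# eta^2 = between-group sum of squares / total sum of squares
grand = np.concatenate(groups)
ss_between = sum(len(g) * (g.mean() - grand.mean()) ** 2 for g in groups)
ss_total = ((grand - grand.mean()) ** 2).sum()
eta_sq = ss_between / ss_total

F, p = f_oneway(*groups)
size = ("large" if eta_sq >= 0.14 else "medium" if eta_sq >= 0.06
        else "small" if eta_sq >= 0.01 else "negligible")
print(f"F = {F:.2f}, eta^2 = {eta_sq:.3f} ({size} effect)")
```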
Sample
The study sample included 1,197 individuals. This sample reflects 25.3% of the baseline sample (n = 4,757), 69.1% of the follow-up 1 sample (n = 1,734), and 95.5% of the follow-up 2 sample (n = 1,255). The participation rate of the baseline sample is unknown because the number of people whom the panel research companies invited to participate is unknown. Table 1 provides the sociodemographic characteristics of the overall study sample. Compared to those who were lost to follow-up from the baseline data set of 4,757 individuals, the 1,197 retained study participants were less likely to report difficulty paying bills, were more likely to report having a college or postgraduate degree, and were older (all small effect sizes; see Supplementary Table S1).
Reliability of open-text coding
Supplementary Table S2 provides a full listing of the coding themes for the open-text prompts as well as definitions and examples of each. Inter-rater reliability analyses demonstrated good reliability for the prompts related to QOL Meaning, What's Important, and Who's Important, and moderate reliability for the Focus On prompt (average kappa = 0.652, 0.621, 0.651, and 0.544, respectively; Supplementary Table S3).
Prevalence of coded themes
Supplementary Table S4 provides information about the prevalence of endorsement of each of the coded themes. In order to be included in subsequent analysis, a theme had to be endorsed by at least 25 individuals, representing 2% of the sample. Figure 1 shows the ranked prevalence of QOL Meaning themes for those themes retained for subsequent analysis. Almost one-quarter of the sample endorsed "No Change," and 23% provided no direct answer to how their thinking about QOL changed as a result of the COVID-19 pandemic. Among the remaining individuals who commented on specific aspects of changes in QOL meaning, the most prevalent themes related to family/friends, health, minimizing COVID-19 risk, and gravity (i.e., pandemic-induced renewed appreciation for something). Supplementary Figure S1 provides a full listing of all QOL Meaning themes and their prevalence.
Figure 2 shows a stacked bar chart illustrating the ranked prevalence of the What's Important, Who's Important, and Life-Energy Focus prompts for those themes retained for subsequent analysis. Among the most prevalent themes were epiphanic clarity (i.e., relating to a moment where one suddenly realizes something as important) and health & wellness. Supplementary Figure S2 provides a stacked bar chart showing a full listing of all themes used to code the What's Important, Who's Important, and Life-Energy Focus prompts and their prevalence.
Data reduction
Factor analyses reduced the 34 items to nine factors that explained substantial variance and generally demonstrated acceptable internal consistency reliability (see Supplementary Table S5 for details). Factor analyses on the Perspective Changes items yielded two factors that explained 53% of the variance. "Inner Life & Relationships" included items focusing on having unscheduled time and improving relationships (Cronbach's α = 0.79). "Job" included items reflecting the importance of one's job and the organizational context (Cronbach's α = 0.72).
Social Norms items were summarized by three factors that explained 52% of the variance. "Public Health Confidence & Consideration" included items related to confidence that others would follow public-health standards for preventing the spread of disease (α = 0.75). "Trust in Leaders & Media" included items related to trust that leaders have the best interest of the general public when making COVID-19-policy decisions and that the media provides accurate information about COVID-19 (α = 0.64). "Public Incivility" included items related to public displays of impoliteness, inconsideration, and anger (α = 0.73).
Stress items were summarized by four factors that explained 43% of the variance. "Health-Related QOL Stress" included items related to the usual domains of health-related QOL (i.e., physical health, mental health, social functioning) as well as access to medical care and public services, and COVID-19-pandemic-related stress (α = 0.84). "Systemic Racism/Inequity" included items related to inter-racial relations, interactions with police, and experiences with racism and crime (α = 0.75). "Financial Hardship" included items related to financial, housing, or job/unemployment concerns (α = 0.74). "Family Relationship Stress" included items related to stress raising children and with a marital/romantic relationship (α = 0.58). The latter factor's internal consistency reliability was lower than the usual acceptable norms but higher than what would be considered unacceptable.

PCAs were done only on themes with at least 25 endorsements per prompt (i.e., 2% of the sample) in order to yield a robust solution. Of the 23 QOL Meaning coding themes, 14 were retained for the analysis. Of the 56 coding themes used for the What's Important, Who's Important, and Focus On prompts, 14, 8, and 11 were retained, respectively (see Supplementary Table S6 for details). QOL Meaning was summarized by six composite scores that explained 56% of the variance. "Surviving COVID-19" reflected themes related to minimizing COVID-19 risk, retaining COVID-19 prevention behaviors, and survival. "Post-traumatic Growth" reflected themes related to personal growth, positive attitude, and balance. "Interpersonal Connections" reflected themes related to valuing family/friends and in-person interactions. "Renewed Appreciation" reflected themes related to taking life more seriously, appreciating the small things in life, and experiencing a sense of gratitude. "Health Concerns" reflected themes related to short-term specific problems and health issues. "Circumstances" reflected themes downplaying wellness and emphasizing longer-term situations of concern.

What's Important was summarized by six composite scores that explained 54% of the variance. "COVID-19 Prevention" reflected themes related to COVID-19-specific prevention. "Primacy of Employment" reflected themes related to prioritizing work. "Interpersonal Connections" reflected similar themes to the composite of the same name mentioned above: valuing interpersonal relationships and in-person interactions. "Positive Self-Focus" reflected themes related to a sudden recognition that something is important, valuing mental health/mood state, and becoming truer to oneself via introspection. "Wellness Self-Management" reflected a focus on engaging in activities aimed at improving health and wellness and making family and its long-term continuation a top priority. "Primacy of Health" reflected themes related to a pandemic-induced renewed appreciation for small things and specific health-related concerns.
Who's Important was summarized by four composite scores that explained 59% of the variance. "Primacy of Interpersonal Concerns" reflected themes related to prioritizing interpersonal relationships. "Isolation & Disengagement" reflected themes related to dealing with quarantine restrictions and letting go of people or activities as a result. "Epiphanic Clarity" reflected a new realization of the importance of something or intensified feelings about something always recognized as important, and mental health concerns. "Family Welfare" reflected themes related to the long-term priority of family well-being and a de-emphasis on the seriousness of things.
Focus of Life Energy was summarized by seven composite scores that explained 52% of the variance. "Active in the World" reflected themes related to travel, long-term family well-being, and interpersonal relationships. "True to Self" reflected themes related to prioritizing becoming truer to oneself and introspection. "Hobbies" reflected themes related to engaging in leisure activities and creative pursuits. "Seriousness" reflected similar ideas to the above, that is, a new realization or intensified feelings of the importance of something and taking things more seriously. "Wellness Self-Management" reflected themes related to activities of health maintenance and focusing on health issues.
Latent profiles
Five LPA models were tested (one- through five-profile solutions) using the 29 indicators derived from the factor analyses and principal components analysis. The four-profile model fit the data best, showing the lowest AIC and BIC and the highest entropy. The significant LRT indicated that the four-profile model improved fit over the three-profile model (p = 0.03; Supplementary Table S7). Table 2 shows the factor- and composite-score means on the 29 indicators, conditionally formatted to highlight the magnitude and direction of the scores, with greater magnitude reflected by more saturated color. Red highlighting reflected scores lower than the T-score mean of 50, whereas green highlighting reflected scores higher than the T-score mean of 50. Figure 3 shows the plot of the mean scores for each of the 29 indicators by profile. The link between the indicator number and its content is shown in Table 2.
Individuals in Profile 1-named "Index Group"-included 64% of the sample and had relatively average scores on all indicators although they had scores reflecting a lower endorsement of perceived public incivility, stress related to racism/inequity and financial hardship, the importance of COVID-19 prevention, and the primacy of employment (all small ES).
Those in Profile 2-named "COVID-Specific Health & Resignation to Isolation Attributable to COVID"-included 5% of the sample, and their scores suggested a particular focus on COVID-specific preventive health behaviors along with noting the requisite isolation and disengagement entailed in the social distancing necessary for COVID-19 prevention (large and medium ES, respectively). They had scores reflecting a greater sense of public-health confidence/consideration, and a greater trust in leaders/media. Their changing definition of the meaning of QOL comprised actions taken to survive COVID-19, endeavors to enable post-traumatic growth, and multiple health- and wellness-related concerns (all small ES). Thus, these individuals believe in the public-health messages and are doing the things they need to do to take precautions and protect their health. The consequences of this perspective are that they feel resigned to being isolated and lonely, and they do not see a way out.
Profile 3-named "High Stress, Low Trust"-included 25% of the sample and had high multi-domain stress (three of the four were large ES), high perceived public incivility, perceived change toward inner life/self and relationships, and true to self (medium, small, and small ES, respectively). They had low trust/confidence in others' public-health practices and in the leadership/media, and placed little weight on the primacy of employment and the importance of COVID-19 prevention (all small ES). This group is under stress in every domain of their life but does not trust the public-health messages or put much emphasis on the importance of COVID-19 prevention compared to others. They are the group with the most elevated scores both on focusing on being true to themselves and in perceiving people to be increasingly uncivil.
Profile 4-named "Active in the World, Low Trust"-included 6% of the sample and placed a particular emphasis on employment (large ES) and other activities (small ES), and emphasized multiple aspects of interpersonal relationships, personal growth, and introspection (medium and small ES). They also endorsed higher levels of perceived public incivility and reported stress in health-related QOL, financial hardship, and family relationships (small ES). Individuals in this group were focusing on trying to get back into the world, especially work. They feel that the pandemic has helped them to put their priorities in focus and are trying to find more meaning in their life activities.

Demographic differences by latent profile

Table 3 shows the sociodemographic characteristics of the four profile groups, and the results of chi-squared or ANOVA analyses comparing groups, for categorical and continuous variables, respectively. The profiles were different on 10 of the 20 variables compared. On the categorical variables, the profile groups differed on race, marital status, difficulty paying bills, employment status, number of times they reported having had COVID-19, number of COVID-19 boosters received, and whether they had Long COVID (all small ES). On the continuous variables, the profile groups differed in age, BMI, and number of comorbidities (medium, small, and small ES, respectively). Figure 4 summarizes these profile differences using a radar plot of ranks for each variable with at least a small ES difference. Higher ranks reflect having a higher proportion with, or scores on, this characteristic. For ease of comparison, the ranks shown in this figure were sorted first by Profile 3 and then by Profile 1, so that pertinent characteristics were grouped together on the radar plot.

Profile 1 (Index Group) reflects a group that is largely retired, with no reported difficulty paying bills, and which had three COVID-19 boosters. Profile 2 (COVID-Specific Health & Resignation to Isolation Attributable to COVID) stands out by dint of having the highest age, never having had COVID-19, having the highest proportion of married participants and of divorced participants, and having had two COVID-19 boosters. In contrast, Profile 3 (High Stress, Low Trust) reflects a group that has the highest proportion of non-White individuals, who reported moderate or worse difficulty paying bills, had a disproportionate number of individuals disabled from work due to a medical condition, had had COVID-19 twice, and reported having had Long COVID. They also reported the highest number of comorbidities and the highest BMI. Profile 4 (Active in the World, Low Trust) stands out by dint of having the largest proportion of employed individuals, who had had one COVID-19 booster and had had COVID-19 once.
Discussion
Three years after the beginning of the worldwide COVID-19 pandemic, its impact is notable on most study participants' conceptualization of QOL, their priorities, perspectives on social norms, and perceived stressors. About 75% of the study sample noted changes in these aspects of life, and overall reported the greater prominence of family and friends in their values. Similar to findings from early in the pandemic, a focus on family welfare (e.g., concern about child development, caring for others) was prominent. But many of the concerns highlighted early in the pandemic were not mentioned in our data, such as prioritizing order, stability, and conformity (49, 52).
Overall, participants also reported the importance of health and its protection and noted that the pandemic made them take things more seriously. It gave participants a renewed appreciation for what had been the "small things," such as spending time in person with those they love, appreciating medical care providers, or valuing "the abundance of ordinary life… such as the ability to have family and friends close, receive health care as needed, and having food and entertainment always available" (exemplary direct quote).
While the present study was not longitudinal, its findings do reflect concepts consistent with response-shift phenomena (75, 76) by dint of the nature of the questions and prompts asked. Similar to many other studies that have utilized cross-sectional qualitative data to learn about ways that people's values, priorities, and concerns do and do not change (77-79), such work may have implications for greater insight into the nature of QOL appraisal and response shift. The present study revealed that in response to the catalyst of the pandemic, people perceived changes in how they thought about QOL and either identified new priorities or had epiphanies about their values. Many perceived a change in how they preferred to spend their time, valuing alone time, solitary pursuits, and unscheduled time. As the world continues to change in response to the changing conditions of COVID-19, one wonders whether these perceived changes in priorities and values will persist.
Given the backdrop of these overall trends, further analysis revealed that the sample could be further characterized as comprising four profiles. In comparison to the Index group, which was the largest group and generally had average scores on the 29 indicators of perceived change, the other three profile groups differed in their reactions to COVID-19 restrictions and ways of coping. Individuals in Profile 2, the smallest group, adhered closely to COVID-19 restrictions and adapted to a more selective social world to maintain social distancing and preserve their health as they saw fit. In contrast, individuals in Profile 3, the second largest group, were less focused on COVID-19-specific preventions in their open-response comments, possibly due to a lack of trust in guidelines recommended by public-health officials, elected leaders, and the media. Of note, they expressed that people were more uncivil than before the pandemic, and their experience of multi-domain stressors was much higher than that of people in the other profile groups. Due to this high stress, there may have been more immediate topics on their minds than COVID-19 prevention at the time of the survey. Instead, individuals in Profile 3 focused on being true to themselves and attending to their inner life. Although individuals in Profile 4 shared with Profile 3 a distrust of public-health officials and elevated reported public incivility and multi-domain stress, they appeared to cope in more adaptive ways. This group prioritized their work, personal growth, and wellness self-management.
The four profile groups differed in demographic characteristics as well, with Profile 3 comprising more individuals of non-White race (11% of the group as compared to 3-7% of the other profile groups), who reported worse financial difficulties, higher BMI, and more comorbidities, and more individuals who were disabled from work due to a medical condition. Of note, they were more likely to have had COVID-19 more than once and to report having Long COVID. Despite Profile 4 individuals sharing with Profile 3 a distrust of public-health officials and a focus on their inner life, they had a lower COVID-19 burden and were more likely than any of the other groups to be employed. Profile 2 individuals reported greater focus on adhering to COVID-19 guidelines and were the oldest in the sample, reported never having had COVID-19, had multiple booster vaccinations, and were more likely to be married or divorced. Individuals in Profile 3 may have had fewer economic resources with which to access public-health services.
Our study findings thus revealed that in the face of the global pandemic, individuals dealt with the accompanying stress and despair in psychologically distinct ways. A small minority of the sample engaged in strong adherence to COVID-19 restrictions, but they also endured consequential isolation and disengagement. Two groups shared a distrust of public-health leaders, but one seemed to suffer the most negative consequences both in terms of multi-domain stress and Long COVID. The other group focused their attention on their work and were more buffered from the negative consequences.
While the present research did not address the efficacy of treatments to improve participants' mental health, recent research has noted evolutions in mental health systems of care in response to the acute concerns during COVID-19 (80). This evolution focused on infection control, continuity of care for mental-health service users, and facilitating remote access to mental-health assessment and care in the context of new-onset or high-risk patients (80). Such interventions were facilitated by time efficiency and flexibility, but often failed to reach specific vulnerable populations and those with low technological literacy (81). Future research might examine whether remote treatments impact individuals' priorities and concerns, possibly using similar methods and measures as used in the present work.
Limitations
The present study has notable advantages in its use of both qualitative and quantitative data. Its substantial sample size also enabled a careful series of data-reduction and multivariate analyses, and the resulting profiles made theoretical sense. The study limitations should be noted, however. First, the attrition from baseline is notable, and both the baseline participation rate and the causes for this attrition remain unknown. While it could be due to the usual reasons hindering survey research (e.g., lack of interest or time), it is also possible that it is due in part to COVID-19-related mortality. The selection-bias analyses implicated only three characteristics in the attrition out of 16 considered, and two of these may reflect social determinants of health (more financial difficulties and lower education). The study sample is also less representative of non-White and/or Hispanic individuals, so the generalizability of study findings to these race/ethnicity groups is limited (80). Its generalizability to other countries, cultures, and healthcare systems is also unknown and may also be limited. The attrition and data may also reflect other biases (82). Social desirability may also play into participant responses to an unknown extent, in that they might have limited their disclosures of perceived changes due to their own theories about what can and cannot be said. Finally, the qualitative data used in this study are based on open-ended responses to an online survey; researchers have no control over the depth of material provided by study participants, and therefore relevant information might have been missed. Future research might build on the current findings by examining how health and well-being outcomes differed by profile group. Such research might also examine the experience of Long COVID, and how social problems such as domestic violence (83) relate to the detected changes in priorities and concerns. Other researchers might build on the current work by utilizing the same set of questions in other, more diverse samples.
Conclusion
In summary, the present study revealed that three-quarters of the sample perceived substantial changes in priorities and/or values due to the COVID-19 pandemic. The four profile groups identified reflected distinct ways of dealing with COVID-19-prevention guidelines: some adapted by adherence and resignation, some by increasingly focusing on inner life, and others by balancing engagement in the world with a focus on inner life. Future research might examine the impact of these different coping approaches on health and well-being outcomes.
FIGURE 1
Prevalence of themes for QOL Meaning prompt. This bar chart shows the QOL Meaning themes, ranked by prevalence of endorsement. Only those themes retained for subsequent analysis (i.e., >2% endorsement) are shown.
FIGURE 2
Prevalence of themes for What's Important, Who's Important, and Life-Energy Focus prompts. This stacked bar chart shows the themes coded for the What's Important, Who's Important, and Life-Energy Focus prompts, ranked by prevalence of endorsement. Only those themes retained for subsequent analysis (i.e., >2% endorsement) are shown.
FIGURE 3
Plot of mean scores on the 29 indicators by profile. This line graph shows the plot of the mean scores for each of the 29 indicators by profile. The indicator content is shown in Table 2.
FIGURE 4
Radar chart of prominent characteristics by profile group. This radar chart summarizes the four profile-group differences in the sociodemographic variables compared. Only variables with at least a small ES difference are shown. Higher ranks reflect having a higher proportion with, or scores on, this characteristic. For ease of comparison, the ranks shown in this figure were sorted first by Profile 3 and then by Profile 1, so that pertinent characteristics were grouped together on the radar plot.
TABLE 2
Factor and composite score means for the four-profile LPA solution. All scores are on a T-score metric, with sample-specific mean of 50 and standard deviation of 10. Conditional formatting indicates the ES based on Cohen's d for a T-score metric, with increased color saturation indicating larger ES; the direction of the difference from a mean score of 50 is shown in pink hues for scores below 50 and in green hues for scores above 50.
TABLE 3
Overall and profile-group demographic characteristics.
TABLE 3 (Continued)
General Educational Development (i.e., high-school equivalency test); SD, standard deviation; Mn, mean. "Prefer not to respond," "do not remember," "do not know," "not applicable," and "other" responses were excluded from the chi-square test. The bolded values of Cramer's V are all small effect sizes.
Body Height Preferences and Actual Dimorphism in Stature between Partners in Two Non-Western Societies (Hadza and Tsimane')
Body height influences human mate preferences and choice. A typical finding in Western societies is that women prefer men who are taller than themselves and, equivalently, men prefer women who are shorter than themselves. However, recent reports in non-Western societies (e.g., the Himba in Namibia) challenge the view on the universality of such preferences. Here we report on male and female height preferences in two non-Western populations—the Hadza (Tanzania) and the Tsimane' (Bolivia)—and the relationships between body height preferences and the height of actual partners. In the Hadza, most individuals preferred a sexual dimorphism in stature (SDS) with the man being much taller than the woman. Preferences for SDS and actual partner SDS were positively and significantly correlated in both men and women, suggesting that people who preferred larger height differences also had larger height differences with their partners. In the Tsimane', the majority of men preferred an SDS with the man being taller than the woman, but women did not show such a preference. Unlike in the Hadza, SDS preference was not significantly correlated to actual partner SDS. We conclude that patterns of height preferences and choices in the Hadza and Tsimane' are different than those observed in Western societies, and discuss possible causes for the observed differences between non-Western and Western societies.
Rather than looking at absolute height, other studies have focused on preferences for height difference between an individual and his/her partner in a romantic relationship (i.e., sexual dimorphism in stature, SDS = male height/female height). Although a positive assortative preference has been reported for height preferences in Western societies (i.e., taller individuals tend to prefer taller partners; Courtiol, Raymond, et al., 2010;Stulp, Buunk, Pollet, Nettle, et al., 2013), it has also been shown that both men and women have a preference for SDS such that a woman is shorter than a man (Fink et al., 2007;Pawlowski, 2003;Shepperd and Strathman, 1989;Stulp, Buunk, Pollet, Nettle, et al., 2013;Stulp, Buunk, Verhulst, and Pollet, 2013). Both men and women prefer moderate SDS with men being slightly taller than women, and a maximum acceptable SDS of about a 17% difference in height (Salska et al., 2008). Moreover, SDS preferences are dependent on height in that taller men and shorter women prefer larger height difference than do shorter men and taller women, possibly because this increases the pool of potential partners (Pawlowski, 2003).
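As a concrete, hypothetical illustration of the SDS index (the heights below are round numbers chosen for clarity, not study data, and $h_{\mathrm{male}}$/$h_{\mathrm{female}}$ are just notation for partner heights):

$$\mathrm{SDS} \;=\; \frac{h_{\mathrm{male}}}{h_{\mathrm{female}}} \;=\; \frac{180\ \mathrm{cm}}{165\ \mathrm{cm}} \;\approx\; 1.09$$

On this scale, the roughly 17% maximum acceptable height difference reported by Salska et al. (2008) corresponds to an SDS of about 1.17.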
Although these findings have been observed and replicated in a variety of Western populations (e.g., Courtiol, Raymond, et al., 2010; Stulp, Buunk, Pollet, Nettle, et al., 2013), giving some validity to the hypothesis that SDS preferences are consistent across Western societies, it is yet unclear if they can be generalized to non-Western societies. Recent studies on SDS preferences in non-Western societies argue against the universality of such an effect. For example, Sorokowski, Sorokowska, Fink, and Mberira (2012) report that in the Himba of northern Namibia, more than 30% of the individuals preferred an SDS where partners were of equal height. Moreover, more than 20% of men chose women taller than themselves. Another study in a non-Western society, the Yali of Papua, concluded that men and women did not rely on SDS when evaluating an opposite-sex individual as a potential mate. Finally, in the Datoga of Tanzania, both men and women preferred "extreme" SDS; i.e., men and women preferred partners either much taller or much shorter than themselves (Sorokowski and Butovskaya, 2012).
Together, these results suggest that SDS preferences in traditional societies are substantially different from those observed in Western societies (see Table 1). However, most of these studies have relied on preferred rather than actual SDS between partners, thus raising the question about the correspondence between these two. In other words, do people's preferences for body height in opposite-sex mating partners correlate with one another? Gillis and Avis (1980) showed that in a Western society, husbands are more likely to be taller than their wives than would be expected by chance (see also Stulp, Buunk, Verhulst, et al., 2013), which is at least consistent with earlier mentioned height preferences. The evidence is, however, much more mixed when examining non-Western samples. In Gambia and among the Hadza, in approximately 10% of all marriages the wife was taller than the husband, which was not significantly different from chance, suggesting that height played less of a role in partner choice in these populations (Sear, 2006;Sear, Allal, and Mace, 2004;Sear and Marlowe, 2009). Thus, as was the case for the SDS preferences, patterns of actual SDS in married opposite-sex partners seen in Western populations are not necessarily observed in non-Western societies and cannot be regarded as universal (e.g., Sear and Marlowe, 2009).
The observed differences with respect to partner height preferences and mating patterns between Western and non-Western populations, and the variation between non-Western populations (e.g., Sear and Marlowe, 2009), raises the question as to why these between-population differences exist. Before such a question can potentially be answered (see discussion for potential reasons), it is necessary to first document height preferences and mating patterns from many different non-Western populations. In the present study, we sought to investigate SDS preferences and patterns of partners' actual SDS in couples in two traditional societies: the Hadza of northern Tanzania and the Tsimane' of Bolivia. In addition to the investigation of possible differences between SDS preferences in these two societies compared to those reported in Western societies, we examined the relationship between SDS preferences and actual SDS between partners in order to determine how much preferred mate characteristics align with actual mate characteristics.
Participants
The Hadza of Tanzania. The Hadza are a hunter-gatherer society living in Tanzania. They number approximately 1,000-1,500 individuals and live in mobile camps, each comprising 30 people on average. This society has been extensively described in the literature (see Marlowe, 2010; Sear and Marlowe, 2009; Butovskaya, 2013). Women typically marry between 17 and 18 years of age, and men marry around the age of 20. Marriages are typically not arranged. Female choice seems to be the main factor leading to marriage because young single men appear willing to marry a wide range of women. Because divorce is common, serial monogamy is the best way to characterize the mating system (Sear and Marlowe, 2009; Butovskaya et al., 2012; Butovskaya, 2013). Although approximately 4% of men have two wives, polygynous marriages in the Hadza are in most cases a transitory state and are not stable. At the time of the study, the participants lived in the Lake Eyasi region. The height of all participants was measured by one of the authors (MB) using a portable anthropometer. Age was self-reported, although some participants did not know exactly how old they were.

Table 1. Overview of published findings on preferred and actual sexual dimorphism in stature (SDS) in small-scale societies.

Population | Subsistence | Mating system | Main findings | Data | Source
Hadza (Tanzania) | Hunter-gatherers | Serial monogamy | Majority of men and women preferred and realized an SDS with the man being taller than the woman; in approximately 10% of all marriages, the wife was taller than the husband. | Preferred + Actual | This study
Hadza (Tanzania) | Hunter-gatherers | Serial monogamy | No evidence for the male-taller norm: in approximately 8% of all marriages, the wife was taller than the husband. | Actual | Sear and Marlowe, 2009
Tsimane' (Bolivia/Amazon) | Farmer-foragers | Monogamy, rare sororal polygyny | Majority of men preferred an SDS with the man being taller than the woman, but women did not show a specific preference for their partner's height. In less than 4% of all marriages, the wife was taller than the husband. | Preferred + Actual | This study
Datoga (Tanzania) | Pastoralists | Polygyny | Majority of men and women preferred extreme SDS (i.e., men and women chose partners much taller or much shorter than themselves). | Preferred | Sorokowski and Butovskaya, 2012
Himba (Namibia) | Semi-nomadic pastoralists | Polygyny | Majority of men and women preferred an SDS with the man being taller than the woman, but a significant percentage of participants (over 30%) preferred a partner of body height similar to their own; many Himba men preferred women taller than themselves. | Preferred | Sorokowski, Sorokowska, Fink, and Mberira, 2012
The Tsimane' from Bolivia. The Tsimane' are a native Amazonian society of farmerforagers. Their population of around 8,000 is distributed throughout approximately 100 villages, most of which are in the area of Beni in northern Bolivia. This tribe has been extensively described in the literature (e.g., Godoy et al., 2005;Huanca, 2008;Ringhofer, 2010).
The participants inhabited the region near the Maniqui River and lived in the villages of Maracas, Puerto Yucumo, Campo Bello, Catumare, and Anachere. Each participant's height was measured using a portable anthropometer. All measurements were taken by the principal investigators (PS, AS). Some of the participants did not take part in all stages of the study (e.g., for some individuals we lack information about the height of the partner, because that partner was not present in the village at the time of the study). Age was self-reported.
The marital customs of the Tsimane' have been described in detail (e.g., Winking, Gurven, and Kaplan, 2011). Similar to other native Amazonian societies, the Tsimane' still practice cross-cousin marriage. Traditionally, marriage is arranged by parents. However, in our sample only 29% of participants declared that their marriage was arranged. This measure should be treated with some caution, as partners did not always agree on this point and respondents were not always entirely sure, given the long time it took them to answer. Regardless, parental influence on marriage in this population is strong (Gurven, Winking, Kaplan, von Rueden, and McAllister, 2009).
Procedure
The data were collected through individual interviews conducted by the authors (MB among Hadza; AS and PS among Tsimane') with the help of a Tsimane' translator who was fluent in Spanish and a Hadza translator fluent in English. We used stimuli described in Sorokowski and Butovskaya (2012), which were slightly modified versions of silhouettes used by Pawlowski (2003). Although the stimuli are not ideal, they were used in previous studies both in Western (Fink et al., 2007;Pawlowski, 2003;Valentova et al., 2014) and non-Western societies (Sorokowski and Butovskaya, 2012;, and they ensure direct comparability with previous work. The participants were shown six pairs of silhouettes, each pair with a different SDS (see Figure 1; participants were shown the version without the letters). For every pair, the size of the image of the man remained unchanged, but the body of the woman from the original pair (SDS = 1.09) was shortened or elongated (without manipulating her width) to obtain SDS ratios ranging from 1.19 (i.e., the man being much taller than the woman) to 0.96 (i.e., the woman being slightly taller than the man). The ratios decreased in increments of about .05. In all silhouettes, the size of the head remained unchanged.
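To make the stimulus construction concrete, the sketch below derives the female-silhouette scaling implied by each target SDS when the male silhouette is held fixed. It is illustrative only: the scaling rule (female = male / SDS) follows from the definition SDS = male height / female height, but the exact pixel sizes used in the original stimuli are not reported.

```python
# Illustrative arithmetic for the six stimulus pairs: male silhouette fixed
# at 1.0 (arbitrary units), female silhouette rescaled to hit each target
# SDS. Ratios are those reported in the text.
male_height = 1.0
target_sds = [1.19, 1.14, 1.09, 1.04, 1.00, 0.96]

for sds in target_sds:
    female_height = male_height / sds
    print(f"SDS {sds:.2f}: female silhouette at {female_height:.3f} of male height")
```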
Participants were told that each pair represented partners in a romantic relationship and that the pairs of partners differed in their body heights. They were asked to choose the pair they would prefer in their own relationship (preferred SDS). Participants were then requested to provide information on their partner's height (actual SDS). In the Hadza sample, participants chose the pair from the SDS stimuli set that depicted their current relationship; in the Tsimane' sample, we measured the actual height of the partner. Additionally, to examine the general validity of the SDS stimuli that we used in both studies, we also collected data on perception of the actual SDS for a subsample of Tsimane' men and women (as described for the Hadza). We correlated the participants' perception of their actual SDS (six categories; see Figure 1) with the measured SDS between the partners (raw, uncategorized data), resulting in a strong and significant correlation, r(105) = .66, p < .0001 (Spearman's rank correlation was identical). This suggests that judgments of a couple's actual SDS on the basis of our stimuli were reliable (note that because participants necessarily had to approximate their SDS given the limited range of categories to choose from, and the categories presented do not cover all possible SDSs, there is a degree of unavoidable measurement error in these assessments; this means that we should not expect a perfect correlation between perceived and actual SDS). In the few polygynous marriages (three men had two wives, who in all cases were sisters), we collected information about both partners. In general, the Hadza and Tsimane' were able to comprehend the questions easily.
Analyses
We performed separate analyses for each sex and each population. We did so because: 1) we wished to facilitate comparison with previous studies that followed a similar analysis strategy; 2) we could not use parametric analysis methods because of the distribution of our variables; and 3) the methods used differed slightly between our study populations.
Chi-squared tests were used to assess the findings based on the stimuli due to their categorical nature. Sex differences were analyzed using Mann-Whitney U tests, and Spearman rank correlations (rs) were used to test for association between two variables.
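A minimal sketch of these three test types in Python (using SciPy) is shown below; the arrays are hypothetical placeholders for the preference categories and partner SDS values, not the study data.

```python
# Illustrative versions of the three tests. Preference categories are
# coded 1-6 for the six SDS stimuli; all values here are simulated.
import numpy as np
from scipy.stats import chisquare, mannwhitneyu, spearmanr

rng = np.random.default_rng(3)
male_prefs = rng.integers(1, 7, size=74)    # preferred SDS category, men
female_prefs = rng.integers(1, 7, size=55)  # preferred SDS category, women

# Chi-squared goodness-of-fit: is the distribution of preferred SDS
# categories different from uniform?
counts = np.bincount(male_prefs, minlength=7)[1:]
chi2, p = chisquare(counts)
print(f"chi2(5) = {chi2:.2f}, p = {p:.3f}")

# Mann-Whitney U: do men and women differ in median SDS preference?
u, p = mannwhitneyu(male_prefs, female_prefs)
print(f"U = {u:.0f}, p = {p:.3f}")

# Spearman rank correlation: does preferred SDS track actual partner SDS?
actual_sds = rng.normal(1.07, 0.05, size=74)
rho, p = spearmanr(male_prefs, actual_sds)
print(f"rs = {rho:.2f}, p = {p:.3f}")
```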
Hadza preferences for SDS
Our Hadza sample comprised 55 women between 18 and 70 years of age (M = 37.78, SD = 14.16) and 74 men (for one man, age was missing) between the ages of 17 and 80 (M = 38.89, SD = 16.07). The height of male participants ranged from 142.6 cm to 178.3 cm (M = 161.32; SD = 7.01) and the height of female participants ranged from 138.0 cm to 165.1 cm (M = 151.44; SD = 5.89). Therefore, similarly to all previously investigated populations, body height is sexually dimorphic in the Hadza, with men being on average 7% taller than women.
From the 74 male participants, the largest proportion (32%) preferred the highest SDS (1.19), indicating a preference for relationships in which the man is much taller than the woman. Many men also preferred to be in a relationship in which the SDS was 1.14 (18%), 1.04 (15%), 1.00 (15%), or 0.96 (15%). An SDS of 1.09 was least preferred (5%) (see Figure 2). Similarly to the men, the largest proportion of female participants (38%) preferred the highest SDS (1.19). Of the 54 women, a considerable number preferred a relationship in which the SDS was 1.14 (11%), 1.04 (9%), 1.00 (18%), or 0.96 (16%). Again similar to men, an SDS of 1.09 was least preferred (7%) (see Figure 2). In both men and women, the distribution of height preferences was significantly different from uniform (men: χ²(5, n = 74) = 17.14, p < .01; women: χ²(5, n = 55) = 21.26, p < .001). Furthermore, the sexes did not differ significantly in their median SDS preference (Mann-Whitney U = 2015, z = -0.098, p = .92). In conclusion, both men and women mostly preferred an SDS such that the man was much taller than the woman.
When examining the correlation between one's own height and preferred SDS, we found that neither male height (rs[73] = .01, p = .93) nor female height (rs[54] = .11, p = .45) was related to the preference for SDS in the Hadza.
The association between preferences and actual partner height among the Hadza
Next, we investigated the patterns of (perceived) actual SDS among couples (obtained through the use of the stimuli), examining data from 62 married men and 52 married women. The largest proportion of male participants (32%) reported being in marriages with the highest SDS (1.19). Marriages in which the SDS was reported to be 1.14 (24%), 1.09 (7%), 1.04 (18%), 1.00 (13%), or 0.96 (7%) were all lower in frequency. The patterns reported by the married women were very similar to those reported by the married men. The largest proportion of female participants (33%) reported being in marriages with the highest SDS (1.19). Fewer women were in marriages in which the SDS was reported to be 1.14 (17%), 1.09 (6%), 1.04 (17%), 1.00 (15%), or 0.96 (12%). Thus, marriages were most often reported to have a very large SDS, which is at odds with the average SDS of 1.07 in this population (see Discussion).
When examining the association between an individual's preferences and actual partner height, we found a positive correlation in both men (rs[62] = .40, p < .001) and women (rs[52] = .49, p < .0001). This suggests that in both men and women, those people who preferred large SDS were also in marriages with large SDS.
Tsimane' preferences for Sexual Dimorphism in Stature
Our sample comprised 70 women between the ages of 17 and 50 (M = 30.49, SD = 10.31) and 68 men between the ages of 18 and 50 (M = 33.97, SD = 11.56). The height of male participants ranged from 151.3 cm to 180.0 cm (M = 166.50; SD = 5.66); the height of female participants ranged from 146.0 cm to 165.9 cm (M = 154.92; SD = 4.10). Therefore, body height is sexually dimorphic in the Tsimane', with men being on average 7% taller than women.
For 14 women and 9 men we were unable to record height preferences. From the 59 male participants, the largest proportion (36%) preferred the highest SDS (1.19), followed by an SDS of 1.04 (27%). A relatively high percentage of participants also preferred an SDS of 1.14 (20%). A lower percentage of men preferred an SDS of 0.96 (5%), 1.00 (7%), or 1.09 (5%) (see Figure 3). Thus, Tsimane' men generally preferred to be taller than their wives, with the largest SDS most preferred. From the 56 women, the highest percentage (23%) preferred an SDS of 1.00, indicating a preference for relationships in which the man and the woman are of the same height. Furthermore, a considerable proportion of females preferred romantic relationships in which the SDS was 0.96 (14%), 1.04 (18%), 1.09 (18%), 1.14 (9%), or 1.19 (18%). In men (χ²(5, n = 59) = 29.98, p < .0001), but not in women (χ²(5, n = 56) = 3.79, p = .58), the distribution of preferences for SDS was significantly different from uniform. Furthermore, there was a significant difference in median SDS preference between the sexes (Mann-Whitney U = 1117, z = -3.06, p < .01). Thus, these results suggest that men are more consistent in their preferences, and that they prefer on average a larger difference in height between themselves and their partner than women do.
The association between preferences and actual partner height among the Tsimane'
When examining the heights of married individuals, we found no evidence for assortative mating for height, as evidenced by the non-significant correlation between partner heights (rs[76] = .11, p = .35). In total, we observed three couples out of 76 in which the woman was taller than the man. Based on the SDS of all partnered individuals, we found that the average SDS was 1.07 (SD = 0.05).
Preferences for SDS did not reflect actual partner height in Tsimane' women (rs[54] = -.15, p = .28) or men (rs[54] = -.04, p = .77). When examining the subsample of men and women who also judged their actual SDS using our stimuli, we again found that preferred SDS did not predict estimated actual SDS (men: rs[43] = -.06, p = .72; women: rs[41] = .05, p = .78). Thus, preferences were not related to partner height.
In contrast to the Hadza, we observed that the estimates of actual SDS assessed via the stimuli were in line with the average measured SDS of 1.07 in this population, and more generally with the shape of the distribution based on the measured SDSs: 30% of participants answered that they were in a marriage in which the SDS was perceived to be 1.09. A large number of participants also indicated a perceived SDS of 1.04 (24.5%) or 1.14 (21%). Much lower proportions were found for an SDS of 1.19 (10%), 1.00 (9.5%), or 0.96 (5%).
Discussion
In the present study, we sought to investigate SDS (Sexual Dimorphism in Stature) preferences and patterns of partners' actual SDS in couples in two traditional societies-the Hadza of northern Tanzania and the Tsimane' of Bolivia. Our findings show that preferred height differences are different in these populations compared to previous reports on Western populations (Courtiol, Raymond, et al., 2010;Fink et al., 2007;Pawlowski, 2003;Shepperd and Strathman, 1989;Stulp, Buunk, Pollet, Nettle, et al., 2013;Stulp, Buunk, Verhulst, et al., 2013).
In nomadic foragers from the native Tanzanian Hadza tribe, both men and women most often preferred an SDS with the man being much taller than the woman in the couple (but see the limitations), and men and women were similar in their most preferred SDS. Furthermore, an individual's height did not predict their preferred partner height. We did observe that preferred height differences were associated with (perceived) actual partner height differences in both men and women, suggesting that those who preferred a larger SDS also were in a relationship with a relatively large SDS. Interestingly, in a previous study conducted in the Hadza, Sear and Marlowe (2009) reported no evidence of a male-taller norm in this tribe; however, their data were collected from 1995 onward, so there may have been cultural changes in this regard over the last 20 years.
Preferences for SDS among the Tsimane', foragers and farmers living in the Amazon, were different to those observed among the Hadza (and again different to those observed in Western societies). Whereas partner stature did not seem to be important for women's mate preference (the observed pattern of preferences was not different from random), men did seem to value height in their partner, with the majority of male participants (36%) preferring the largest partner height differences. Consequently, men preferred a larger SDS than women. Like in the Hadza, an individual's height was not associated with preferred SDS. Moreover, and unlike the case in the Hadza, preferred SDS did not predict spousal SDS.
Although the preferences for very large SDS in Hadza men and women and in Tsimane' men differ from the typical patterns observed in Western countries (e.g., Pawlowski, 2003), the fact that men generally preferred an SDS in which the man is taller than the woman is at least consistent. The lack of clear height preferences in Tsimane' women, however, is interesting and again at odds with findings from Western populations. We speculate that marriage patterns may account for this result. In the Tsimane', parental influence on children's marital decisions is stronger in the case of daughters compared to sons (Gurven et al., 2009). Women may thus appear to be less selective in terms of preferences for partner height because they have less influence over their choice of marriage partners than in Western populations. In other words, Tsimane' women's preferences may be masked to a certain degree.
Additionally, an overview of published findings from small-scale societies (including the data of the present study) shows that partner height preferences in monogamous societies (e.g., Hadza and Baka) are more similar to those observed in Western societies than in "extremely" polygynous societies like the Datoga or Yali (see Table 1). We suggest that in societies where individuals are free to choose their partners, women's height preferences might tend towards a preference for taller men, whereas this preference might not be observed in other societies, including polygynous societies with significant wealth stratification or societies in which spouses are predominantly selected by relatives. We do note, however, that very few studies on this topic have actually been published (see Table 1), and our conclusions thus should be treated with a degree of caution.
A pattern of results similar to that observed in Western societies may also be present in societies where height is related to measures of social status and physical strength, as is the case for those Western populations (e.g., Judge and Cable, 2004; Sorokowski, 2010; Stulp, Buunk, Kurzban, et al., 2013) in which taller men are also perceived as more dominant, of higher status, more intelligent, and healthier (Chu and Geary, 2005; Jackson and Ervin, 1992; Melamed, 1992). In Tsimane' adults, there is no association between height and perceptions of dominance and intelligence (Undurraga et al., 2012). According to Godoy et al. (2010), height is furthermore not associated with income or wealth in this society. Consequently, height may be less important as a trait affecting mate choice in this population, which could explain why women did not express any clear height preference for a potential partner. Similar reasoning might also hold for other populations (e.g., Yali) where no clear SDS preferences were observed (see Table 1).
We also examined the relationship between preferred SDS and actual SDS. In the Hadza, there was a positive correlation between preferred and actual SDS, suggesting that in this population mate preferences might influence mate choice. Among the Tsimane', however, preferred SDS did not correlate with actual SDS. Thus, preferred height differences did not noticeably relate to actual partner height differences. One reason for the existence of this effect in the Hadza, but its absence in the Tsimane', is that in the latter population marriage partners are substantially influenced by parents (Gurven et al., 2009). Although there may be conflicts between parents and offspring about the most appropriate partner (Apostolou, 2008; van den Berg, Fawcett, Buunk, and Weissing, 2013), it is difficult to predict how "preferences" among individuals and their parents diverge in this population. Nonetheless, it is certain that because of the parental influence, the choice of partner is more constrained in the Tsimane' than for Hadza men and women. Another reason why preferred SDS was not associated with actual SDS among the Tsimane' may be the limited number of mate options (Wolanski and Siniarska, 1984) in this society. A man mostly chooses a partner from his cousins (Winking et al., 2011). Finally, characteristics other than height may be much more important when selecting a mate (Stulp, Buunk, and Pollet, 2013).
Another possible explanation of the between-population differences with respect to partner height preferences and mating patterns may be the fact that the relationship between height, health, and survival is dependent on the environment (Godoy et al., 2010; Sear, 2010; Stulp and Barrett, in press). For instance, life history theory predicts that in ecologies with high mortality and scarcity of resources, more energy should be invested in reproduction rather than growth, and shorter rather than taller individuals may have an advantage in such populations (Walker et al., 2006). However, comparing the Hadza with the Datoga (see Table 1), two neighboring populations, we find remarkably different SDS preferences. Given that these tribes live in very similar ecological conditions but are culturally very distinct (the Datoga are polygynous pastoralists, and the Hadza are serially monogamous nomadic foragers), these divergent preferences may be due to differences in cultural practices (e.g., traditions, mating systems, economy) rather than ecological conditions.

We note some limitations of our study. First, the stimuli used may have resulted in biased measures of actual SDS between partners in the Hadza. Indeed, both when choosing their preferred SDS (see Figure 1) and when indicating their actual SDS, most men and women indicated a rather large difference. For instance, 63% of men and 56% of women indicated that their actual SDS was between 1.09 and 1.19. Given that the average SDS in this population equals 1.07, the large proportion of respondents reporting these higher values suggests that Hadza men and women may have perceived our stimuli or our questions differently from other populations. Alternatively, our Hadza sample may have been biased towards couples with large SDS. There are two important things to note here, however: first, although there may have been biases in perception, these are unlikely to explain the correlation between mate preferences for SDS and actual SDS in men and women; second, similar limitations were not observed in the Tsimane'. In the latter sample, the perceived SDS between partners was in line with the average SDS and its distribution in that population. Moreover, perceived partner SDS was strongly associated with measured partner SDS, suggesting that, in the Tsimane' sample, preferences for SDS are reliable. Another limitation is that, although we interpreted the association between preferred SDS and actual SDS in the Hadza as a correspondence between partner preferences and choice, we cannot rule out the possibility that partner characteristics influence partner preferences (see Courtiol, Picq, et al., 2010 for a discussion).
We suggest that height preferences in (small-scale) societies may best be interpreted as a compromise between the costs and benefits associated with male and female body height, relating to reproduction, individual well-being, and survival. Because of different ecological and social pressures, height preferences may vary among foragers, traditional pastoralists, and farmers. In some societies, height in men is associated with dominance and status, which may facilitate male reproductive success and have an impact on female preferences. Power asymmetries between the genders, inequalities in wealth, pathogen loads, and access to modern medicine might be additional factors that help explain height preferences. Documenting further variation in height preferences across distinct populations is required before any reliable conclusions can be drawn regarding how patterns of height preference covary with ecological and social circumstances.
The IceCube Pie Chart: Relative Source Contributions to the Cosmic Neutrino Flux
Neutrino events from IceCube have recently been associated with multiple astrophysical sources. Interestingly, these likely detections represent three distinct astrophysical source types: active galactic nuclei (AGN), blazars, and tidal disruption events (TDE). Here we compute the expected contributions of AGNs, blazars and TDEs to the overall cosmic neutrino flux detected by IceCube based on the associated events, IceCube's sensitivity, and the source types' astrophysical properties. We find that, despite being the most commonly identified sources, blazars cannot contribute more than 11% of the total flux (90% credible level), consistent with existing limits from stacked searches. On the other hand, we find that either AGNs or TDEs could contribute more than 50% of the total flux (90% credible level), although stacked searches further limit the TDE contribution to $\lesssim 30\%$. We also find that so-far unknown source types contribute at least 10% of the total cosmic flux with a probability of 80%. We assemble a pie chart that shows the most likely fractional contribution of each source type to IceCube's total neutrino flux.
INTRODUCTION
The Universe produces a quasi-diffuse flux of high-energy (> TeV) neutrinos (hereafter the cosmic neutrino flux; Aartsen et al. 2013; Aartsen et al. 2014), whose properties are now well characterized (Aartsen et al. 2016). The origin of this cosmic flux is, nevertheless, not yet understood.
High-energy neutrinos have recently been identified from several distinct astrophysical sites. These sources include the TXS 0506+056 blazar (Aartsen et al. 2018a,b), a nearby Seyfert galaxy (NGC 1068; Aartsen et al. 2020a), and a tidal disruption event (TDE; AT2019dsg; Stein et al. 2021). Several other blazars have also been identified with probable high-energy neutrino associations (Kadler et al. 2016; Kun et al. 2021). At the same time, the contribution of several source types to the overall flux has been constrained, including blazars (Yuan et al. 2020), TDEs (Senno et al. 2017; Murase et al. 2020b; Murase & Waxman 2016; Guépin et al. 2018; Stein 2019), and gamma-ray bursts (GRBs; Abbasi et al. 2012). In this paper we evaluate the expected overall contribution of different source types to the cosmic neutrino flux based on the associated individual neutrino sources, taking into account the uncertainty in source associations. We show that the widely different properties of different source types mean that the same number of detections translates to different expected contributions to the overall flux. In addition, computing the expected contribution of each source type to the overall cosmic neutrino flux allows us to estimate the fraction of the overall flux that arrives from so-far unidentified source types, i.e., sources that are not AGNs, blazars or TDEs.
An additional goal of the present work is to demonstrate how different source features, such as their number density, number of detections, or cosmic rate evolution, contribute to their estimated contribution to the overall neutrino flux. Therefore, before discussing our Bayesian estimate that includes most accessible details about the source populations, we first compute fractional contributions through simpler estimates in which the role of different source properties is more accessible.
The paper is organized as follows. We first carry out our simplest "warm-up" calculation of fractional contributions in Section 2 using detections that followed high-energy neutrino alerts publicly released by IceCube. Next, we introduce a simplified model in Section 3 that highlights the relative importance of population properties. We then introduce our most detailed and realistic method in Section 4 and its implementation in Section 5 to obtain the estimated relative contributions for both detected and unknown source types. Results for this most detailed model are presented in Section 6. We summarize our conclusions in Section 8.
WARMING UP: THE FRACTION OF DISCOVERIES
It is instructive to evaluate the fraction of high-energy neutrino alerts (Aartsen et al. 2017a; Blaufuss et al. 2019) that lead to likely associations with counterparts. If one further factors in the completeness of the catalog of potential counterparts, one can translate this fraction into a constraint on the contribution of the specific source class to the total neutrino flux. Below we examine blazars and TDEs, the source types identified so far in association with IceCube's neutrino alerts.
We first consider blazars that were found in association with the $N_{\rm alert} \sim 60$ alerts sent to the community by IceCube so far: besides the well-known TXS 0506+056 (Aartsen et al. 2018b), more recently PKS 1502+106, another exceptionally bright blazar, was identified in coincidence with a well-localized neutrino track (Taboada & Stein 2019). Other claimed associations are less significant (e.g., GB6 J1040+0617; Garrappa et al. 2019) and/or have not been found in association with alerts (PKS 1424-41; Kadler et al. 2016), and since we do not consider backgrounds here, they are not counted in this simple exercise. We hence count two detections, $N_{\det} = 2$. We take into account that roughly half of IceCube's alerts are estimated to be of astrophysical origin (signalness $s \approx 0.5$). In addition, based on their gamma-ray flux, we assume that a fraction $f \sim 0.7$ of the high-energy neutrino flux from this source type comes from electromagnetically resolved blazars. With these quantities we can estimate the fraction of cosmic, high-energy neutrinos due to a blazar population: $N_{\det}/(N_{\rm alert}\, f\, s) = 0.10$.
Likewise for TDEs, Stein et al. (2021) associated one TDE ($N_{\det} = 1$) from a search of alerts ($N_{\rm alert} = 9$). We estimate the TDEs' completeness factor to be $f = 0.5$, which assumes that TDEs can be detected electromagnetically out to ∼1 Gpc. These numbers translate to a fraction of the diffuse neutrino flux due to a population of TDEs of $N_{\det}/(N_{\rm alert}\, f\, s) = 0.45$.
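The warm-up arithmetic above reduces to a one-line formula. The sketch below is a minimal illustration, not part of the original analysis; the alert counts, completeness factors, and signalness are taken directly from the text, and it reproduces the two quoted fractions:

```python
def flux_fraction(n_det, n_alert, f, s):
    """Fraction of the cosmic flux attributed to a source class from
    counting counterpart associations among alerts: N_det/(N_alert*f*s)."""
    return n_det / (n_alert * f * s)

print(flux_fraction(n_det=2, n_alert=60, f=0.7, s=0.5))  # blazars: ~0.10
print(flux_fraction(n_det=1, n_alert=9, f=0.5, s=0.5))   # TDEs: ~0.44 (quoted as 0.45)
```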
These initial estimates come with significant caveats, e.g. they do not take into account the significance of the observations, and moreover are not generalizable to observations that are not based on single-neutrino coincidences. Nevertheless, they are illustrative of a key point, namely the importance of the completeness of the catalog of counterparts in extrapolating the flux to a full population of sources.
SIMPLE MODEL
In order to characterize the role of different population properties in connecting detections with the estimated contribution to the overall cosmic neutrino flux, we consider the following simple model. We separately consider continuous neutrino sources that are detected through time-integrated searches, and searches that identify sources via the detection of a single high-energy neutrino. Similar computations converting resolved sources to the overall cosmic flux can be found in the literature (see, e.g., Lipari 2008).
Time-integrated detection
Time-integrated searches are particularly relevant for continuous sources with high number densities where the detection of multiple neutrinos is necessary to claim detection. Let F ν,0 be the neutrino threshold above which such a time-integrated search is able to identify a neutrino flux. For simplicity we neglect any dependence on the sources' neutrino spectrum and sky location.
Let sources within a given source type be uniformly distributed in the local universe with number density $\rho$. Let each source have unknown identical neutrino luminosity $L_\nu$. The maximum distance $r_{\max}$ within which these sources can be detected is
$$r_{\max} = \left( \frac{L_\nu}{4\pi F_{\nu,0}} \right)^{1/2}. \tag{1}$$
Within this distance the expected number of sources is
$$\langle N_{\det} \rangle = \frac{4}{3}\pi r_{\max}^3\, \rho. \tag{2}$$
We will consider the number $N_{\det}$ of detected sources to be the best estimate of the expected number of detections, i.e. $N_{\det} = \langle N_{\det} \rangle$. With this, the expected neutrino flux from the considered source type within $r_{\max}$ is
$$F_\nu(r < r_{\max}) = \int_0^{r_{\max}} 4\pi r^2 \rho\, \frac{L_\nu}{4\pi r^2}\, dr = \rho L_\nu r_{\max} = 3 N_{\det} F_{\nu,0}, \tag{3}$$
where we made use of Eqs. 1 and 2 to change variables.
Let $f_r(r_{\max})$ be the fraction of the total neutrino flux from a given source type that comes from sources within distance $r_{\max}$. For not too large distances we approximate the dependence of $f_r$ on $r_{\max}$ to be linear, i.e. $f_r(r_{\max}) = f_0\, r_{\max}$, where $f_0$ is a source-type-dependent constant. To compute $f_0$, we need to integrate the neutrino flux from the entire source population accounting for cosmic evolution (Bartos et al. 2017). With this, we can express the overall flux expected from the considered source type as (see also Murase et al. 2012, 2018)
$$F_{\nu,\mathrm{tot}}^{(\mathrm{int})} = \frac{F_\nu(r < r_{\max})}{f_r(r_{\max})} = \frac{3 N_{\det} F_{\nu,0}}{f_0} \left( \frac{4\pi\rho}{3 N_{\det}} \right)^{1/3}, \tag{4}$$
where the superscript "(int)" refers to time-integrated detection. We see that, for the same number of detections, the corresponding flux can be very different due to its dependence on $\rho$, which can vary by orders of magnitude between sources.
Single-neutrino detection
For rare and/or transient sources, a single high-energy neutrino can be sufficient to claim detection. Let F ν,1 be the source flux for which the expected number of detected neutrinos is 1 during the observing period T obs . For transient sources with duration τ the corresponding actual source flux is T obs /τ times higher.
Let sources within a given source type be uniformly distributed in the local universe with number density $\rho$. Let each source have identical neutrino luminosity $L_\nu$. For transient sources, we will adopt the same notation, defining $\rho = R\, T_{\rm obs}$ with source rate density $R$, and $L_\nu = E_\nu T_{\rm obs}^{-1}$ with $E_\nu$ the total energy radiated in high-energy neutrinos.
A single neutrino can only lead to discovery if it is directionally (and temporally for transients) coincident with a source candidate. A single neutrino can be detected with non-vanishing probability even from distant sources. Therefore, single-neutrino searches will not be limited by a distance threshold for detection as in the time-integrated case. Instead, a solution is that the number of source candidates considered in the search is capped in order to limit the search's trial factor. Source candidates will be selected for a search such that those expected to have the highest neutrino flux at Earth are included. If all sources produce the same neutrino flux then the closest sources are selected. If we limit the source candidates in a search to the number $N_{\max}$, then this corresponds to an effective maximum search distance of
$$r_{\max} = \left( \frac{3 N_{\max}}{4\pi\rho} \right)^{1/3}. \tag{5}$$
The expected number of detected sources within this distance is
$$\langle N_{\det} \rangle = \int_0^{r_{\max}} 4\pi r^2 \rho\, \frac{L_\nu}{4\pi r^2 F_{\nu,1}}\, dr = \frac{\rho L_\nu r_{\max}}{F_{\nu,1}}, \tag{6}$$
where we used that $F_{\nu,1}$ is the flux for which the expected number of detected neutrinos from a single source is 1. We will consider the number $N_{\det}$ of detected sources to be the best estimate of the expected number of detections, i.e. $N_{\det} = \langle N_{\det} \rangle$. With this, the expected neutrino flux from the considered source type within $r_{\max}$ is
$$F_\nu(r < r_{\max}) = \rho L_\nu r_{\max} = N_{\det} F_{\nu,1}, \tag{7}$$
where we made use of Eq. 6 to change variables.
Similarly to the previous section, we introduce the fraction $f_r(r_{\max}) = f_0\, r_{\max}$ of the total neutrino flux from a given source type that comes from sources within distance $r_{\max}$. With this, we can express the overall flux expected from the considered source type as
$$F_{\nu,\mathrm{tot}}^{(\mathrm{single})} = \frac{N_{\det} F_{\nu,1}}{f_0\, r_{\max}} = \frac{N_{\det} F_{\nu,1}}{f_0} \left( \frac{4\pi\rho}{3 N_{\max}} \right)^{1/3}, \tag{8}$$
where the superscript "(single)" refers to single-neutrino detection. This formula is very similar to the one we obtained for time-integrated searches (see Eq. 4), and shows that the number density (or rate density) of sources can substantially alter the expected contribution from a source type.
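The two scaling relations reconstructed above (Eqs. 4 and 8) are easy to evaluate numerically. The sketch below is a minimal implementation assuming those reconstructed forms; the example call uses the gamma-ray burst parameters given later in this section and gives ≈1.6×10⁻⁹ erg cm⁻² s⁻¹, consistent with the ≈10⁻⁹ order of magnitude quoted there:

```python
import numpy as np

def flux_time_integrated(n_det, rho, f_nu0, f0):
    """Overall source-type flux for time-integrated detection (Eq. 4).
    rho [Mpc^-3], f_nu0 [erg cm^-2 s^-1], f0 [Mpc^-1]."""
    r_max = (3.0 * n_det / (4.0 * np.pi * rho)) ** (1.0 / 3.0)  # invert Eq. 2
    return 3.0 * n_det * f_nu0 / (f0 * r_max)

def flux_single_neutrino(n_det, rho, f_nu1, f0, n_max):
    """Overall source-type flux for single-neutrino detection (Eq. 8)."""
    r_max = (3.0 * n_max / (4.0 * np.pi * rho)) ** (1.0 / 3.0)  # Eq. 5
    return n_det * f_nu1 / (f0 * r_max)

# GRBs: N_det = 2.3, rho = 1e-8 Mpc^-3, F_nu1 = 4e-9 * (1 yr / 10 yr),
# f0 = 2e-4 Mpc^-1, N_max = 1000
print(flux_single_neutrino(2.3, 1e-8, 4e-9 / 10, 2e-4, 1000))  # ~1.6e-9
```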
Estimated contribution based on simple model
We now estimate the contribution of different detected source types to the cosmic neutrino flux. We additionally consider sources that have so far been undetected to gauge how strong a constraint their non-detection places on their contribution to the overall flux. In this section we impose no constraint on the combined contribution of the considered source types; it is interesting that their total contribution is less than, but comparable to, the total IceCube flux.
Flux fraction
First, we computed f 0 for different source populations, which is needed to estimate their total flux. We considered (non-jetted) AGNs, tidal disruption events (TDEs) and, for comparison, populations that follow the cosmic star-formation rate. Our results are shown in Fig. 1. For AGNs we adopted their number density for bolometric luminosity > 10 43 erg s −1 from Lacy et al. (2015). For TDEs we adopted their cosmic density evolution from Sun et al. (2015), while we adopted the cosmic star formation rate from Li (2008).
Blazars
We adopted the density $\rho_{\rm blazar} = 10^{-9}$ Mpc$^{-3}$ corresponding to a high-luminosity class of blazars called flat-spectrum radio quasars (FSRQs; Murase & Waxman 2016). With this choice we assume that a subclass of blazars, FSRQs, is mainly responsible for neutrino emission from this class. We used $N_{\det} = 3$ and $T_{\rm obs} = 5$ yr; the choice of $T_{\rm obs}$ reflects the duration since IceCube began real-time alerts. We considered blazars to be detectable with single-neutrino searches. We adopted $F_{\nu,1} = 4 \times 10^{-10}$ erg cm$^{-2}$ s$^{-1}$ (1 yr/$T_{\rm obs}$), where we assumed a neutrino spectral index of 2.5 and a neutrino energy range $\varepsilon_\nu \in [100\,\mathrm{GeV}, 100\,\mathrm{PeV}]$. Here we used the flux corresponding to an expected number of 1 detected neutrino in the range $\varepsilon_\nu \in [100\,\mathrm{TeV}, 100\,\mathrm{PeV}]$ from IceCube Collaboration (2016), extrapolated to $\varepsilon_\nu \in [100\,\mathrm{GeV}, 100\,\mathrm{PeV}]$ with a spectral index of 2.5 to make it compatible with the flux from the time-integrated search. We further adopted $f_0 = 5 \times 10^{-4}$ Mpc$^{-1}$ (see Fig. 1).
With these parameters, we obtained $F^{(\mathrm{blazar})}_{\nu,\mathrm{tot}}$ as listed in Table 2.

TDEs
In order to determine $N_{\max}$, we set the maximum detectable distance of TDEs to be $r_{\max,\mathrm{TDE}} = 1$ Gpc given the distance range of past identified TDEs (van Velzen et al. 2021). We assumed that electromagnetic follow-up of 100 TeV neutrinos could identify every TDE within $r_{\max,\mathrm{TDE}}$. We set $N_{\max} = \frac{4}{3}\pi r_{\max,\mathrm{TDE}}^3\, \rho_{\mathrm{TDE}} \approx 1$. With these parameters, we obtained $F^{(\mathrm{TDE})}_{\nu,\mathrm{tot}} = 1.6 \times 10^{-8}$ erg cm$^{-2}$ s$^{-1}$.
Gamma-ray bursts
While no significant neutrino emission has been associated with gamma-ray bursts (GRBs), here we considered the corresponding total flux from GRBs assuming N det = 2.3, which is the 90% confidence level upper limit corresponding to no detections. This can give us a picture of the total flux contribution that is consistent with the lack of association. We adopted a local density of ρ GRB = R GRB T obs = 10 −8 Mpc −3 where we assumed T obs = 10 yr (Wanderman & Piran 2010). We considered GRBs to be detectable with single-neutrino searches.
Given the short duration and rarity of GRBs, and their easy observability with all-sky gamma-ray detectors, single-neutrino searches can consider neutrinos below the 100 TeV limit used for other single-neutrino searches. Adopting a threshold of 1 TeV and a neutrino spectral index of 3, we consider a threshold flux F ν,1 = 4 × 10 −9 erg cm −2 s −1 (1yr/T obs ), i.e. 100 times lower than for an energy threshold of 100 TeV. We adopted f 0 = 2 × 10 −4 Mpc −1 , similar to the fraction expected for a population tracing the star formation rate. We took N max = 1000, comparable to the total number of detected GRBs.
With these parameters, we obtained $F^{(\mathrm{GRB})}_{\nu,\mathrm{tot}} \lesssim 10^{-9}$ erg cm$^{-2}$ s$^{-1}$. We conclude that the lack of detection presents a strong constraint on the GRB contribution to the total cosmic neutrino flux; their contribution is < 1%. This limit is consistent with constraints from searches by IceCube for neutrinos coincident with GRBs (Aartsen et al. 2017b).
Core-collapse supernovae
No significant neutrino emission has been associated with supernovae. Here we considered the corresponding total flux from core-collapse supernovae assuming N det = 2.3 to characterize the limit this lack of detection represents.
We adopted a core-collapse supernova rate density of ∼10 −4 Mpc −3 yr −1 (Taylor et al. 2014), and assumed that neutrino emission is significant for a duration of T obs = 10 yr. With this we obtain an effective density ρ SN = 10 −3 Mpc −3 . We considered supernovae to be detectable with time-integrated searches.
With these parameters, we obtained $F^{(\mathrm{CCSN})}_{\nu,\mathrm{tot}} \lesssim 10^{-7}$ erg cm$^{-2}$ s$^{-1}$, i.e. greater than IceCube's overall flux. We therefore conclude that the lack of observation does not present a meaningful constraint on the contribution of supernovae to the total cosmic neutrino flux.
Starburst galaxies

We further considered $N_{\det} = 2.3$ to characterize the limit that the lack of detection represents. Using Eq. 4 we obtained $F^{(\mathrm{starburst})}_{\nu,\mathrm{tot}} \lesssim 2.5 \times 10^{-8}$ erg cm$^{-2}$ s$^{-1}$, limiting the starburst contribution to 40% of the IceCube flux. A higher effective number density for starburst galaxies would correspond to an even less stringent constraint.
FULL BAYESIAN MODEL
We now turn to a more detailed derivation of the expected neutrino flux from different source populations where we take into account the cosmic evolution of sources and the statistical uncertainty of the number of detections.
Probability density of the expected number of detections
Assume we have a set of $N_{\rm tot}$ detection candidates for source type $S$. Each candidate is either from the astrophysical source of interest or from the background. For our purposes here, astrophysical neutrinos from unassociated sources are counted as background. Candidate $i$ has a set of reconstructed parameters denoted with $\vec{x}_i$. The set of reconstructed parameters for all candidates is denoted with $\vec{x} = \{\vec{x}_1, \ldots, \vec{x}_{N_{\rm tot}}\}$. Let $p(\vec{x}_i|S)$ and $p(\vec{x}_i|B)$ be the probability densities of observing $\vec{x}_i$ from an astrophysical source of type $S$ and from the background, respectively. We compute the probability density of the expected number of detected events, denoted with $N_{\det}$, as (Farr et al. 2015; note the similarity to Eq. 7 in Braun et al. 2008)
$$p(N_{\det}\,|\,\vec{x}) \propto \pi(N_{\det}) \int_0^\infty \pi(N_B)\, e^{-(N_{\det} + N_B)} \prod_{i=1}^{N_{\rm tot}} \left[ N_{\det}\, p(\vec{x}_i|S) + N_B\, p(\vec{x}_i|B) \right] dN_B, \tag{9}$$
where $N_B$ is the expected number of detected background events, which we marginalize over. We used the Poisson Jeffreys prior for $N_B$. We define the prior probability density $\pi(N_{\det})$ of $N_{\det}$ implicitly by assuming that the prior probability density of the total flux from a given source type is uniformly distributed between 0 and IceCube's measured cosmic neutrino flux minus the flux of the other sources. Therefore, we will have a three-dimensional prior probability density for the three sources considered below, with uniform probability density and with the boundary condition that the sum of the total flux from the three sources cannot exceed IceCube's measured flux.
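The marginalization in the reconstructed Eq. 9 is straightforward to evaluate on a grid. The sketch below is a simplified stand-in: it assumes the reconstruction above is faithful, uses the Jeffreys prior $\pi(N_B) \propto N_B^{-1/2}$, and replaces the paper's implicit flux-based prior on $N_{\det}$ with a flat prior for brevity:

```python
import numpy as np

def ndet_posterior(sig_bkg_ratios, ndet_grid, nb_max=50.0, nb_points=2000):
    """Grid evaluation of the posterior over N_det (cf. Eq. 9, after
    Farr et al. 2015), marginalizing the expected background count N_B
    with a Jeffreys prior ~ N_B^(-1/2) and a flat prior on N_det
    (a simplification of the paper's flux-based prior).

    sig_bkg_ratios : list of p(x_i|S)/p(x_i|B), one entry per candidate
    """
    nb = np.linspace(1e-4, nb_max, nb_points)
    dnb = nb[1] - nb[0]
    post = np.empty_like(ndet_grid)
    for j, nd in enumerate(ndet_grid):
        vals = nb**-0.5 * np.exp(-(nd + nb))     # prior * Poisson term
        for r in sig_bkg_ratios:
            vals = vals * (nd * r + nb)          # p(x_i|B) normalized to 1
        post[j] = vals.sum() * dnb               # marginalize over N_B
    dn = ndet_grid[1] - ndet_grid[0]
    return post / (post.sum() * dn)

# e.g., a single candidate with a calibrated signal-to-background
# ratio of ~30 (see Eq. 14 below for the origin of such a number)
grid = np.linspace(0.0, 10.0, 400)
posterior = ndet_posterior([30.0], grid)
```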
Computing the neutrino luminosity of individual sources
If we know the expected number of detected sources, we can compute the related neutrino luminosity of individual sources. This computation, however, also depends on the luminosity's probability density.
Here we will assume that the neutrino luminosity $L_\nu$ of a source depends on its electromagnetic luminosity $L_\gamma$ (see Section 5 for model-dependent assumptions on $L_\gamma$). For simplicity, we will assume that $L_\nu = \alpha_{\gamma\nu} L_\gamma$, with $\alpha_{\gamma\nu}$ an unknown constant. We further assume that we know the number density $\rho(z, L_\gamma)$ of a continuous source type as a function of redshift $z$ and $L_\gamma$.
With this, we can compute the expected number of detections (see also Murase & Waxman 2016) as
$$\langle N_{\det} \rangle = \int dz \int dL_\gamma\, \frac{dV_c}{dz}\, \rho(z, L_\gamma)\, p_{\det}(z, L_\nu). \tag{10}$$
Here, $d_L$ is the luminosity distance, on which the detectability depends through the flux at Earth, $L_\nu/(4\pi d_L^2)$, and $p_{\det}(z, L_\nu)$ is the probability that a source with luminosity $L_\nu$ at redshift $z$ will be detected. We can compute the expected detection rate for transient sources as well, where we also need to take into account time dilation, obtaining
$$\langle N_{\det} \rangle = T_{\rm obs} \int dz \int dE_\gamma\, \frac{dV_c}{dz}\, \frac{R(z, E_\gamma)}{1+z}\, p_{\det}(z, E_\nu). \tag{11}$$
Here, $T_{\rm obs}$ is the duration of observation, $R(z, E_\gamma)$ is the comoving rate density and $E_\gamma = E_\nu / \alpha_{\gamma\nu}$ is the radiated electromagnetic energy. We can then compute the unknown $\alpha_{\gamma\nu}$ factor by equating the expected number of detections from Eqs. 10 or 11 with the expected number of detections from observations.
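As an illustration of how an integral of the form of the reconstructed Eq. 10 can be evaluated in practice, the sketch below collapses the luminosity integral to a single representative $L_\gamma$ and uses a step-function $p_{\det}$ (detected if the flux at Earth exceeds a threshold). The luminosity, threshold, conversion factor, and density are illustrative placeholders, not values from the paper:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from astropy.cosmology import Planck18 as cosmo

ALPHA_GN = 0.04   # assumed L_nu/L_gamma conversion (illustrative)
L_GAMMA = 1e45    # erg/s, representative gamma-ray luminosity (assumption)
F_TH = 2e-11      # erg cm^-2 s^-1, illustrative flux threshold

def flux_at_earth(z, l_nu):
    d_l = cosmo.luminosity_distance(z).to("cm").value
    return l_nu / (4.0 * np.pi * d_l**2)

def expected_detections(density, z_max=4.0):
    """Eq. 10 with a single representative luminosity and a step-function
    p_det: count all sources out to the horizon redshift where the flux
    at Earth drops below F_TH."""
    l_nu = ALPHA_GN * L_GAMMA
    z_h = brentq(lambda z: flux_at_earth(z, l_nu) - F_TH, 1e-6, z_max)
    def integrand(z):
        dv_dz = 4 * np.pi * cosmo.differential_comoving_volume(z).value  # Mpc^3
        return dv_dz * density(z)
    return quad(integrand, 0.0, z_h)[0]

# toy constant comoving density of 1e-7 Mpc^-3
print(expected_detections(lambda z: 1e-7))
```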
Note that if we assumed that all sources have the same neutrino luminosity then this step would not be needed, and we could simply compute the overall neutrino flux from the number of detections from observations. However, this step enables the incorporation of the source luminosity distribution, which we can base on the observed γ luminosity distributions for the source types in question.
An interesting side product of this step is the determination of $\alpha_{\gamma\nu}$, i.e. the connection between the sources' gamma flux and expected neutrino flux given the number of detections. For blazars and AGNs discussed below, we obtain characteristic conversion factors of $\alpha_{\gamma\nu}^{\rm blazar} \sim 5$ and $\alpha_{\gamma\nu}^{\rm AGN} \sim 0.04$. These characteristic values are obtained by assuming that the number of detections is the expected number (as considered in our "simple model"). It therefore appears that blazars are more efficient neutrino producers than AGNs.
Expected total flux at Earth
To obtain the expected total neutrino flux at Earth for a source type, we integrate over all sources in the universe. We also marginalize over the distribution of the expected number of detections from Eq. 9. For continuous sources we obtain
$$F_{\nu,S,\mathrm{cont.}} = \int dN_{\det}\, p(N_{\det}|\vec{x}) \int dz \int dL_\nu\, \frac{dV_c}{dz}\, \rho(z, L_\nu; N_{\det})\, \frac{L_\nu}{4\pi d_L^2}\, (1+z)^{2-\alpha}, \tag{12}$$
where $\alpha$ is the spectral index of the neutrino spectral density $dN_\nu/d\varepsilon_\nu \propto \varepsilon_\nu^{-\alpha}$, and we expressed $\rho$ as a function of $N_{\det}$ and $L_\nu$. This latter takes into account that we do not know the density spectrum of $L_\nu$, but can determine it using the density spectrum of $L_\gamma$ and $N_{\det}$, and by assuming that $L_\nu \propto L_\gamma$. We can similarly compute the total flux at Earth for transients:
$$F_{\nu,S,\mathrm{trans.}} = \int dN_{\det}\, p(N_{\det}|\vec{x}) \int dz \int dE_\nu\, \frac{dV_c}{dz}\, \frac{R(z, E_\nu; N_{\det})}{1+z}\, \frac{E_\nu}{4\pi d_L^2}\, (1+z)^{2-\alpha}. \tag{13}$$
IMPLEMENTATION
Here we implement the general framework discussed above by considering available information from observations.
Source parameter probability densities
We now turn to $p(\vec{x}_i|B)$ and $p(\vec{x}_i|S)$. We consider $\vec{x}_i = p_i$. The probability density of the p-value is naturally defined for the background distribution as uniform ($p(\vec{x}_i|B) = 1$). However, the signal distribution $p(\vec{x}_i|z_i, L_{\nu,i}, S)$ cannot be determined without a specific astrophysical signal model and the data analysis framework used in the search. Instead, we adopt a "calibrated" ratio of the background and signal probability densities from Sellke et al. (2001):
$$\frac{p(\vec{x}_i|B)}{p(\vec{x}_i|S)} = -e\, p_i \ln(p_i) \tag{14}$$
for $p_i < 1/e$, where $e$ is Euler's number. This ratio is a lower bound over a wide range of realistic p-value distributions for the signal hypothesis, where the only assumption is that the density of $p_i$ under the signal hypothesis should be decreasing in $p_i$. As this "calibrated" ratio is a lower bound it may still be optimistic in terms of the flux contributions; nevertheless, we consider this a reasonable "calibrated" estimate given the unknown signal hypothesis.
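Assuming the reconstructed form of Eq. 14 above, the calibration is a one-liner; for the NGC 1068 p-value of $2\times10^{-3}$ quoted below it gives a signal-to-background ratio of roughly 30:

```python
import numpy as np

def sellke_signal_to_background(p):
    """Signal-to-background density ratio implied by the Sellke et al.
    (2001) calibration p(x|B)/p(x|S) = -e * p * ln(p), valid for p < 1/e."""
    p = np.asarray(p, dtype=float)
    if np.any(p >= 1.0 / np.e):
        raise ValueError("calibration only valid for p < 1/e")
    return -1.0 / (np.e * p * np.log(p))

print(sellke_signal_to_background(2e-3))  # ~29.6
```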
Detection probability
For a given redshift and neutrino luminosity, the probability of detection depends both on the identification of the source through electromagnetic observations and on the detectability of the neutrino signal. Electromagnetic identification determines the completeness $C(z, L_\gamma)$ of a source catalog, where $L_\gamma$ is the electromagnetic luminosity of the source. We assumed here that the neutrino luminosity is proportional to the electromagnetic luminosity of the source, i.e. $L_\nu \propto L_\gamma$. The detectability of the neutrino signal depends on multiple factors, including the neutrino flux and spectrum at Earth, detector sensitivity, and the number density of the source population. In the limit of rare sources, even a single neutrino will be sufficient to identify a source. In this case, detectability will be the probability that a single neutrino is recorded. To obtain this probability for a given source flux, we adopted the expected number of detected neutrinos for a given flux presented by IceCube Collaboration (2016). For a source spectrum $dN_\nu/d\varepsilon_\nu \propto \varepsilon_\nu^{-2.5}$, the flux density corresponding to an expected 1 detected neutrino is $1.2 \times 10^{-17}$ GeV$^{-1}$ cm$^{-2}$ s$^{-1}$ $(\varepsilon_\nu/100\,\mathrm{TeV})^{-2.5}$ $(T_{\rm obs}/1\,\mathrm{yr})^{-1}$.
For more common sources, detectability can be approximated with a flux threshold F ν,th such that all those -and only those -sources with flux F ≥ F ν,th are detected. Below we adopt F ν,th from IceCube's 10-year, 90% confidence-level median sensitivity based on Aartsen et al. (2020a), integrated within [100 GeV, 100 PeV]. For E −3 and E −2 spectra this sensitivity is F ν,th,3 = 2×10 −10 erg cm −2 s −1 and F ν,th,2 = 10 −11 erg cm −2 s −1 , respectively. To obtain the sensitivity for an E −2.5 spectrum, we consider the geometric mean of F ν,th,3 and F ν,th,2 , obtaining F ν,th,2.5 = 2 × 10 −11 erg cm −2 s −1 . These values are valid for the northern hemisphere. We account for the fact that IceCube is much more sensitive towards the northern hemisphere by introducing an effective factor of 2 reduction in source completeness.
To characterize the dependence of our results on these approximate sensitivity thresholds, we can look at Eqs. 4 & 8, which show in the case of our simplified model that the results are linearly dependent on the thresholds. This can be somewhat mitigated in our full model by the overall constraint that the total flux is less than the IceCube flux.
Source types
Here we introduce the source properties used for the analysis.
Active galactic nuclei
One active galactic nucleus (AGN), NGC 1068, has been identified as a likely neutrino source at 2.9σ post-trial significance (Aartsen et al. 2020a). We use this one detection, with p-value $p = 2 \times 10^{-3}$. We adopt the cosmic number density and luminosity function $\rho_{\rm AGN}(z, L_\gamma)$ for AGNs from the Spitzer mid-infrared AGN survey (Lacy et al. 2015), with threshold $L_\gamma \geq 10^{41}$ erg s$^{-1}$. We consider AGNs detected as neutrino sources if their neutrino flux is above the threshold $F_{\nu,\rm th}$. This flux threshold is adopted based on the measured (90% C.L. median) sensitivity of IceCube's 10-year search (Aartsen et al. 2020a). We use the typical sensitivity at the northern hemisphere given that IceCube is much more sensitive in this direction, and adopt a factor of 2 reduction in the completeness of the AGN catalog. For a source with neutrino spectral index 2, this corresponds to $F_{\nu,\rm th,2} = 6 \times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$ considering neutrino energies $\varepsilon_\nu \in [100\,\mathrm{GeV}, 100\,\mathrm{PeV}]$. For neutrino spectral index 3 we find $F_{\nu,\rm th,3} = 2 \times 10^{-10}$ erg cm$^{-2}$ s$^{-1}$.
In the detection process, only a few AGNs have been used in neutrino searches, selected based on their γ-ray brightness (Aartsen et al. 2020a). This limitation ensured that the trial factor in the search remained low. We take this into account by limiting the completeness of our simulated catalog to the 10 brightest sources on the northern hemisphere, i.e. $p_{\det}$ is set to zero for sources with $z$ and $L_\gamma$ for which the electromagnetic flux at Earth is below a threshold ($10^{-9}$ erg cm$^{-2}$ s$^{-1}$) such that 10 sources are expected.
Blazars
For blazars we consider 3 detections (see Table 1). We adopt the cosmic number density and radio luminosity function $\rho_{\rm blazar}(z, L_\gamma)$ for FSRQs from Mao et al. (2017), with threshold $L_\gamma \geq 10^{40}$ erg s$^{-1}$. We consider a blazar detected if a single extremely high energy neutrino with energy above 100 TeV is detected from it. For a source with spectral index 2, one extremely high energy neutrino is expected to be detected from a source in a random direction (IceCube Collaboration 2016) for a neutrino flux of $F_{\nu,\rm single,2} = 4 \times 10^{-10}$ erg cm$^{-2}$ s$^{-1}$ (1 yr/$T_{\rm obs}$) in the neutrino energy range $\varepsilon_\nu \in [100\,\mathrm{TeV}, 100\,\mathrm{PeV}]$. For spectral index 3 we also have $F_{\nu,\rm single,3} = 4 \times 10^{-10}$ erg cm$^{-2}$ s$^{-1}$ (1 yr/$T_{\rm obs}$). Similarly to AGNs, we limit the number of blazars in our search catalog. IceCube's 10-year catalog search included about 100 blazars (Aartsen et al. 2020a); we therefore adopt the same number here.
TDEs
The detection of one TDE, AT2019dsg, has been reported so far in Stein et al. (2021), which we include in this analysis. We adopt the cosmic rate density of TDEs from Sun et al. (2015), using a minimum TDE luminosity of L γ = 10 44 erg s −1 . We consider the rate and evolution of all TDEs, i.e. we do not require the presence of TDE jets. For simplicity, we treat all TDEs as having identical neutrino luminosity and detectability.
Given the regular electromagnetic follow-up of very-high-energy neutrinos released publicly by IceCube, we will consider the catalog of TDEs complete out to the distance at which they can be found through electromagnetic observations. Given the distance range of past identified TDEs (van Velzen et al. 2021), we set their detectable distance at $r_{\max,\mathrm{TDE}} = 1$ Gpc.
We can compute the total fluence needed for the expected detection of a single neutrino analogously to blazars, but using fluence instead of flux. For a source with spectral index 2, we find a neutrino fluence of $S_{\nu,\rm single,2} = 0.1$ erg cm$^{-2}$ in the neutrino energy range $\varepsilon_\nu \in [100\,\mathrm{TeV}, 100\,\mathrm{PeV}]$. For spectral index 3 we also have $S_{\nu,\rm single,3} = 0.1$ erg cm$^{-2}$.
RESULTS
We computed the expected cosmic flux for AGNs, blazars, and TDEs using the above prescription. To understand the statistical uncertainty of our results, we obtained the flux probability densities for the three cases based on the probability density p(N det | x) in Eq. 9. Flux probability densities are computed similarly to Eq. 12 for AGNs and blazars, and Eq. 13 for TDEs, but without marginalization over N det . Specifically, in this step we compute the probability densities ∂F ν,S,cont. /∂N det and ∂F ν,S,trans. /∂N det by carrying out the source simulation with different N det values and then converting the array of results into a distribution.
The results are shown in Fig. 2. We also list the expected values and 90% credible intervals in Table 2. We see that AGNs and TDEs have comparable expected fluxes, while the expected flux from blazars is about a factor of 3 lower than these. We also see that, due to the low number of detections so far, the expected flux has considerable uncertainties.

Figure 2 (caption fragment): ... (Aartsen et al. 2020b), and the estimated flux for the three source types using our simple model (Section 3; vertical dotted lines). We assumed an $E^{-2.5}$ astrophysical neutrino spectrum (Aartsen et al. 2020b).

Table 2: Estimated cosmic neutrino flux as a fraction of IceCube's total measured flux ($\phi_{\rm IC}$). Results are shown for the "warm-up" (Section 2), "simple" (Section 3) and "full" Bayesian (Section 4) models, for AGNs, blazars and TDEs, and the total estimated flux from unknown source types. Error bars indicate the 90% credible interval. For our "simple" model we also show upper limits for GRBs, indicating that non-detection presents a very strict constraint on their allowed contribution to the overall flux due to their very low rate density, and for core-collapse supernovae (CCSNe), showing that non-detection does not meaningfully constrain their contribution to the overall neutrino flux due to their high rate density.
To obtain a pie chart of total fluxes, we look at the properties of the cosmic quasi-diffuse neutrino flux. IceCube detections are consistent with an astrophysical flux following a power-law distribution with spectral index of 2.53 ± 0.07 (Aartsen et al. 2020b). Therefore, we consider a neutrino spectrum that scales as $E^{-2.5}$ with neutrino energy $E$. Our results would be similar to those presented below if we adopted a somewhat softer spectrum, $E^{-3}$, that best fits the energy distribution of the highest energy neutrinos (Abbasi et al. 2020).
We combine our results together for blazars, AGNs, and TDEs into a pie chart that shows the expected relative abundance of these three source types in the overall flux of high-energy neutrinos IceCube is detecting. The obtained pie chart is shown in Fig. 3.
Expected contribution from other source types
Based on the probability densities of AGNs, blazars and TDEs shown in Fig. 2, we computed the expected contribution from other (unspecified) source types. For this we considered the AGN, blazar, and TDE probability densities to be independent and computed the probability density of their combined neutrino flux, with the boundary condition that the total cannot exceed IceCube's measured total flux. We found that the unknown sources represent at least 10% (1%) of IceCube's total flux with 80% (98%) probability. This fraction could be even higher given our (upper bound) estimation of the probability density ratio in Eq. 14.
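The combination step described above can be sketched with a simple Monte Carlo. The distributions below are crude lognormal stand-ins for the per-type posteriors of Fig. 2 (medians loosely matched to the values quoted in the Conclusion); the width, the total-flux normalization, and the seed are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000
phi_ic = 5e-8  # assumed total IceCube flux, erg cm^-2 s^-1 (illustrative)

def draw(median, sigma_ln):
    # lognormal stand-in for a per-source-type flux posterior
    return rng.lognormal(np.log(median), sigma_ln, n)

f_agn, f_blz, f_tde = draw(2.1e-8, 0.7), draw(3.9e-9, 0.7), draw(1.8e-8, 0.7)

total = f_agn + f_blz + f_tde
keep = total <= phi_ic              # boundary condition: sum <= measured flux
f_other = phi_ic - total[keep]      # flux left over for unidentified types

print("P(other >= 10% of phi_IC) =", (f_other >= 0.1 * phi_ic).mean())
```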
We note that this computation only accounts for the observed associations. It does not take into account independent source constraints or theoretically expected source energetics (Murase & Fukugita 2019) or emission efficiencies (Murase et al. 2020a). It is therefore interesting to compare our results with alternative expectations, which are largely consistent with our results within uncertainties.
INDEPENDENT OBSERVATIONAL LIMITS
While the present work computes fractional source contributions from associated events, source contribution limits have been previously derived through independent observing strategies.
A main strategy is to use the non-detection of very-high-energy neutrino multiplets to constrain the flux of different source types (Kowalski 2015; Murase & Waxman 2016; Capel et al. 2020). The lack of such multiplets particularly limits transients, such as GRBs and TDEs, and rare source types, such as blazars. These source types are excluded as the dominant sources of the observed quasi-diffuse neutrino flux.
A stacking analysis found that TDEs cannot contribute more than 27% of the total diffuse astrophysical neutrino flux at 90% confidence level (Stein 2019).
The contribution of known blazars (those in Fermi's second catalog) to the total neutrino flux between 10 TeV and 2 PeV was limited by a stacking search to less than 27%, assuming a neutrino spectrum with index −2.5 (Aartsen et al. 2017).
Stacked searches have also been carried out for Type Ibc core-collapse supernovae (Senno et al. 2018;Esmaili & Murase 2018), but these do not rule out these sources as major contributors to the IceCube flux.
CONCLUSION
We computed the expected total high-energy neutrino flux at Earth from AGNs, blazars, and TDEs based on the associations of individual sources with astrophysical neutrinos detected by IceCube, IceCube's sensitivity, and the astrophysical properties and distributions of the three source types. We first carried out a simple derivation of the expected neutrino flux in order to demonstrate how the results scale with the properties of the detections, IceCube and the sources. We then carried out a more detailed derivation that accounts for the statistical uncertainty of the detection process, varying neutrino luminosity within a source type, and the cosmic evolution of source densities and properties. Our conclusions are as follows:

• Despite having detected more blazars with neutrinos than AGNs or TDEs, blazars are expected to be the smallest contributor to the cosmic neutrino flux. We found their contribution to be $3.9^{+3.7}_{-2.6} \times 10^{-9}$ erg s$^{-1}$ cm$^{-2}$ (error bars indicate the 90% credible interval), or a contribution that is < 11% of the total quasi-diffuse neutrino flux detected by IceCube (at the 90% credible level). This relatively small contribution is due to the fact that blazars are rare, making them much easier to identify through multi-messenger searches than a more common source type with similar total flux contribution. Significant contribution from low-luminosity blazars, nevertheless, could increase their fractional contribution (Palladino et al. 2019).
• AGNs and TDEs represent similar overall contributions. We estimated the AGN flux to be $2.1^{+1.8}_{-1.6} \times 10^{-8}$ erg s$^{-1}$ cm$^{-2}$, while for TDEs the estimated flux is $1.8^{+1.8}_{-1.4} \times 10^{-8}$ erg s$^{-1}$ cm$^{-2}$. Either AGNs or TDEs could be the majority source of cosmic high-energy neutrinos. Their most likely contribution is about 1/3 of the total flux each.
• One or more so-far-unidentified source types also likely contribute to the overall flux. We estimate their contribution to be $1.7^{+2.3}_{-1.5} \times 10^{-8}$ erg s$^{-1}$ cm$^{-2}$.

The above results only accounted for information in source types with neutrino associations, i.e. our pie chart does not consider the breakdown of "other" source types, or more detailed prior theoretical expectations from promising source types such as starburst galaxies, galaxy clusters or supernovae. We also do not fold in information from observational limits from stacked or other searches. Further sources of uncertainty include the assumed neutrino spectrum, which may be different, and possibly not a power law, for different source types (Palladino 2018). Despite these limitations, our results are broadly consistent with other observational and theoretical expectations within uncertainties.
The diversity of neutrino sources that apparently contribute to the diffuse flux, and the further possibility of unidentified classes of neutrino sources that have remained hidden so far, make future observations, and next-generation neutrino observatories such as IceCube-Gen2 (Aartsen et al. 2020c) or KM3NeT (Adrián-Martínez et al. 2016), particularly interesting. The discovery of more AGNs, blazars and TDEs, and the better resolution of the astrophysical high-energy neutrino spectrum, will also be key in enabling the characterization of unidentified source types even if they remain undetected through electromagnetic observations.
Cervical brucellar spondylitis causing incomplete limb paralysis
Brucellosis, a zoonosis with worldwide distribution, is a systemic infection caused by bacteria of the genus Brucella. Brucellosis often causes complications such as osteoarticular involvement, of which spondylitis is the most prevalent and important clinical form. Here we present a case of cervical brucellar spondylitis causing incomplete limb paralysis in a middle-aged male. The diagnosis was based on the clinical history and supported by Brucella serology and magnetic resonance imaging. Quadruple antibacterial treatment was continued for four weeks. In this case, the epidural abscess causing spinal cord compression resolved without surgery. In addition, the patient recovered from most of the neurologic deficits.
INTRODUCTION
Brucellosis, a zoonosis that has spread to many countries, is caused by facultative intracellular bacteria of the genus Brucella. Brucella species are transmitted by means of direct contact with infected animals or consumption of products from infected animals 1. Brucella is endemic in the Mediterranean Region, the Middle East, and parts of Africa and Latin America 2. Each year, half a million new cases are reported worldwide, but according to the World Health Organization, this number is greatly underestimated. At the same time, the situation is not optimistic in China, where the incidence is reported to be 0.59 per year per 100,000 population 3. Brucellosis affects the whole body, including various human organs, the musculoskeletal system and the reticuloendothelial system 4. Symptoms related to the musculoskeletal system are commonly seen, and the most frequently detected complications are peripheral arthritis, sacroiliitis, and spondylitis 5. Up to now, the diagnosis and treatment of vertebral involvement have remained rather difficult. Brucellar spondylitis has rarely been reported in China, and reports of cervical brucellar spondylitis causing incomplete limb paralysis are particularly rare. Cervical brucellar spondylitis is a rare and serious clinical presentation which may cause irreversible complications. A case of cervical brucellar spondylitis causing incomplete limb paralysis is presented in this report, and its diagnosis and treatment are discussed with reference to other reports.
CASE REPORT
A 47-year-old middle-aged man was admitted to the Infectious Diseases Department of our hospital because of incomplete limb paralysis. The patient was employed as a breeder on a livestock farm. Moreover, he usually consumed dairy products without disinfection or sterilization. Two weeks before admission, the patient had developed symptoms of sweating, fever and fatigue. At the local hospital, the laboratory conducted the Rose-Bengal test, the result of which was positive (+++), and the erythrocyte sedimentation rate (ESR) and C-reactive protein (CRP) were elevated. During his hospitalization, the patient gradually experienced limb weakness and then came to our hospital for further treatment.
Physical examination of the patient revealed pain and restriction during cervical movements. He had difficulty in standing up straight because of incomplete paralysis and decreased limb muscle strength (3-). His body temperature was 37.5 °C. Initial laboratory values were as follows: white blood cell count: 7,600/mm3, hemoglobin: 13.5 g/dl, platelet count: 237,000/mm3, ESR: 86 mm/h, CRP: 55 mg/L, and a normal blood biochemistry profile. The Rose-Bengal test applied to the patient was positive (+++) and the Brucella agglutination test (Wright test) was positive at a titer of 1:320. However, Brucella species were not isolated from repeated blood cultures. Repeated acid-fast bacilli sputum stains and culture for tuberculosis yielded negative results. Cervical enhanced magnetic resonance imaging (MRI) showed C5-6 vertebral body and intervertebral disc destruction, a posterior epidural abscess in the spinal canal, low signal on T1WI, high signal on T2WI and fat-suppressed T2WI, marked enhancement after contrast infusion, and significant pressure on the spinal cord (Figure 1). Meanwhile, thoracic and lumbar MRI showed no obvious abnormalities.
The patient was diagnosed as having cervical brucellar spondylitis with an epidural abscess causing incomplete limb paralysis. Quadruple antibiotic therapy was started with oral doxycycline 200 mg/day and rifampicin 600 mg/day, and intravenous drip levofloxacin 500 mg/day and cefotaxime sodium 6.75 g/day. Antibacterial treatment was continued for four weeks. A surgical intervention was planned for excision of the lesion and reduction of the epidural abscess. However, there was an obvious improvement in local symptoms and signs, and he started to experience an improvement in limb muscle strength (4+). Also, in the control examination of the patient, it was found that the white blood cell count was 6,500/mm3, the ESR was 29 mm/h, and CRP was 10.6 mg/L. The Rose-Bengal test was (+) and the Wright test yielded a titer of 1:160. The size of the posterior epidural abscess in the spinal canal had decreased compared with earlier imaging. T1WI showed a slightly lower signal; T2WI and fat-suppressed T2WI showed a mixed high signal (Figure 2). His treatment was continued with oral doxycycline 200 mg/day and rifampicin 600 mg/day for three months after discharge.
At the 1-year follow-up visit, he had experienced a further improvement in limb muscle strength (5+). Also, in the control examination of the patient, it was found that the white blood cell count was 5,600/mm3, the ESR was 7 mm/h and CRP was 6.7 mg/L. The Rose-Bengal test was (+) and the Wright test yielded a titer of 1:80. Cervical enhanced MRI showed that the C5-6 vertebral body and intervertebral disc destruction had normalized, the posterior epidural abscess in the spinal canal had disappeared, and no obvious compression was seen in the spinal cord (Figure 3). Thoracic and lumbar MRI showed no obvious abnormalities. The patient had recovered from most of the neurologic deficits.
DISCUSSION
Brucellosis is a multisystemic disease transmitted to humans by consumption of unpasteurized milk and dairy products from infected cows. There are two forms of spinal brucellosis: focal and diffuse. In the focal form, the organism becomes localized in the anterior aspect of the superior endplate, causing bony destruction in a small area. In the diffuse form, the infection spreads throughout the involved vertebra and to the adjacent vertebrae. The spine is the most frequently involved bony structure in brucellosis (2-53%) 3. Brucella spondylitis was first described by Kulowski and Vinke, and is known as one of the most serious complications of brucellosis 6. Brucella spondylitis involves the lumbar region most frequently, and it may also be seen in the thoracic and cervical regions. Ulu-Kilic A et al. evaluated the efficacy and tolerability of antibiotic regimens and the optimal duration of therapy in complicated and uncomplicated forms of spinal brucellosis, providing an update on treatment options for spinal brucellosis 7. It was found that there were no significant therapeutic differences and the results were similar regarding the complicated and uncomplicated groups. Meanwhile, medical management is usually the basis for treatment, alone or in combination with surgery. Guerado E et al. considered a surgical approach, either by endoscopy or as open surgery, if the clinical evolution is unsatisfactory, if a biopsy is needed and no micro-organism has been isolated, and also whenever root, spinal cord or dural compression is seen on MRI. Spinal instability or severe deformities are also clear indications for surgical treatment 8.
Treatment of an epidural abscess usually consists of prompt surgical drainage and antimicrobial therapy. Nonsurgical treatment should be reserved for the few patients who carry considerable surgical risk, for those who have no neural tissue damage or significant compression, or for those who have complete paralysis lasting for longer than 3 days. Brucellar epidural abscess occurs primarily during the subacute-chronic phase of the disease process, and Brucella responds well to antibiotics, which should be administered in the required doses and continued until complete recovery 9. There is no consensus on the selection of appropriate antibiotics or the treatment duration in patients with brucellar spondylitis and epidural abscess. However, it is generally accepted that short-term monotherapy is associated with a high relapse rate (30%) 10. It is recommended that antibiotic therapy should be maintained from 6 weeks to 1 year 11. Although such cases have been managed with surgery in previous reports, we treated our patient without surgical therapy because he had no aggravating neurologic deficits due to the spinal cord compression caused by the epidural abscess.
The principal mode of treatment of brucellar spondylitis is conservative, namely, immobilization of the affected area and antimicrobial therapy. An open biopsy of the cervical vertebral body or epidural abscess should be indicated when the differential diagnosis of spondylitis cannot be made by conventional diagnostic methods. In the vast majority of reports, surgery for decompression of the spinal cord or nerve roots is considered essential in cases with neurological deficits. However, antimicrobial chemotherapy is essential, and in the presence of a good response it may lead to the avoidance of surgery.
FIGURE 1: Cervical enhanced MRI showed C5-6 vertebral body and intervertebral disc destruction, the posterior epidural abscess in the spinal canal, low signal on T1WI, high signal on T2WI and fat-suppressed T2WI, marked enhancement after contrast administration (a-d), and significant pressure on the spinal cord (e-l).
FIGURE 2: Cervical enhanced MRI showed destruction of the C5-6 vertebral bodies and intervertebral discs (a-d). The posterior epidural abscess in the spinal canal was reduced compared with earlier imaging. T1WI showed a slightly lower signal; T2WI and fat-suppressed T2WI showed a mixed high signal with enhancement, and reduced spinal cord pressure (e-l).
FIGURE 3: Cervical enhanced MRI showed that the C5-6 vertebral body and intervertebral disc destruction had basically returned to normal (a-d), the posterior epidural abscess in the spinal canal had disappeared, and no obvious compression was seen in the spinal cord (e-l).
The Impact of Nutrition and Intestinal Microbiome on Elderly Depression—A Systematic Review
The aim of this review is to systematically review the evidence on whether proper nutrition has a positive impact on the prevention or decline of depressive symptoms among elderly people. In addition, possible connections between nutrition, the microbiome, and serotonin and its tryptophan precursor are discussed. The methodology follows the PRISMA guidelines, including the PRISMA flow chart. The authors systematically reviewed peer-reviewed, English-written articles published in Web of Science and PubMed between 2013 and 2018. The findings of six original articles, selected according to the set inclusion and exclusion criteria, indicate that there is an association between nutrition and depressive symptoms in the target group, i.e., that proper nutrition has a positive impact on the prevention or reduction of depressive symptoms among elderly people. The findings also reveal that there is a considerable correlation between the intake of vitamin B and a decrease in the prevalence of depressive symptoms. Furthermore, sufficient nutrient intake of tryptophan appears to be an important factor in terms of nutrition and serotonin levels in the body. The authors consider it important to explore associations between overall dietary intake and depression since diets are not consumed as individual nutrients. Returning to preventive approaches seems to be a rational way to promote the mental health of seniors. Future studies thus need to include interdisciplinary collaboration: from a good diagnosis of the disease by a psychiatrist, through an analysis of the needs of nutrient metabolism by a biochemist, to the development of a nutritional plan by a nutritional therapist. The limitations of this review consist in the relatively small number of studies on this topic, including just a few randomized controlled trials, which are a guarantee of efficacy and objectivity in comparison with cross-sectional studies.
Introduction
At present, the global population is aging. One of the most serious problems of the aging population is depression. In fact, depression together with dementia is the most common disorder in this age group. It affects between 5% and 7% of the world's population [1]. Depression can be perceived as a mood disorder that results in a continuous feeling of melancholy, apathy, and indifference. In severe cases, it can result in suicide, especially in elderly white people [2,3]. The suicide rate is two times higher among elderly people at the age of 80-84 years than in the general population. Therefore, the National Institute of Mental Health considers depression in people at the age of 65 and over to be a major public health problem [4]. Depression also seems to be one of the key economic issues as far as healthcare costs are concerned. By 2030, depression is estimated to be the principal cause of disease burden in middle- and higher-income countries. Depression in later life is connected with frailty, a higher death rate, and poorer outcomes from physical illness [5].
Methods
The methodology follows the PRISMA guidelines. The authors systematically reviewed peer-reviewed, English-written articles published in Web of Science and PubMed between 2013 and 2018, since several review studies [23][24][25][26] on this topic had been published before. The searched collocations included the following words: nutrition and elderly with depression, nutrition and older people with depression, diet intervention and elderly with depression, dietary deficiencies and elderly with depression, diet intervention and older people with depression, dietary deficiencies and older people with depression, vitamin intake and elderly with depression, microbiome and elderly with depression, microbiome and older people with depression. The terms were searched using 'and' to combine the listed keywords and using 'or' to remove search duplication where possible. In addition, a backward search was also performed, i.e., the references of detected studies were evaluated for relevant research studies that the authors might have missed during their search. In addition, a Google search was conducted in order to detect unpublished (gray) literature.
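As an illustration of how such search strings can be combined programmatically, the following sketch builds the boolean queries from the collocation terms above. The field tags and date syntax shown are PubMed-style assumptions, not the authors' exact queries.

```python
# Illustrative sketch: building the boolean search strings described above.
# Treating each collocation as '<nutrition term> AND <population term> AND
# depression', and the PubMed date-filter syntax, are assumptions.
nutrition_terms = ["nutrition", "diet intervention", "dietary deficiencies",
                   "vitamin intake", "microbiome"]
population_terms = ["elderly", "older people"]

queries = []
for n in nutrition_terms:
    for p in population_terms:
        # 'AND' combines the keywords; 'OR' logic and manual checks are used
        # later to remove duplicates across collocations.
        queries.append(f'("{n}") AND ("{p}") AND ("depression")')

# Restrict to the review period covered by the search (2013-2018).
date_filter = (' AND ("2013/01/01"[Date - Publication] : '
               '"2018/12/31"[Date - Publication])')
full_queries = [q + date_filter for q in queries]
print(full_queries[0])
```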
The authors performed an independent quality assessment of these studies. They read the articles to assess eligibility and to determine the quality. The basic quality criteria were adequately described study design, participant characteristics, control conditions, outcome measures, and key findings, with special focus on statistically significant differences ( Table 1). The authors selected these basic quality criteria using Health Evidence Quality Assessment Tool for review articles.
The primary outcomes of this review were as follows:
• to explore the impact of nutrition and intestinal microbiome on elderly depression;
• to discuss possible associations between nutrition, microbiome, and the serotonin molecule and its tryptophan precursor.
Results
Altogether, 470 titles/abstracts were detected by using the collocation words described above in the database/journal searches. The majority of articles were found in Web of Science (297 studies). In PubMed, 173 studies were found. Another six articles were detected from additional sources, most often from the references of the identified studies. After removing duplicates and titles/abstracts unrelated to the research topic, 142 English-written studies remained. Of these, only 62 articles were relevant for the research topic. These studies were investigated in full and considered against the following inclusion and exclusion criteria. The inclusion criteria were as follows:
• The article was published between January 1, 2013 and December 31, 2018.
• Only peer-reviewed, English-written full-text journal articles were involved.
• Only randomized controlled trials and experimental/cross-sectional studies were included.
• The primary outcome focused on the association of nutrition and depressive symptoms.
• The subjects had to be at the age of 50+ with or without depression.
Considering the above described criteria, six articles were eventually included into the final analysis. Figure 1 below describes the selection procedure.
The authors altogether identified six original articles on the research topic. Only two studies [35,36] were randomized controlled trials (RCTs) and four studies [37][38][39][40] were cross-sectional studies. Three of these studies originated in Asia (Malaysia, Japan, and South Korea) [36,39,40], one in Australia [35], and two in Europe (Croatia, Norway) [37,38]. Two studies [38,39] examined the relationship between vitamin intake and depressive symptoms among elderly people; one study [35] examined the association between ω-3 fatty acid supplements and their effect on elderly at risk for depression; one study [37] concentrated on the connections between psychological distress and diet patterns; one study [36] explored a fasting and calorie restriction dietary regime and its impact on mood in aging people; and one study [40] examined the predicting factors of depression, including nutrition, in elderly people. The subject samples ranged from 31 to 11,621 older people, with or without depressive symptoms. The subjects were usually at the age of 65+ years, apart from one study [36]. The assessments were conducted by standardized methods, which comprised the Geriatric Depression Scale, statistical analyses, and questionnaires. The main strengths of the detected research studies are their effort for objective assessment. However, the limitations include differences in the methodologies of the selected studies and particularly the fact that cross-sectional studies cannot establish causal connections between diet patterns/nutrient intakes and depressive symptoms among elderly people. In some studies, there were small sample sizes, insufficient representativeness, or a short duration of the RCT.
Nevertheless, the results of all studies indicate that there is an association between the nutrient intake and depressive symptoms among elderly people. This means that a proper nutrient intake may have a positive impact on the prevention/reduction of depressive symptoms in the aging population. Table 1 below gives a summary of the key results from the selected articles. The findings are outlined in alphabetical order of their first author.

Table 1 (excerpt). Summary of the selected studies.
• Study [39], cross-sectional study (Japan). Aim: to explore the association between vitamin consumption and depressive symptoms in Japanese adults. Intervention: irrelevant for this type of study. Sample: 1634 subjects at the age of 65+ years. Methods: self-administered questionnaires, interviews, comprehensive health examination, Geriatric Depression Scale, statistical analysis. Key findings: there are connections between vitamin deficiencies and depressive symptoms in women and in overweight older adults. Limitations: not an RCT (no assessment of causality); self-reported assessments might be biased.
• Park et al. [40], cross-sectional study (South Korea). Aim: to examine the dominant and predictive factors of depression in older Korean people. Intervention: irrelevant for this type of study. Sample: 258 subjects at the age of 65+ years. Key findings: predictors were, in men, deficient protein consumption and suffering from more chronic diseases; in women, deficient vitamin B6 consumption, lower cognitive functions, and higher social isolation. Limitations: the same as above, plus limited representativeness of the sample.
Discussion
The findings of all six studies described in Table 1 indicate that there is an association between nutrition and depressive symptoms in the target group, i.e., that proper nutrition has a positive effect on the prevention or reduction of depressive symptoms among elderly people. In other words, an unhealthy nutrient intake is considered to be a modifiable risk factor for depression among elderly people [36]. The findings also reveal that there is a significant correlation between the intake of vitamin B and a decrease in the prevalence of depressive symptoms [38][39][40]. This has also been confirmed by other research studies [41,42].
Furthermore, depressive symptoms are connected with increased oxidative stress [43], which, as the findings of this review showed, could be reduced by ω-3 fatty acid supplements [35]. ω-3 fatty acids are especially essential for the maintenance of cell structure and have anti-inflammatory effects. Duffy et al. [35], in their study, showed that ω-3 fatty acid supplements may prevent the worsening of subclinical depressive symptoms. On the one hand, a protective effect of ω-3 supplementation is described, but on the other hand, it appears that polyunsaturated fatty acids (PUFAs, ω-3 and ω-6) may also play an important role in the pathophysiology of depression. Polyunsaturated fatty acids are involved in the structural and functional regulation of neurons, glial cells, and endothelial cells in the brain. PUFAs undergo enzymatic metabolism in the body (cyclooxygenase, lipoxygenase, cytochrome P450) to form other bioactive molecules [44]. It is generally accepted that neuro-inflammation plays an important role in the pathophysiology of depression. Recent studies have linked inflammation with polyunsaturated fatty acids, or more precisely with their metabolites. Soluble epoxide hydrolases (sEH), which are commonly present in all organisms, metabolize epoxy fatty acids with an anti-inflammatory effect to the corresponding 1,2-diols with pro-inflammatory properties [45,46].
The results of Hussin et al.'s [36] study also show that a fasting and calorie restriction dietary regime (with the retention of protein, vitamin, mineral, and water intake) is effective in improving mood states and nutritional status. Hussin et al. [36] also claimed that fasting is associated with increased brain availability of neurotrophic factors and of serotonin, whose deficiency is often connected with depression [11,47,48].
The findings also point out gender differences in food preferences [37,49]. Older females seem to have a healthier diet pattern than older males, and therefore, they are less psychologically distressed. Nevertheless, interestingly enough, depression is more common in females than males, which might be associated with stressful life events and higher sensitivity to seasonal changes [50].
In addition, the link between nutrition, the human intestinal microbiome, the enteric nervous system, the central nervous system, and depression could be significant, with serotonin or its precursor tryptophan as the signaling molecule, regardless of age. The composition of the diet is known to have a direct effect on the microflora and thus on tryptophan [51,52]. Tryptophan cannot be created by the human body itself; its presence is fully dependent on dietary intake. One of the functions of tryptophan in the human body is to serve as a precursor for the synthesis of the neurotransmitter serotonin. Interestingly, this conversion of tryptophan to serotonin occurs predominantly in the intestine, in the enterochromaffin cells of the intestinal mucosa; the vast majority of serotonin is therefore found not in the CNS but in the gastrointestinal tract [53].
Serotonin is one of the neurotransmitters and, in addition to its well-known effects in the CNS, it also acts in the gastrointestinal tract (GIT) through the serotonin receptors present there [54]. In the GIT, it is mainly involved in controlling secretion, vasodilation, peristalsis, pain perception, and nausea. Serotonin levels vary depending on plasma tryptophan levels and on the current availability of tryptophan in the diet [55]. In addition, the gut-brain axis plays an important role, as recent research has revealed: tryptophan and serotonin metabolism in mice could be influenced by administering Lactobacillus reuteri, and the authors describe an antidepressant effect even in this animal model [56].
There is another, serotonin-independent relationship between tryptophan and affective disorders. The second metabolic pathway of tryptophan in the human body leads to kynurenine, which is then metabolized along two different routes: either to kynurenic acid or to quinolinic acid [57]. Thus, in susceptible populations, a decrease in tryptophan plasma concentration can cause changes in mood such as depression [52].
Depression can be seen as a combination of disrupted regulations in the neuroendocrine, immune, metabolic, and neurotransmitter systems. Recent preclinical studies suggest that at least some of these processes may be modulated by the intestinal microbiota, or by its changes [58]. In addition to the generally accepted neurotransmitter serotonin, anxiety and depression may be associated with a disorder of another neurotransmitter GABA (gamma-aminobutyric acid) system. Strains of bacteria, such as Lactobacillus and Bifidobacteria, that can synthesize GABA from glutamate (an amino acid from food) are described. Lactobacillus rhamnosus has been shown to reduce anxiety and depressive behavior, while increasing GABA levels in the hippocampus [16,58].
Depression episodes may be also related to hypothalamic-pituitary-adrenal axis hyperactivity. It has been shown that the intestinal microbiota plays an important role in programming this axis in the early stages of life, but also generally in stress reactivity throughout life. The presence of intestinal microbiota at an early stage of life is very important for proper brain development, but the absence of intestinal bacteria during development may negatively affect the hypothalamic-pituitary-adrenal (HPA) axis [59].
Furthermore, antibiotics are widely used worldwide. They are effective in indicated cases but can also be potentially harmful drugs. Two recent studies administered antibiotics to animal models while monitoring the gut-brain axis: one observed the effect of the intestinal microbiome on anhedonia-like phenotypes in mice after chronic social defeat stress (CSDS) [60], and the other examined stress resilience using a rat learned helplessness paradigm [61]. The authors made very interesting findings: antibiotic-induced microbiome depletion contributed to resilience to anhedonia in mice subjected to CSDS, and an abnormal composition of the gut microbiota contributed to susceptibility versus resilience to learned helplessness in rats.
Another factor that may contribute to the development of depression is an increase in intestinal permeability with subsequent bacterial translocation. In depressed patients, inflammatory manifestations in the gastrointestinal tract may increase. Typically, elevated levels of inflammatory biomarkers, specifically IL-6, TNF-α, or C-reactive protein, can be detected [16,62,63]. The theory of the impaired intestinal barrier and depression is also supported by Maes et al. [64], who induced intestinal mucosal dysfunction through stress, thereby increasing translocation of gram-negative bacteria, which induced greater HPA axis activity, thereby promoting depression [16].
Immunological reactions also play an important role in affective disorders. Significantly higher serum concentrations of IgM and IgA (directed against the lipopolysaccharides of Gram-negative bacterial cell walls) were found in depressed patients compared to healthy controls [16,58,63]. It is likely that increased translocation of Gram-negative bacteria enhances the immune response.
Overall, it seems important to make people aware of the value of healthy nutrition in the prevention of depressive symptoms, since it is not only beneficial for their health-related quality of life but also cost-effective [65]. This can be done effectively through public health education or individual consultation with a nutritional specialist, as research illustrates [66,67]. The studies also show that combined multi-domain interventions (e.g., a healthy diet, physical exercise, and cognitive training) produce a bigger effect than single-domain interventions [67,68]. In addition, the most recent project, MooDFOOD [9], aims to provide insight into the causality of the link between diet and depression and its underlying pathways on the basis of both short-term and long-term research studies, and to transform the acquired knowledge into nutritional strategies that could serve as guidelines and practical tools to guide policy at EU and Member State levels.
As with physical disorders, mental disorders are affected by nutrition. In the context of an increasingly aging population, the cost to public health budgets is rising. Returning to preventive approaches seems to be a rational way to promote the mental health of seniors. Given that human nutrition is very easy to adjust by simply changing the diet (as compared to other risk factors for depression), one of the first steps should be implementing a sustainable and tolerable nutrition plan drawn up by an appropriate expert. It is well known that directive-based diets do not work in the long run and that people are unable to sustain them for life. The plan should be the result of close cooperation between the patient and a nutritional specialist focusing on the geriatric population. This is also because the evaluation of eating habits is very demanding in the elderly: these patients often have lower cognitive abilities, are burdened with polypragmasia, and are polymorbid.
Currently, the professional literature is struggling with a lack of quality studies on this topic, especially in the long term. Many studies lack a robust diagnosis of mental illness, and in others, eating habits have not been carefully evaluated. Longitudinal studies will be needed to explore these complex relationships. Future studies will need to include interdisciplinary collaboration: from a good diagnosis of the disease by a psychiatrist, through an analysis of nutrient metabolism needs by a biochemist, to the development of a nutritional plan by a nutritional therapist. The overall supervision of the patient in terms of polymorbidity and polypragmasia should be covered by a geriatrician. With adequate funding for such projects, the association between diet and depression in late age can be explained, and preventive approaches may reduce the impact on public budgets.
The limitations of this review include a relatively small number of studies on this topic, which might also be caused by selecting the studies within a certain time period. In addition, there were just a few RCTs, which are a guarantee of efficacy and objectivity in comparison with cross-sectional studies.
Conclusions
The findings of this review reveal that there is an association between the nutrient intake and depressive symptoms among elderly people. This means that a proper nutrient intake (e.g., vitamins or minerals) may have a positive effect on the prevention/reduction of depressive symptoms in the aging population. However, it is important to study associations between the overall dietary intake and depression since diets are not consumed as individual nutrients.
Dysfunctional intestinal microbiome may be associated with behavioral disorders such as depression. This connection stems from a recently characterized two-way communication channel between the intestine and the brain, mediated by neuroimmune, neuroendocrine, and sensory neural pathways. Serotonin or its precursor tryptophan seems to be an important molecule.
Furthermore, in the geriatric population, depression and malnutrition go hand in hand and appear to be interrelated. Depression leads to changes in appetite; appetite is often reduced, and the older population suffers from malnutrition, which only intensifies psychiatric symptomatology (depression, irritability, and stress). Dietary supplements containing, for example, vitamin D, folate, magnesium, zinc, and unsaturated fatty acids do not appear to improve depression in the elderly. It seems much more likely that a comprehensive assessment of a nutritional plan and setting up a suitable diet already at a younger middle age could be the right preventive pathway. This also applies in the case of malnourished elderly patients, where an intensified diet plan drawn up by a specialist and supervised, for example, by a carer, is needed. There is still a consistent recommendation that the Mediterranean diet and regular physical activity, together with non-smoking, are associated with reduced depressive symptoms later in life, while some dietary supplements (see above) can positively complement a healthy lifestyle.
Future research should include more randomized controlled studies, which would conclusively prove the efficacy of nutrition in the prevention of depressive symptoms.
Why are Some Games More Addictive than Others: The Effects of Timing and Payoff on Perseverance in a Slot Machine Game
Manipulating different behavioral characteristics of gambling games can potentially affect the extent to which individuals persevere at gambling, and their transition to problematic behaviors. This has potential impact for mobile gambling technologies and responsible gambling interventions. Two laboratory models pertinent to this are the partial reinforcement extinction effect (PREE) and the trial spacing effect. Both of these might speed up or delay the acquisition and extinction of conditioned behavior. We report an experiment that manipulated the rate of reinforcement and inter-trial interval (ITI) on a simulated slot machine where participants were given the choice between gambling and skipping on each trial, before perseverative gambling was measured in extinction, followed by measurements of the illusion of control, depression and impulsivity. We hypothesized that longer ITIs in conjunction with the low rates of reinforcement observed in gambling would lead to greater perseverance. We further hypothesized, given that timing is known to be important in displaying illusory control and potentially in persevering in gambling, that prior exposure to longer intervals might affect illusions of control. An interaction between ITI and rate of reinforcement was observed, as low-reinforced gamblers with a long ITI gambled for longer. Respondents also displayed extinction and a PREE. Gamblers exposed to a higher rate of reinforcement gambled for longer in acquisition. Impulsivity was associated with extended perseverance in extinction, and more depressed gamblers in the high reinforcement, short ITI group persevered for longer. Performance in the contingency judgment task failed to support the second hypothesis: the only significant contrast observed was that participants became better calibrated as the task progressed.
INTRODUCTION
The emergence of new gambling technologies comes with the concern that novel reinforcement schedules might increase the risk of harm to gamblers. Models of problem gambling assume there are a set of common behavioral and cognitive processes underpinning the development of addictive behavior (Blaszczynski and Nower, 2002;Sharpe, 2002). We report an experiment investigating the effects of partial reinforcement and timing on perseverative gambling behavior, as these may underpin part of the transition to problem gambling. Deficits in processing partial reinforcement have been previously observed in heavy gamblers (Horsley et al., 2012), while increasing inter trial intervals (ITIs) facilitates the acquisition of conditioned behavior (Gallistel and Gibbon, 2000). In this report we outline an experiment in which participants played on a simulated slot machine on which win frequency and ITI were manipulated between groups and perseverance in extinction was measured.
Delay, Trial Spacing, and ITIs
Increasing the interval between gambles might be instrumental in encouraging continued play and may be a component behind the popularity of certain games. Lottery games, for example, have extended delays between gambles and are often the most popular and frequently played games (Wardle et al., 2011). While this might be because lotteries are highly available (amongst numerous considerations), in some jurisdictions (e.g., the UK) other games are offered alongside lottery tickets (e.g., scratchcards), controlling for availability. Despite this, many more people play the lottery than similarly available games, and do so more frequently. However, the perceived risk of harm is very low, although it is unclear whether the 'addictiveness' of gambling lies in specific games (Afifi et al., 2014) or specific behavioral features (Griffiths and Auer, 2013). Some mobile video games exploit similar effects by enforcing delays between plays of gambling-like games. In-play betting, which is associated with mobile (Hing et al., 2014) and problem gambling (Gray et al., 2012; LaPlante et al., 2014), combines continuous and discontinuous play. Understanding the role of timing and latency on gambling behavior has important consequences for newer forms of gambling, such as mobile gambling (where in-play betting is heavily promoted), as the manner in which people use smartphones is likely to increase latencies between gambles. In-play refers to bets made on an event (e.g., a soccer match) while the event itself is occurring, whereas in traditional forms of betting the wager is made prior to the event. Griffiths and Auer (2013) argue in-play betting might be more addictive because it is more continuous. However, considerable discontinuities persist in play as betting remains constrained within an event. Real data on in-play betting (LaBrie et al., 2007) reveal mixed findings: although there is a clear risk of problem gambling, the findings do not decisively conclude this is because of its continuous nature; in-play gamblers placed fewer bets and there was little difference in daily levels of betting. Although in-play bettors wagered more money overall, the median wagered was lower than in traditional sports betting, and in-play bettors had a lower net loss. Gray et al. (2012) suggest the immediacy between wager and outcome may be instrumental in attracting risky or impulsive gamblers to in-play gambling.
The associative learning literature indicates that increased latencies between reinforcements facilitate acquisition of conditioned behaviors (Gallistel and Gibbon, 2000). Gallistel and Gibbon's (2000) timing model hypothesizes that a decrease in the ratio between reinforcements and ITI in classical and operant conditioning reduces the number of reinforcements to acquisition. This is claimed to be independent of partial reinforcement, which increases the number of trials but not reinforcements. The literature on the 'trial spacing' effect, primarily studied in the context of classical conditioning (Barela, 1999;Stout et al., 2003;Sunsay et al., 2004;Moody et al., 2006;Sunsay and Bouton, 2008;Miguez et al., 2014), has found that dispersed trials facilitate conditioning.
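The rate-comparison intuition behind this account can be made concrete with a minimal numeric sketch. All quantities below (trial duration, trial count, reinforcement count) are illustrative assumptions chosen to echo the ITIs used later in this experiment, not parameters from Gallistel and Gibbon's model fits.

```python
# Illustrative sketch of the rate-ratio intuition in Gallistel & Gibbon (2000):
# acquisition is assumed to depend on how much the reinforcement rate during
# trials exceeds the background (whole-session) rate. Numbers are hypothetical.

def rate_ratio(n_reinforcers, trial_s, iti_s, n_trials):
    trial_rate = n_reinforcers / (n_trials * trial_s)                # during trials
    background_rate = n_reinforcers / (n_trials * (trial_s + iti_s))  # session-wide
    # Note the reinforcement and trial counts cancel: ratio = (T + I) / T.
    return trial_rate / background_rate

for iti in (3, 10):  # short vs long ITI, as in the present experiment
    print(f"ITI {iti:>2}s -> rate ratio {rate_ratio(30, 4.5, iti, 100):.2f}")
# Longer ITIs yield a larger ratio, i.e., fewer reinforcements to acquisition.
```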
It is less clear whether greater latencies in extinction affect performance. Gallistel and Gibbon (2000) claim that the interval without reinforcement, rather than non-reinforcing events, is key, and that omitted reinforcements in extinction are unaffected by partial reinforcement. Other research has identified ITI effects on extinction, with greater suppression of responding observed with shorter ITIs (Mackintosh, 1974; Moody et al., 2006).
Timing is thought to be an important component of the illusion of control (Msetfi et al., 2005, 2007; Baker et al., 2010), a cognitive bias that is prevalent in problem gambling (Fortune and Goodie, 2012). Illusions of control, operationalized as an overestimation of the relationship between a response and outcome, can be induced using a contingency judgment task in which these events are unrelated but the outcome occurs very frequently. Standard examples of this task include a button-pushing task associated with the activation of a light (Alloy and Abramson, 1979), or a medical decision-making task judging the relationship between an experimental drug and patient improvement (Orgaz et al., 2013). The extent to which non-depressed individuals show illusions of control is affected by the latency between trials: longer ITIs are associated with stronger illusory control in non-depressed individuals (Msetfi et al., 2005). Problem gamblers show stronger illusions of control in contingency judgment paradigms (Orgaz et al., 2013), although the causal direction of this relationship remains unclear: extensive exposure to certain schedules of reinforcement might increase illusions of control, or individuals susceptible to illusions of control may be more likely to develop gambling problems. We included a task derived from the same paradigm as Orgaz et al. (2013), which participants were asked to complete after the slot machine task. We also measured depression, as depressed individuals appear to make more calibrated judgments in this paradigm (Alloy and Abramson, 1979) with a longer ITI (Msetfi et al., 2005). Disordered mood has also been identified as a potential pathway to problem gambling (Blaszczynski and Nower, 2002).
Partial Reinforcement Extinction Effect and Impulsivity
The partial reinforcement extinction effect (PREE) is a behavioral paradox in which weakly reinforced behaviors persist for longer without reinforcement relative to more consistently reinforced ones (Mackintosh, 1974; Bouton et al., 2014), such as during an extended period of losses in gambling (Dickerson, 1979; Fantino et al., 2005; Horsley et al., 2012). Partial reinforcement deficits have been identified in high-frequency gamblers, who take longer than recreational gamblers to extinguish these associations (Horsley et al., 2012), a change that might occur from chronic exposure to the schedules of reinforcement in gambling. Horsley et al. (2012) report that although partial reinforcement is hypothesized to be an important component in gambling, the evidence base is sparse. Failure to extinguish has been identified as a marker of problem gambling (Weatherly et al., 2004). Failure to extinguish also corresponds directly (e.g., unsuccessful efforts to stop gambling, gambling more than intended) or indirectly (e.g., chasing losses) to indicators for Gambling Disorder (American Psychiatric Association, 2013) or problem gambling (Lesieur and Blume, 1987).
It is unsurprising that the PREE has been linked with gambling, and considerable attention has been devoted to studying this in slot machines. Slot machines tend to have a very low rate of reinforcement (although this varies on computerized machines), and gamblers persevere in play despite mounting sequences of losses. There is a literature that has used slot machine tasks to probe the effects of partial reinforcement on operant learning. Lewis and Duncan (1956, 1957, 1958a) conducted a series of experiments using simulated gambling to test theories of partial reinforcement, finding that lower reward probabilities were associated with greater perseverance. Poon and Halpern (1971) used a similar paradigm to test Capaldi's (1966; Capaldi and Martins, 2010) partial reinforcement theories by manipulating trial order in a slot machine task with a small number of acquisition trials. Kassinove and Schare (2001) manipulated big wins and near-misses and measured perseverative behavior in extinction in a similar slot machine paradigm, finding that near-miss density, but not big wins, affected the extent to which participants persisted in gambling.
Different schedules of reinforcement potentially affect how behaviors extinguish (Madden et al., 2007; Haw, 2008a). Gambling operates on a random ratio schedule of reinforcement, a subset of the variable ratio schedule. As random ratio schedules are less well understood than variable ratio schedules, it is informative to contrast the two. The typical distribution of the number of trials until a response is reinforced on a random ratio schedule follows an L-shaped pattern: the number of trials rapidly drops off after a small number of plays but continues indefinitely at very low probability. In contrast, on a variable ratio schedule it is usually (but not necessarily) the case that the probability of the number of trials to reinforcement is evenly distributed, and there is an upper limit on the number of trials before a behavior is reinforced (Haw, 2008a). Studies comparing these schedules have not shown clear differences; Hurlburt et al. (1980) found no difference between variable and random ratio schedules in gambling, although weaknesses with this study have been identified (Haw, 2008b). Crossman et al. (1987) found no difference between three ratio reinforcement schedules (variable, fixed, and random) in animals. Recent studies have suggested that random ratio schedules produce more perseverative behavior than fixed ratio schedules, particularly when the number of trials to reinforcement is very large (Madden et al., 2007).
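A small simulation makes this contrast concrete: under a random ratio schedule each play is reinforced independently with a fixed probability, so trials-to-reinforcement follow the unbounded, L-shaped geometric distribution, whereas a variable ratio schedule draws the required run length from a bounded set. The parameters below are assumptions chosen for illustration.

```python
# Sketch contrasting random-ratio (RR) and variable-ratio (VR) schedules.
# p = 0.3 mirrors the low rate of reinforcement used in this experiment.
import random

def trials_to_win_rr(p=0.3):
    """RR: each trial reinforced independently with probability p
    -> geometric distribution: L-shaped and unbounded."""
    n = 1
    while random.random() >= p:
        n += 1
    return n

def trials_to_win_vr(mean=3):
    """VR: run length drawn uniformly from a bounded set around the mean
    -> roughly even distribution with a hard upper limit."""
    return random.randint(1, 2 * mean - 1)

random.seed(1)
rr = [trials_to_win_rr() for _ in range(10_000)]
vr = [trials_to_win_vr() for _ in range(10_000)]
print("RR: mean %.2f, max %d" % (sum(rr) / len(rr), max(rr)))  # long tail
print("VR: mean %.2f, max %d" % (sum(vr) / len(vr), max(vr)))  # capped at 5
```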
The slot machine task we outline in this report was designed so that participants were asked to risk money they had won during the experiment, but the amount of money won would gradually increase. The low-reinforcement conditions attempted to create a situation similar to real-money gambling. One criticism of many slot machine experiments was that these studies tended to utilize a high rate of reinforcement relative to real slot machines (Kassinove and Schare, 2001;Harrigan, 2007). A mechanical three-reel slot machine has a win probability of 9%, but this varies on computerized machines (Harrigan and Dixon, 2009). In gambling research (e.g., MacLin et al., 2007;Dixon et al., 2011) higher rates of reinforcement (20%) have been used in extinction paradigms. We decided to use a rate of reinforcement of 30%, operating on a random ratio schedule of reinforcement similar to real slot machine gambling.
Self-reported impulsivity was measured because impulsivity predicts perseverative gambling in the face of mounting losses and is a pathway to problem gambling. Breen and Zuckerman (1999) found that impulsive gamblers 'chased' losses for longer in a gambling game where the win probability decreased as the experiment continued. Impulsivity has been identified as a risk factor for problem gambling, and problem gamblers show higher self-reported impulsivity (Kräplin et al., 2014).
To test whether these behavioral effects encourage perseverative gambling, we conducted a two-part experiment where ITI and rate of reinforcement were manipulated. Participants were assigned to one of four groups and exposed to a high or low rate of reinforcement, and a long or short ITI between gambles. Associations were extinguished after a certain amount of money had been won. Participants subsequently completed a contingency judgment task in which they judged the efficacy of an experimental drug. The literature on partial reinforcement predicts that individuals exposed to a lower rate of reinforcement will persevere longer. Trial based accounts of extinction predict that massed extinction trials should suppress responding faster, as opposed to a timing-based account where there ought to be no difference. Impulsive gamblers should persevere for longer in extinction as well, on the basis of previous experiments looking at perseverance in loss-chasing.
Design
The experiment was a 2 × 2 between-subjects factorial design, the rate of reinforcement and ITI were the factors manipulated. The rates of reinforcement were 0.7 and 0.3. ITIs were either long (10 s) or short (3 s).
On every trial the participants were given the choice either to gamble or not. The number of trials on which participants decided to gamble was the dependent variable. The outcome of the gamble and the amount of money participants had won were also recorded. The extinction phase was divided into blocks of 10 trials for analysis. Participants were also administered a contingency judgment task, in which the measures were the proportion of trials on which the drug was administered and the contingency judgments made by participants. Impulsivity and depression were measured using the Barratt Impulsiveness Scale (BIS-11; Patton et al., 1995) and the Beck Depression Inventory (BDI; Beck et al., 1961), respectively. The BIS-11 is a 30-item measure of three higher-order factors of attentional, non-planning, and motor impulsivity (Patton et al., 1995). The BDI is a 21-item measure that captures multiple levels of depression severity, discriminates depression from anxiety, and has strong internal consistency (Beck et al., 1988). No further measurements of individual difference or behavior were taken apart from the ones reported herein.
Participants
A total of 122 participants were recruited from the University of Nottingham community to take part in this study (mean age = 22.63, S.D. = 3.96; 69 females and 53 males). This study was carried out in accordance with, and with ethical approval by, the University of Nottingham School of Psychology Ethics Review Committee. All participants gave written consent prior to the beginning of the experiment. A number of participants across conditions dropped out (n = 18) and were resampled. All the participants who dropped out completed measures of depression and impulsivity. The majority of these dropouts (82%) were in the low rate of reinforcement, long ITI condition. Non-parametric tests were carried out to test whether the participants who dropped out differed in any regard from other participants in the same condition. No significant differences were observed in impulsivity or depression scores, nor in the rate at which they were gambling prior to dropping out (Wilcoxon's signed rank test, p > 0.05). All participants were debriefed upon withdrawal from the experiment. Participants who dropped out reported that they withdrew because the length of the study conflicted with other engagements (e.g., lectures).
There was no evidence of any trait differences between the groups. A one-way Analysis of Variance (ANOVA) was conducted on both questionnaires, and the ANOVAs for the BIS [F(4,166) = 1.543, p = 0.192] and the BDI [F(4,166) = 0.662, p = 0.619] were non-significant.
Procedure
Participants were randomly assigned to one of four conditions. For the first part of the experiment, participants were asked to participate in a PREE paradigm in the context of a simulated slot machine (Figure 1). Participants were told how the slot machine worked, and the magnitude of the payoff for each type of winning outcome. The simulated slot machine was a simple one-line slot machine with three reels. Participants won money if the icons on three reels matched. There were five different icons (lemon, cherry, pear, orange, and lucky seven), with winning values of 10, 15, 20, 25, and 30p. The likelihood of each winning outcome occurring was the same, so the mean winning outcome was 20p ($0.35).
For each trial, participants were given the choice between gambling and skipping. The buttons were highlighted so that participants were aware of the two choices they had. Regardless of whether they chose to gamble or not, the images on the three reels presented on the screen refreshed every 500 ms to give the appearance of movement. At 1500, 3000, and 4500 ms, one of the reels (from left to right) stopped reeling. If the reels matched and the participant gambled, the participant was awarded money corresponding to the value of the icons on the reel. If the reels did not match, they lost the wager they had made, which was fixed at 3p (£0.03, equivalent to around US$0.05). Wins and losses were accompanied by visual and auditory feedback which differed for each outcome. These noises were different if the participants skipped the gamble. Throughout the task participants were informed of their current balance. Between each trial, the buttons on the screen remained red, signifying that the participants were unable to make another wager. The ITI for the short ITI condition was 3000 ms, and 10000 ms for the long ITI condition.

FIGURE 1 | Screenshot of the slot machine display participants were given during the partial reinforcement task.
Participants were presented with 10 practice trials before the game began crediting or deducting money from the player. Participants were informed when the practice trials had ended. Once the experimental trials began, participants played until they reached criterion, set as having won more than £10.00 (US$15.40) in the bank. Once participants reached criterion, they were exposed to 50 trials of extinction, during which it was not possible to win any money from the slot machine, and then the task ended automatically. Extinction was measured by the suppression of their gambling behavior; participants were not informed of the extinction phase until the end of the experiment. The practice trials had winning trials (which did not pay out), and the extinction phase had no wins or money. The practice and extinction phases were identical in each condition, bar the different ITIs participants were exposed to.
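The procedure can be summarized in a short simulation sketch. Scheduling wins at the condition's programmed rate of reinforcement (rather than via three independent reels), and paying wins gross of the 3p stake, are assumptions about the implementation; the payoff values, stake, criterion, and extinction trial count follow the description above.

```python
# Minimal sketch of the trial procedure described above. Win scheduling at
# the programmed rate, and gross payouts, are assumptions for illustration.
import random

ICON_VALUES = {"lemon": 10, "cherry": 15, "pear": 20, "orange": 25, "seven": 30}
STAKE_P = 3          # fixed 3p wager per gamble
CRITERION_P = 1000   # acquisition ends once the balance exceeds £10.00

def play_trial(balance_p, gamble, rate_of_reinforcement):
    """Return the updated balance (in pence) after one trial."""
    if not gamble:                                  # skipping risks nothing
        return balance_p
    if random.random() < rate_of_reinforcement:     # reels 'match'
        icon = random.choice(list(ICON_VALUES))     # winning icons equally likely
        return balance_p + ICON_VALUES[icon]        # mean winning outcome = 20p
    return balance_p - STAKE_P                      # non-matching reels lose 3p

random.seed(0)
balance, acquisition_trials = 0, 0
while balance <= CRITERION_P:                       # acquisition phase
    balance = play_trial(balance, gamble=True, rate_of_reinforcement=0.3)
    acquisition_trials += 1
for _ in range(50):                                 # extinction: wins impossible
    balance = play_trial(balance, gamble=True, rate_of_reinforcement=0.0)
print(acquisition_trials, balance)
```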
After completing the PREE paradigm, participants were asked to make a series of contingency judgments about the effectiveness of a fictitious experimental drug in relation to patient recovery. The contingency judgment paradigm was adapted from a previously published study (Orgaz et al., 2013). In this paradigm participants were presented with information about a fictional drug designed to cure a fictional infectious skin disease that had unpleasant consequences when an outbreak/crisis occurred. Participants were given the option of choosing between administering the drug and not administering the drug, and they were given feedback concerning the outcome immediately afterward (whether the patient's situation had improved or not). The paradigm was designed to elicit illusions of control by having a high outcome density: the base rate of the desired outcome (patient recovered) was high (0.8) and was completely independent of the user's decision. After making their decision, the participants were informed of the outcome of the choice, and there was a small pause (3500 ms) before they were presented with the decision again.
After each set of 10 trials, participants were asked to judge the effectiveness of the drug. Participants were asked to judge the effectiveness of the drug on a scale from 0 to 100. This was represented by a shaded bar in the middle of the screen, on which they were given feedback about the number they chose, determined by how far along the bar they clicked. Participants could repeat clicking along the slider until they were happy with their choice, and were asked to confirm their choice using a separate button.
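Because recovery occurs with probability 0.8 regardless of the response, the objective response-outcome contingency in this task is zero, and any judgment well above zero indexes illusory control. A minimal sketch, with an assumed number of simulated trials, makes the null contingency explicit via ΔP = P(recovery | drug) − P(recovery | no drug).

```python
# Sketch of the null contingency built into the judgment task: recovery
# occurs with probability 0.8 independently of the response, so the
# objective contingency Delta-P is approximately zero. The number of
# simulated trials is an assumption for illustration.
import random

random.seed(2)
trials = [(random.random() < 0.5,   # response: administer the drug or not
           random.random() < 0.8)   # recovery, independent of the response
          for _ in range(100_000)]

p_given_drug = (sum(1 for drug, rec in trials if drug and rec)
                / sum(1 for drug, rec in trials if drug))
p_given_none = (sum(1 for drug, rec in trials if not drug and rec)
                / sum(1 for drug, rec in trials if not drug))
print(f"Delta-P = {p_given_drug - p_given_none:+.3f}")  # ~0.000
# Any contingency judgment well above zero therefore indexes illusory control.
```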
Analytic Approach
To assess the length of extinction for each group, the proportion of gambles made was averaged across five blocks of 10 trials. Data analysis proceeded in two stages. First, factorial ANOVAs were conducted on the extinction and contingency judgment data: a 5 (block) × 2 (ITI) × 2 (rate of reinforcement) mixed-design ANOVA on the extinction data, and a 10 × 2 × 2 mixed-design ANOVA on the 10 contingency judgments participants made. To test the effects of individual differences on gambling behavior and perseverative gambling, a series of Poisson regression models was estimated on the number of trials participants gambled on during acquisition and extinction. This was conducted in three steps. First, an initial model was constructed with no covariates entered. Then, a second regression model was constructed in which ITI, rate of reinforcement, BIS scores, BDI scores, and an interaction term between ITI and rate of reinforcement were included. ITI and rate of reinforcement were dummy coded (high ROR = 1, low = 0; short ITI = 1, long = 0), and BIS/BDI scores were rescaled to a mean of 0. This was compared against a null model using a likelihood ratio test (LRT). LRTs are typically used to compare two nested models, for example in latent class analysis (Collins and Lanza, 2010) or, as in this case, to compare the fit of two regression models. The second model was then compared against a full model in which interaction terms were modeled across each covariate.
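A sketch of this model-comparison procedure is given below using statsmodels. The data frame and its column names are hypothetical stand-ins for the recorded measures, and the simulated counts exist only so the snippet runs end to end.

```python
# Sketch of the regression and likelihood-ratio-test procedure described
# above. Variable names and the data frame are hypothetical stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "gambles": rng.poisson(12, 120),  # extinction gambles per participant
    "ror": rng.integers(0, 2, 120),   # dummy coded: high ROR = 1, low = 0
    "iti": rng.integers(0, 2, 120),   # dummy coded: short ITI = 1, long = 0
    "bis": rng.normal(0, 1, 120),     # BIS scores, rescaled to mean 0
    "bdi": rng.normal(0, 1, 120),     # BDI scores, rescaled to mean 0
})

null = smf.poisson("gambles ~ 1", data=df).fit(disp=False)
restricted = smf.poisson("gambles ~ ror * iti + bis + bdi",
                         data=df).fit(disp=False)

# Likelihood ratio test between nested models:
# G^2 = 2 * (llf_larger - llf_smaller), chi-square on the parameter difference.
g2 = 2 * (restricted.llf - null.llf)
df_diff = restricted.df_model - null.df_model
print(f"G^2 = {g2:.2f}, p = {stats.chi2.sf(g2, df_diff):.4f}")
```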
At this point, the data were tested to examine whether they fit a Poisson distribution. Crucially, Poisson regression assumes that the conditional mean and variance are equal. While deviations from this assumption have little effect on the overall regression coefficients, when overdispersion (the variance being larger than the mean) is substantial it tends to depress standard errors, increasing the risk of false positive findings. While robust standard errors can be used to adjust for this (Cameron and Trivedi, 2009), an alternative is to estimate a negative binomial regression model, which includes an extra parameter to model overdispersion. For the acquisition data, this approach was taken. For the extinction data, although the data were overdispersed, the level of dispersion was considerably smaller, and so robust standard errors were applied to the regression model.
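Continuing the sketch (it reuses `df`, `restricted`, and `smf` from the block above), overdispersion can be checked by comparing the Pearson chi-square statistic with its residual degrees of freedom; the cut-off used to switch models below is an assumed rule of thumb, not a value from the paper.

```python
# Continuation of the sketch above (reuses df, restricted, and smf).
# Poisson regression assumes the conditional mean equals the variance; a
# Pearson chi-square / residual-df ratio well above 1 signals overdispersion.
mu = restricted.predict(df)
pearson_chi2 = (((df["gambles"] - mu) ** 2) / mu).sum()
dispersion = pearson_chi2 / restricted.df_resid
print(f"dispersion ratio = {dispersion:.2f}")

if dispersion > 2:  # cut-off is an assumed rule of thumb, not from the paper
    # Substantial overdispersion: a negative binomial model adds an extra
    # parameter to absorb it (the paper's acquisition analysis additionally
    # used an exposure offset).
    model = smf.negativebinomial("gambles ~ ror * iti + bis + bdi",
                                 data=df).fit(disp=False)
else:
    # Milder overdispersion: keep the Poisson model but report
    # heteroskedasticity-robust (sandwich) standard errors.
    model = smf.poisson("gambles ~ ror * iti + bis + bdi",
                        data=df).fit(disp=False, cov_type="HC1")
print(model.params)
```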
A number of outliers were found in the low rate of reinforcement extinction data. An examination of the data indicated that a number of gamblers in the low reinforcement, long ITI condition stopped gambling less than two gambles into extinction and that these were outlying data points. These participants (n = 3) reported in debrief that they treated £10 as salient, either stopping immediately after they won £10 or stopping to remain above £10, independent of any change in contingency. These participants were excluded from further analysis.
Gambling Behavior
To study the effect of behavioral and trait variables on acquisition behavior, an offset negative binomial regression model was used to control for differential effects of exposure, with the same variables used for the restricted and full factorial models as for the extinction data. This revealed that the restricted model (Table 1) was a better fit than the null model (G² = 22.74, p < 0.001), but that a full factorial model was no better a fit than the restricted model (G² = 6.359, p = 0.784). Participants exposed to a higher rate of reinforcement gambled more frequently in acquisition.
PREE Task
The ANOVA conducted on the extinction data revealed a main effect of block. This interacted with rate of reinforcement, as participants exposed to a higher rate of reinforcement extinguished more quickly, suggesting the presence of a PREE. The main effect of rate of reinforcement signified the same finding. The rate of reinforcement and ITI interaction indicated that when there was a low rate of reinforcement with a long ITI, participants gambled for longer in extinction (Figure 2). The block and rate of reinforcement effects, and the interaction between block and rate of reinforcement, were all large in size (η²p > 0.12), whereas the interaction between rate of reinforcement and ITI was a small to medium effect.

FIGURE 2 | Plot of extinction data for all groups, in blocks of 10 trials.
Individual Differences
To test the role of individual differences in perseverative gambling, a Poisson regression procedure was used on the number of gambles in extinction. The LRT indicated that the initial restricted model was a better fit of the data compared to the null model (G² = 581.15, p < 0.001). The restricted regression model (Table 2) indicated that lower rates of reinforcement and longer ITIs predicted longer perseverative gambling. These terms interacted in the same manner as in the factorial ANOVA. A further regression model including interaction terms between the different covariates was subsequently conducted (Table 3) with the same variables as the regression in Table 1. An LRT comparing the restricted and full factorial regression models indicated that the full factorial model was a better fit of the data (G² = 66.44, p < 0.001). This revealed the same significant effects as previously, but also that higher self-reported impulsivity predicted longer perseverative gambling. There was a trend suggesting that this interacted with rate of reinforcement, with less impulsive individuals appearing to persevere less in low reinforcement conditions. Scores on the two psychometric measures interacted, and there was a three-way interaction between ITI, rate of reinforcement, and BDI, with more depressed individuals in the high rate of reinforcement, short ITI group gambling for longer in extinction (Figure 3).
DISCUSSION
The results of this experiment demonstrate how different schedules of reinforcement affect behavior during a simulated gambling task and can produce extended gambling in the face of continued losses. This also extends findings from a number of behavioral paradigms measuring perseverance to situations where participants are asked to name a specific preference. Both rate of reinforcement and ITI were instrumental in affecting how long participants gambled for when associations were extinguished, and these interacted. There was evidence that individual differences affected behavior under these conditions, with more impulsive individuals gambling for longer in extinction. In terms of rate of reinforcement, the findings of this study mirror an extensive literature that has repeatedly found that a leaner schedule of reinforcement is associated with greater perseverance in extinction. The findings concerning ITI (and the interaction term) have been predicted in the past, and a couple of studies have identified trial spacing effects in extinction with animals, but to our knowledge human research on this issue is very limited. This also highlights how the effects of timing on perseverative gambling have potential implications for gambling practice, particularly with newer gambling technologies being likely to alter the latencies between gambles. The impulsivity-related findings speak to a literature that has previously suggested that impulsive individuals persevere for longer as the amount of money lost mounts. This furthers research that highlights the importance of behavioral processes in gambling behavior, and has implications for gambling games and technologies, particularly those that encourage intermittent patterns of play.
Our findings broadly mirror a number of studies that used simulated slot machine paradigms to test partial reinforcement (Lewis and Duncan, 1956, 1957, 1958a; Poon and Halpern, 1971). We measured extinction slightly differently to previous studies, asking participants to choose whether to continue or not rather than when they walked away from the machine. Similar effects have been observed previously when asking people to choose between one of two machines (Dymond et al., 2012). It is important to note that it has been contested whether gamblers are able to discriminate between machines with different rates of reinforcement, measured in terms of preference (e.g., time spent on machine) between two or more simulated slot machines (Weatherly et al., 2004; Haw, 2008b; Dixon et al., 2013; Coates and Blaszczynski, 2014). We found that high rates of reinforcement were associated with a higher level of engagement on a simulated machine. This is broadly consistent with the literature, which has found that differences emerge but only when there is a sufficiently large gap in reinforcement. These results extend these findings to when different groups are exposed to different machines.
Both of the low reinforcement groups displayed extensive perseverative gambling. This continued gambling is potentially a behavioral marker of loss-chasing. Chasing losses is often the first criterion of Disordered Gambling to emerge (Miller et al., 2013), and in models of problem gambling is theorized as a tipping point towards problem gambling. The extinction paradigm probes within-session continuation, a phenomenon thought to be very closely related to loss-chasing in problematic gambling (Breen and Zuckerman, 1999). Partial reinforcement has previously been suggested as an alternative explanation for the phenomenon of loss-chasing (Dickerson, 1984), particularly for the continuation of gambling. Other explanations for loss-chasing tend to invoke the gambler's fallacy (Campbell-Meiklejohn et al., 2008). The results of this study provide support for the role of partial reinforcement in loss-chasing, albeit limited to the perseverative aspects of chasing. Further research would need to be conducted on wager size to verify this. It should be noted, though, that in terms of clinical criteria (e.g., for Gambling Disorder in the DSM), there is a greater emphasis on perseverance. Similarly, we found that impulsive individuals gambled for longer in extinction, a finding that has been previously observed in the literature (Breen and Zuckerman, 1999) and interpreted as demonstrating that impulsive individuals chase losses for longer.
Considering ITI, while we found that individuals persisted for longer in extinction with a longer ITI, their gambling behavior did not systematically differ in acquisition. The extinction finding appears to be somewhat more consistent with a trial based account of the PREE (Mackintosh, 1974), although we did not directly test between the two accounts. This finding somewhat contrasts with studies that have found that shorter latencies are associated with greater engagement (Linnet et al., 2010) and greater risk preferences (Hayden and Platt, 2007). We did not find that individuals preferred the longer ITI machines, but they did gamble for longer on them when forced to make a choice.
A key qualification is that the development of slot machines indicates that machines have tended to speed up rather than slow down. However the way in which individuals interact with devices that can be used for gambling such as smartphones tends to increase latency, and is occasionally used within mobile video games for a similar purpose; players are offered the opportunity to gamble for an in-game valuable with large intervals (e.g., once a day), and can play again for real money. A similar concern is that some interventions aimed at reducing the harm caused by gambling intervene by forcing pauses within a gambling session. While this affects timing between sessions rather than trials, associative accounts of timing indicate a similar outcome. The findings of this study imply that care should be taken with these interventions. Moreover, this concern is not without empirical support, as a recent study has found that forcing breaks without including content to target gamblers' attitudes or behaviors increases individuals' motivations to continue gambling (Blaszczynski et al., 2015). Although this study explains these findings in the context of behavioral completion, an associative interpretation that is closely aligned with the present findings can be postulated.
The main effect of block (and a significant linear contrast) showed that participants' gambling behavior was suppressed as extinction proceeded, and that extinction deepened the longer participants continued to lose. A main effect of rate of reinforcement was also found. This is the classic PREE that has been observed in many studies since Humphreys (1939). These two main effects also interacted; behaviorally, this is a restatement of the PREE, as the speed at which participants extinguished was faster with a high rate of reinforcement.
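The logic of the PREE can be illustrated with a minimal simulation. The sketch below is not the model tested in this study; it is a toy implementation of one classic discrimination-style (sequential) account, in which a simulated gambler persists in extinction until the current run of losses clearly exceeds the longest loss run experienced during acquisition. All parameters (reinforcement rates, trial counts, the tolerance factor) are illustrative assumptions.

```python
import random

def longest_loss_run(outcomes):
    """Length of the longest run of consecutive losses (False values)."""
    best = run = 0
    for won in outcomes:
        run = 0 if won else run + 1
        best = max(best, run)
    return best

def trials_to_quit(p_win, n_acquisition=100, tolerance=1.5, rng=random):
    """Simulate acquisition at reinforcement rate p_win, then count how many
    extinction (all-loss) trials pass before the current loss run exceeds
    tolerance * the longest loss run seen in acquisition."""
    acquisition = [rng.random() < p_win for _ in range(n_acquisition)]
    threshold = tolerance * max(1, longest_loss_run(acquisition))
    # In extinction every trial is a loss, so the quit point is simply
    # the first trial count exceeding the threshold.
    return int(threshold) + 1

random.seed(1)
for p in (0.7, 0.2):  # high vs. low rate of reinforcement
    runs = [trials_to_quit(p) for _ in range(2000)]
    print(f"p(win)={p}: mean extinction trials = {sum(runs)/len(runs):.1f}")
# Lower reinforcement rates yield longer loss runs during acquisition and
# therefore greater persistence in extinction -- the PREE pattern.
```

Under these assumptions the low-reinforcement gambler experiences long loss runs during acquisition and so requires far more consecutive losses before extinction is discriminable, reproducing the qualitative pattern described above.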
An interaction between the rate of reinforcement and ITI was also observed. The analyses strongly suggest that this interaction was driven by the low reinforcement, long ITI group, which appeared to show resistance to extinction in the first two blocks (although no interaction with block was observed). Moody et al. (2006) found a similar pattern of results when manipulating ITI in a partial reinforcement paradigm, albeit with much larger gaps between trials. This finding also appears to be consistent with Mackintosh's (1974) review of extinction, and it is particularly interesting in the context of newer gambling technologies, such as smartphone gambling, where larger gaps between gambles are anticipated because of how these devices are used. The Pathways Model (Blaszczynski and Nower, 2002), a well-supported model of problem gambling, posits three pathways to problem gambling that share common associative learning and cognitive bases; in particular, it posits a 'behaviorally conditioned' pathway driven purely by these processes, in contrast to pathways that emphasize emotional vulnerabilities and antisocial/impulsive traits.
The only difference observed in the contingency judgment task was a main effect of block: participants' judgments became better calibrated as the task progressed. The linear contrast was also significant, confirming the direction of the finding. Participants showed an illusion of control, as contingency judgments were substantially greater than the actual relationship between response and outcome. There were no effects of ITI or rate of reinforcement. Given the unclear causal mechanisms underlying illusions of control (Orgaz et al., 2013), it might be that a deficit in processing behavioral contingencies poses a risk factor for problem gambling. Consequently, it would be interesting to examine whether performance on this task, taken prior to a gambling task, subsequently predicts gambling behavior.
We found that depressed individuals gambled for longer in the highly reinforced, short ITI group. Depressed individuals often prefer rapid, random games (e.g., slot machines) that provide negative reinforcement by relieving poor mood (Blaszczynski and Nower, 2002). Problem gambling theories emphasize the importance of negative reinforcement in individuals experiencing traumatic life events or disordered mood; negative reinforcement is strongly hypothesized to be an important component of dependence-related behaviors. With regard to ITI, the resistance to expectancy change observed in depressed individuals (Abramson et al., 1978), together with the ITI-dependent changes in learning that have been used to explain the depressive realism effect, might explain this finding. Specifically, the ITI and illusion of control literature identified that with positive contingencies, increases in ITI did not affect contingency judgments in non-depressed individuals, whereas in depressed individuals these judgments were inhibited in the same manner as for non-contingent associations (Msetfi et al., 2005, 2007; Baker et al., 2010). Given that this line of research strongly suggests that ITIs affect behavior differently in depressed people, it might be that increasing ITI has the same effect on expectancy changes as it does on contingency judgments, which would explain these findings. However, this is speculative and would require further research to investigate.
This study highlights how different schedules of reinforcement affect gambling behavior. Participants exposed to a lower rate of reinforcement persevered for longer. This interacted with ITI: participants exposed to a longer ITI and a low rate of reinforcement gambled for longer in extinction. Participants with higher self-reported impulsivity also gambled for longer in extinction. The results demonstrate that manipulating behavioral features in a simulated gambling game can produce longer perseverative gambling.
FUNDING
The research contained in this report was funded by the Economic and Social Research Council (ES/J500100/1) and the Engineering and Physical Sciences Research Council (EP/GO37574/1).
AUTHOR CONTRIBUTIONS
All authors listed have made substantial, direct and intellectual contributions to the work and approved it for publication. Richard James was responsible for the data collection and analysis. This work forms part of his doctoral research.
Momentum transfer to small particles by aloof electron beams
The forces exerted on nanoparticles and atomic clusters by fast passing electrons, like those employed in transmission electron microscopes, are calculated and integrated over time to yield the momentum transferred from the electrons to the particles. Numerical results are offered for metallic and dielectric particles of different sizes (0-500 nm in diameter) as well as for carbon nanoclusters. Results for both linear and angular momentum transfers are presented. For the electron beam currents commonly employed in electron microscopes, the time-averaged forces are shown to be comparable in magnitude to laser-induced forces in optical tweezers. This opens up the possibility of studying optically-trapped particles inside transmission electron microscopes.
I. INTRODUCTION
Electromagnetic forces in optical tweezers are currently employed to trap small particles ranging in size from nanometers to several microns [1,2], and to manipulate them in all spatial directions [3,4]. This type of force is also used to characterize the elastic properties of deformable tiny objects (e.g., living cells [5]), to obtain quantitative information on mechanical properties at small length scales [2], and in general, to fix the position of those particles so that they can be manipulated at will.
In this context, transmission electron microscopy offers a potentially useful tool to study optically trapped particles, providing excellent spatial resolution (sometimes below 1 Å) when sub-nanometer electron beams are employed [6], while allowing spectroscopic characterization with sub-eV accuracy. Indeed, transmission electron microscopes are routinely exploited to probe local optical response properties [7], and more recently, also to determine photonic structures of complex materials [8].
A major problem that may arise when combining electron microscopy with optical tweezers or other types of optical trapping (e.g., optical lattices [9,10,11]) is that the passing electrons can kick the particles out of their trapping locations (see Fig. 1). In this work, we show that for commonly employed trapping laser intensities the momentum transferred from the passing electrons to the particles can be well below the threshold needed to kick them out. A detailed comparison between trapping forces and electron-induced forces suggests, however, that both weak and strong perturbation regimes are possible depending on the distance between the particles and the beam, all within the range that allows a sufficiently large electron-particle interaction to perform electron energy loss spectroscopy (EELS) with significant statistics on in vacuo optically-trapped particles.
The moving electrons can in fact be regarded as a source of evanescent electromagnetic field that probes the sample locally, and in this sense they can also be used to produce deformation of elastic particles, oscillations of trapped particles around their equilibrium positions, and other interesting effects associated with the transfer of momentum within accurately controlled spatial regions.

FIG. 1 (color online): Schematic representation of the process considered in this work: a fast electron moving with impact parameter b and velocity v with respect to a polarizable particle transfers momentum Δp = (Δp_x, Δp_z) to the particle via electromagnetic interaction.
II. THEORY
The electromagnetic force exerted on a particle in vacuum is given by the integral of Maxwell's stress tensor over a surface S embedding the particle [12] as

F(t) = (1/4π) ∮_S ds { E(r, t) [E(r, t)·n̂] + H(r, t) [H(r, t)·n̂] − (1/2) [E²(r, t) + H²(r, t)] n̂ },   (1)

where n̂ is the surface normal and Gaussian units are used. The momentum transferred to the particle, Δp, is obtained by integrating F(t) over time. This yields

Δp = ∫ dt F(t) = ∫₀^∞ dω F(ω),   (2)

where F(ω) is a frequency-resolved force bilinear in the Fourier-transformed fields, and the Fourier transform is defined as E(r, ω) = ∫ dt E(r, t) exp{iωt}. The force acting on the particle is due in part to radiation emitted as a result of the interaction with the electron and in part to the reaction force experienced by the projectile. For small particles, the effect of radiation emission is negligible and the trajectory is deflected by an angle ≈ Δp/mv, where m and v are the mass and velocity of the electron. Non-retarded calculations have shown that this angle is too small to be easily measured [13].
A. Small particles
Let us first consider a small isotropic particle sufficiently far away from the electron beam that higher multipoles beyond the induced dipole can be neglected. The particle is then characterized by its frequency-dependent polarizability α(ω), and the force exerted by each frequency component of the external field E(r, ω) reduces to [14]

F(ω) = (1/2) Re{ α(ω) Σ_j E_j(r, ω) ∇E_j*(r, ω) }.   (3)

This expression can be derived from Eq. (2) by considering an integration surface arbitrarily close to the object and by using the expressions for the electric and magnetic fields induced by a small polarizable particle in terms of its polarizability α. For an electron moving with velocity v along the positive z direction and passing by the origin at t = 0, the external field is readily calculated from Maxwell's equations to yield

E(r, ω) = (2eω/v²γ) e^{iωz/v} [ (i/γ) K₀(ωR/vγ) ẑ − K₁(ωR/vγ) R̂ ],   (4)

where R = (x, y), γ = 1/√(1 − v²/c²) is the Lorentz factor, and K₀ and K₁ are modified Bessel functions. Inserting this field into Eq. (3) and integrating over time yields the momentum transfer of Eq. (5), in which the strength of the evanescent field enters through the function f(ζ) = K₁²(ζ) + K₀²(ζ)/γ² evaluated at ζ = ωb/vγ, the particle being situated at R = (−b, 0) with respect to the beam (see Fig. 1). Symmetry considerations lead to the conclusion that Rayleigh scattering of the external-electron evanescent field (4) produces a radiation pattern with inversion symmetry with respect to a plane perpendicular to the trajectory. This means that the overall transfer of momentum to the induced radiation is zero in the small-particle limit, so that Δp_z accounts for all momentum transfer to the moving electron along z. Then, the contribution of each ω component to the electron energy-loss rate is, within the non-recoil approximation valid for sufficiently energetic electrons, vF_z(ω). Actually, one finds that the identity vF_z(ω) = ħωP(ω) is satisfied, where P(ω) is the frequency-resolved loss probability as previously obtained for small particles [15]. As a consequence, F_z vanishes in the ω → 0 limit, since P(ω) remains finite.
This behavior is quite different from that of F_x, which goes to a finite value for small ω, namely F_x(ω = 0) = 4e² Re{α(0)}/v²b³. (Incidentally, momentum transfer along x produces negligible energy transfer in the non-recoil approximation.) This latter formula can be used to derive a closed expression for Δp_x valid for arbitrarily large, finite objects in the large impact parameter limit. In that case, only small ω's contribute to F(ω), due to the effective exponential cut-off imposed by the modified Bessel functions K₀ and K₁. This means that only long wavelengths are relevant (to which the object appears small), so that the object can be described by its static polarizability. Then, the ω integral can be performed numerically to yield the closed large-b expression of Eq. (6), which scales as Δp_x ∝ e² Re{α(0)}/vb⁴. For comparison, the momentum transferred to a charge e at a distance b from the beam is Δp = −(2e²/bv) x̂.
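The exponential cut-off at work here is easy to see numerically. The following sketch (an illustration added for this presentation, not part of the original calculation) evaluates f(ζ) with SciPy's modified Bessel functions for a 200 keV electron, confirming that the evanescent field, and hence F(ω), dies off exponentially once ζ = ωb/vγ exceeds roughly unity:

```python
import numpy as np
from scipy.special import kv  # modified Bessel functions K_nu

# 200 keV electron: kinetic energy T = 200 keV, rest energy 511 keV
gamma = 1.0 + 200.0 / 511.0           # Lorentz factor ~1.39
beta = np.sqrt(1.0 - 1.0 / gamma**2)  # v/c ~ 0.70

def f(zeta):
    """Evanescent-field profile f(zeta) = K1(zeta)^2 + K0(zeta)^2 / gamma^2."""
    return kv(1, zeta)**2 + kv(0, zeta)**2 / gamma**2

for zeta in (0.1, 0.5, 1.0, 2.0, 5.0):
    print(f"zeta = {zeta:4.1f}:  f = {f(zeta):.3e}")
# f(zeta) diverges like 1/zeta^2 at small zeta but decays as exp(-2*zeta)
# beyond zeta ~ 1, i.e., only frequencies omega < v*gamma/b contribute.
```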
The large-b limit given by Eq. (6) is compared in Fig. 2 with more detailed calculations that include higher-multipole moments, as described below. Also, the small-particle limit of Eq. (5) is discussed in Fig. 3.
B. Arbitrary size
For larger particles or for close electron-particle encounters, higher multipoles become relevant in the induced forces [16]. It is then convenient to express the evanescent field of the electron in terms of multipoles centered at the particle, so that the external electric and magnetic fields admit a decomposition into magnetic (ν = M) and electric (ν = E) multipole components with multipole indices L = (l, m), whose coefficients ψ^{ν,ext}_L are given by analytical expressions involving spherical Bessel functions j_l and Gegenbauer polynomials [15,17,18]. The impact parameter b is defined in Fig. 1.
The induced field around the particle is given by similar expressions obtained by substituting ψ^{ν,ext}_L by new coefficients ψ^{ν,ind}_L, and j_l by the Hankel function h^{(+)}_l [19].
In particular, L = (l, m) is conserved for spherical particles and one has the linear dependence ψ^{ν,ind}_L = t^ν_l ψ^{ν,ext}_L, where the t^ν_l are scattering matrices given by analytical (Mie-type) expressions in the case of homogeneous particles of dielectric function ε and radius a [15]. These expressions involve ρ₀ = ka and ρ₁ = ρ₀√ε with Im{ρ₁} > 0, the prime denoting differentiation with respect to ρ₀ and ρ₁.
At this point, it is convenient to write the operators L and (1/k)∇ in matrix form; the x̂ and ŷ components of N are obtained from the ẑ component [Eq. (8)] by rotating the reference frame using rotation matrices for spherical harmonics [19]. Exactly the same matrices apply to L and (1/k)∇ acting on Hankel functions h^{(+)}_L. Furthermore, these matrices satisfy the properties L⁺ = L and N⁺ = −N. The electric field then admits a multipole expansion whose coefficients are obtained from the above expressions, and similar formulas are obtained for H^{ext} and for the induced fields E^{ind} and H^{ind} in terms of multipole coefficients. Finally, we insert them into Eq. (2) and perform the integral over a sphere in the s → ∞ limit. The first two terms inside the integrand then give a vanishing contribution because the induced far-field is transverse. The remaining part of the integral can be recast by noticing that only real terms must be retained, with n̂(Ω) the radial vector as a function of the polar direction Ω, leading to the final expression for the momentum transfer, Eq. (9).

III. RESULTS AND DISCUSSION

Fig. 2 shows the dependence of the momentum transfer on the electron impact parameter b for alumina and silver spheres of different radii, as calculated from Eqs. (1) and (9). Measured optical data have been used for the dielectric functions of these materials [7]. One can observe a nearly exponential decay of the momentum transfer with b. Besides, the momentum transferred along the direction of the electron velocity (Δp_z, dashed curves) is generally smaller than the remaining perpendicular component (Δp_x, solid curves), which finds an explanation in the fact that the contribution of these components to the energy loss ħω is vΔp_z + (Δp_x)²/2m, where m is the electron mass: since mv ≫ Δp, Δp_x is allowed to take larger values than Δp_z for each fixed ω.
Notice also that Δp_x converges quickly to the large-b limit [Eq. (6), dotted curves], producing a finite result under the scaling of Fig. 2, unlike Δp_z, which goes to 0 faster for large b. In this limit, the electron induces a dipole in the particle directed towards the electron, which results in an attractive force between the two, similar to the image potential at surfaces [20], leading to a momentum transfer Δp ≈ Δp_x x̂. For small metallic particles and closer encounters this picture is no longer valid, and Δp_x can actually reverse its sign and show a net repulsive behaviour (e.g., in Fig. 2 for Ag particles of radius a = 10 nm, and also for the fullerenes of Fig. 4).
A more detailed analysis of the magnitude of the momentum transfer effect is given in Fig. 3. The momentum transfer is normalized to the particle mass M, so that the result is the change in the particle velocity induced by the passage of the electron, shown as a function of particle radius a. Although the electron-particle interaction increases with the radius a, the actual change in the particle velocity shows a nearly exponential decay with increasing a.

FIG. 3: Small-particle limit (dashed curves) versus full multipole calculation (solid curves) of the change in particle velocity. The particle is made of Al₂O₃ (density ρ = 4.02 g/cm³), the electron energy is 200 keV, and the distance from the trajectory to the particle surface is 10 nm. Dotted curves show the momentum transferred from light in an optical trap (see text for details). The inset depicts the change in the particle angular velocity as a result of the torque exerted by the electron.
In a situation where the particle is trapped by lasers (e.g., in optical tweezers [4] or in optical stretchers [5]), one should compare the interaction with the electrons to the interaction with the laser light. To this end, we consider a trapping cw Ti:sapphire 100-mW laser emitting at a wavelength λ = 785 nm and focused on a region of radius R_f = 10 µm. Furthermore, we contemplate the momentum transferred by the laser during the average time span Δt between two consecutive passing electrons in a transmission electron microscope operating at a current of 1 nA. The particle polarizability α is all that is needed to calculate light forces for the small radii under discussion (a ≪ λ), according to Eq. (3). For real α, this equation defines a conservative gradient force that responds to the potential −(α/2)|E|², where E is the laser light field, whereas the imaginary part of α represents photon absorption by the particle that translates into light pressure [21]. These two components are represented separately in Fig. 3 after multiplication by Δt/M (dotted curves). The light-pressure contribution is calculated for an incident plane wave with the same photon flux as the laser at its focus. The gradient-force component is obtained from the maximum force in the focus region assuming a Gaussian profile for the laser field intensity (i.e., |E|² ∝ exp[−R²/(R_f/ln 2)²]). Finally, it is convenient to define the polarizability from its relation to the scattering matrix, which upon inspection permits writing α = (3/2k³) t^E_1. Unlike the well-known expression [12] α = a³(ε − 1)/(ε + 2), the former relation predicts a non-zero value of Im{α} even for particles with real ε (like our alumina spheres), arising as a pure retardation correction associated with radiation scattering (this is actually the origin of the light-pressure component of Fig. 3). (Incidentally, gravity would produce a velocity change gΔt = 1.56 nm/s, which is well compensated for in currently available optical trapping systems.) An important conclusion that can be extracted from Fig. 3 is that the crossover at which the trapping light becomes the main source of momentum occurs for particles of 20 nm in diameter when the electrons pass at a distance of 10 nm from the particle surface, thus allowing one to perform energy-loss analysis of the transmitted electrons with significant statistics. Therefore, transmission electron microscopy can be combined with in vacuo optical trapping to study particles with sizes above some tens of nm.
While the transfer of momentum by the trapping light occurs in a continuous, smooth fashion, the electrons deposit all of their momentum during a small time interval ∼ a/v (≪ Δt = 0.16 ns for a 1 nA electron current). However, the change in particle velocity per electron (vertical scale in Fig. 3) produces a minute particle displacement during Δt (smaller than 1.6 × 10⁻⁹ nm ≪ a), and therefore the effect of the passing electrons is experienced by the particle as a nearly continuous source of momentum that is describable by an average force Δp/Δt. Actually, Fig. 3 suggests that using more intense electron beams (with even smaller impact parameters) acting during periods of the order of one second will still not eject the particles from their trapping locations.
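The beam-timing figures quoted here are simple to reproduce. The following back-of-the-envelope sketch (illustrative, not from the original paper) recovers the average gap Δt between electrons at 1 nA and the corresponding gravitational velocity change gΔt:

```python
# Back-of-the-envelope check of the beam-timing figures quoted in the text.
E_CHARGE = 1.602e-19   # elementary charge [C]
G = 9.81               # gravitational acceleration [m/s^2]

current = 1e-9                      # beam current [A]
dt = E_CHARGE / current             # average gap between electrons [s]
print(f"dt between electrons: {dt * 1e9:.3f} ns")   # ~0.16 ns

dv_gravity = G * dt                 # free-fall velocity gained during dt
print(f"gravity velocity change: {dv_gravity * 1e9:.2f} nm/s")  # ~1.6 nm/s
```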
It should be stressed that the momentum transfers that we have calculated using classical electromagnetic theory must be understood as average values over many incoming electrons, since the actual strength of the interaction is not large enough to guarantee that many photons are exchanged between each electron and a given particle. As in aloof EELS experiments [8], most electrons will not interact with the particles at all, so the present results must be understood from the perfectly valid perspective of a statistical average performed over many beam electrons. The quadratic deviation from these average forces can also play a role (similar to straggling in stopping-power theory), but this subject is left for future consideration.
We have also studied momentum transfer to C₆₀ clusters (Fig. 4). The scattering matrices t^ν_l have been obtained within the discrete-dipole approximation [22,23], where each carbon atom is described by an induced dipole whose polarizability is fitted to reproduce the measured optical response of graphite [7]. Further details concerning the procedure followed to obtain t^ν_l will be given elsewhere [24]. At relatively small interaction distances b, the z component of the momentum is larger than the x component, and the latter is negative. These are effects that can hardly be found in the above examples and that originate in high-order multipoles (actually, l ≤ 5 is needed for convergence within the range of b under consideration). Even at a distance of 9 nm (notice that C₆₀ has a diameter of only 0.7 nm) the change in velocity produced by the passing electron can be substantial. Therefore, the interaction of fast electrons with small clusters can produce dramatic effects if these are not bound by a mechanism stronger than optical trapping. Finally, the passing electron can induce a torque on the particle that changes its angular momentum (ΔL_y) and makes it rotate. This is the effect discussed in the inset of Fig. 3, which shows the change in angular velocity per electron, ΔΩ = ΔL_y/I, where I = (2/5)a²M is the moment of inertia of the homogeneous alumina sphere. Like the electromagnetic force above, the torque is obtained from the integral of Maxwell's stress tensor [12], and the details follow a derivation similar to the one presented in Sec. II. Averaging over the electrons of a 1 nA beam passing at 10 nm from the surface of an alumina sphere of radius a = 20 nm, one finds an angular acceleration of 39 MHz/s. Under these conditions, the linear momentum transferred by the electrons can be absorbed by the trapping light, as discussed above. However, the angular momentum is not absorbed, and the particle will spin with increasing angular velocity until either the centrifugal force breaks it apart or radiation emission at the rotation frequency (vacuum friction) compensates for the electron-induced torque.
In conclusion, we have shown that fast electrons following aloof trajectories (i.e., without direct overlap with the sample) in a transmission electron microscope can exert time-averaged forces on small particles of magnitude similar to the forces associated with trapping in optical tweezers and stretchers. Therefore, this effect can be used for analytical studies of the mechanical properties of such particles, while electron energy-loss spectra can be taken without causing ejection of the particles from their trapping positions.
Hypergraph Unification-based Parsing for Incremental Speech Processing
Parsing word graphs is hard. Even though they provide an extremely compact way of representing recognition results (Aubert and Ney, 1995), they still consist of a large number of edges. The ambiguity on the lowest level (the words of an utterance) can be high, depending on the experimental setting of the speech recognition system. Parsing word graphs produced by an incremental speech recognition system is even harder, since they can be ten times larger than their non-incremental counterparts. However, for highly sophisticated and natural applications, incremental recognition is a sine qua non.
The complexity of incremental word graphs can be partly overcome by converting them to hypergraphs. Here, edges may have several start and end vertices. Following Weber (1995), we call these sets families of edges.
Word graphs and their generalizations as hypergraphs are instances of a chart-like structure (Kay, 1980). Similar to standard charts, hypergraphs constructed by inserting word hypotheses are directed, acyclic, rooted graphs. Thus, the hypergraph resulting from this process can be viewed as the lowest level of representation for chart-based speech processing: it corresponds to the chart that contains only preterminal edges.
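To make the data structure concrete, the following minimal sketch (our illustration, not the MILC implementation; the merge criterion is a simplifying assumption) shows a hypergraph edge whose endpoints are sets of vertices, with word hypotheses either extending an existing family or creating a new edge:

```python
from dataclasses import dataclass, field

@dataclass
class HyperEdge:
    """An edge labeled with a word, whose endpoints are *sets* of vertices
    (a 'family' of ordinary word-graph edges collapsed into one)."""
    label: str
    starts: set = field(default_factory=set)
    ends: set = field(default_factory=set)

class HyperGraph:
    def __init__(self):
        self.edges = []  # lowest chart level: preterminal edges only

    def add_hypothesis(self, word, start, end):
        """Insert a word hypothesis; merge it into an existing family when
        an edge with the same label shares a start or end vertex."""
        for e in self.edges:
            if e.label == word and (start in e.starts or end in e.ends):
                e.starts.add(start)
                e.ends.add(end)
                return e
        e = HyperEdge(word, {start}, {end})
        self.edges.append(e)
        return e

g = HyperGraph()
g.add_hypothesis("hello", 0, 5)
g.add_hypothesis("hello", 0, 6)   # same family, new end vertex
print(g.edges)  # one edge with ends {5, 6} instead of two separate edges
```

The compaction is exactly what makes incremental word graphs tractable: many word hypotheses that differ only in their boundary frames collapse into a single edge family.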
We use a hypergraph as the underlying data structure in the speech translation system MILC (Machine Interpreting by Layered Charts) (Amtrup, 1999). This system was designed as a prototype to demonstrate the feasibility of incremental speech translation and as an experimental platform for research into the interaction between different processing stages. The figure shows an outline of the architecture. MILC uses typed feature structures with appropriateness for the representation of all linguistic structures in the system.
In order to be applicable in a distributed system, we developed an array-based encoding for feature structures (Carpenter and Qu, 1995). This kind of representation allows for an easy exchange of feature structures across machine boundaries.
The structural analysis of hypergraphs constructed incrementally while the parsing processes are already running poses some interesting questions with regard to the consistency of the overall operation of the system. Within such a system, the quality (score) of a piece of information may change over time. This is partly due to further incoming recognition results, and partly due to penalties inflicted upon certain hypotheses by individual parsing processes.
The structural analysis within MILC consists of three components with different purposes and descriptive power. First, idioms are recognized. In our case, idioms are fixed expressions without any inflectional or order variation. This stage performs a fast incremental graph search by intersecting the input hypergraph with a graph describing the known idioms and compounds.
The second component performs an incremental left-to-right parse of an utterance, with an emphasis on the syntactic structure of constituents of limited size (Abney, 1991).
The final integration component is implemented as a bidirectional island parser, capable of extending hypotheses to the left. This is useful, e.g., if the subcategorization frame of a verb becomes known late due to a verb-last position in an utterance. In our experiments, employing a second parser for incorporating complements to the left resulted in a speed-up of ten compared to a strict left-to-right approach. The principal question with this method, however, is how much of the incrementality is lost using islands due to delayed attachment of elements to the left. In practice, this delay involves only a few calls to the fundamental chart parsing rule, mainly to incorporate complements into verb phrases. In theory, the worst case would arise if the island were the rightmost element of every rule. Suppose that a rule has n elements on the right-hand side; then n-1 more elements would have to be incorporated than with the left-to-right approach (the standard approach has to incorporate these as well, but does so before the last element is reached). Thus, at any given point in the chart, the additional effort is bounded by the number of elements on the right-hand side of a rule, which in turn is bounded by the number of words to the left. The combination of three methods for structurally analyzing an input utterance yields reasonably high performance: in our tests, six-fold real time for machine translation of spontaneous speech input.
Psychosocial aspects, body composition and hemodynamics parameters in Scouts
Background: For the activities carried out by boy scouts to achieve the objectives foreseen in their planning, it is important to have concrete data on the physical situation of children and adolescents, such as blood pressure, and to know how the children stand with respect to their degree of stress, anxiety and depression. Objectives: To identify blood pressure, body mass index (BMI), and scores of stress, anxiety and depression in boy scouts from 7 to 17 years of age from the Bernardo Sayão Scout Group of the city of Anápolis in Goiás. Material and method: Children and adolescents from 7 to 17 years of age of both genders were included. BMI data were collected, a stress, anxiety and depression questionnaire was applied, and blood pressure was measured. The normality of the data was verified with the Shapiro-Wilk test. The difference between the three groups was verified by the Kruskal-Wallis test with Dunn's post hoc test. The categorical variables were analyzed by the chi-square test, and the Spearman correlation coefficient verified the relationship between age and the variables systolic blood pressure (SBP), diastolic blood pressure (DBP), BMI, anxiety, depression and stress. Results: The SBP of the participants in the "little wolf" category was lower than that of the scouts (p < 0.001) and seniors (p = 0.004), and the same happened with the DBP (little wolf x scout, p = 0.03; little wolf x senior, p = 0.005). An association was found between BMI and sections: most of the little wolves, scouts and seniors were underweight. Among the children and adolescents there was a significant association with stress: in the "little wolf" category, 53.60% of the children showed very intense stress. Age correlated negatively with anxiety, depression and stress; thus, the younger the children, the greater the degree of anxiety, depression and stress. Conclusion: The little wolf category had better BP results than the other categories, but higher levels of stress, depression and anxiety.
INTRODUCTION
The evolution process that Brazilian society has been experiencing is making the onset of diseases (e.g., cardiovascular disease, depression, stress, anxiety) more evident in all age groups, from childhood to old age (1). Studies show that the emergence of these diseases is closely related to the adopted lifestyle, with emphasis on decreased physical activity combined with inadequate eating habits (2).
There are recurrent studies in the literature showing relationships between two or three factors that contribute to a poor quality of life (3); however, there is little research pointing to the several factors to which young people are exposed that contribute to the adoption of an inadequate lifestyle, reinforcing the development of pathologies, especially hypertension, stress, anxiety and depression (4).
All these pathologies are typical of the modern society in which one lives, where lack of time is an aggravating factor for all, given the hectic life that leads to unhealthy practices such as poor diet and lack of physical activity (5). This becomes one of the major contributors to the development of health problems, with frequent reports of problems related to blood pressure and stress (6,7). Such problems in the circulatory system can begin in childhood or adolescence and progress slowly into adulthood, but in some cases they manifest acutely even in youth (8,9). To combat them, a change in life habits is necessary, seeking to implement actions in daily life that lead the individual, even if involuntarily, to exercise (4).
One way to promote the practice of physical activity is to bring together children and adolescents of the same age group in Scout Groups, in which activities have moderate to high intensity, always taking care to observe the individuality and the state of physical preparation of each of them (10).
The aim of the Scout Movement is to make children, adolescents and young people fully integrated in intellectual, spiritual, social, affective and, above all, physical areas, through activities experienced as games and motivational, competitive and fun songs, with the collaboration of everyone in the planning and execution process, taking into account the maturity of the young people (11).
Scout activities especially seek, through contact with nature, to find challenges that allow young people to discover the balance of the body, to relate socially to others of the same age, to become more creative and to assume protagonism in their own lives, making decisions that lead them to understand the meaning of having a healthy lifestyle (12).
For these activities, carried out by the Boy Scouts, to achieve the objectives foreseen in their planning, it is important to have concrete data on the physical situation of children and adolescents, such as blood pressure and their degree of stress, anxiety and depression: information that the group does not have, and that the present study seeks to provide.
Understanding that these data are important for the adults responsible for the activities, so that they know in what state the participants are and what effects the activities performed at the meetings lead to, this study aims to identify blood pressure and stress, anxiety and depression scores in Scouts from 7 to 17 years old, of both genders, of the Bernardo Sayão Scout Group of the city of Anápolis-GO.
METHODOLOGY
This is a cross-sectional study, supported by a quantitative and descriptive approach. The participants were members of the Bernardo Sayão Scout Group, a sample of 54 children and adolescents from 7 to 17 years of age of both genders, subdivided into groups: 28 children and adolescents in "little wolf" (7 to 10 years), with mean age of 8.82 ± 1.33; 18 in "scout" (11 to 14 years), with mean age of 12.56 ± 0.98; and 8 in "senior" (15 to 17 years), with mean age of 16.00 ± 0.07.
In order to participate in the research, the following inclusion criteria were observed: registration with the Bernardo Sayão Scout Group in the Union of Scouts of Brazil in 2016, attendance at meetings, enrollment for at least 6 months in the group, and agreement of those responsible for the minors with the Free and Informed Consent Form.
During six weekly meetings, the stress questionnaire of the "Saúde em Movimento" website was applied, composed of 23 questions about the state of the individual in the last 2 months. Assessment of anxiety and depression levels was performed with the Portuguese version of the HAD scale (Universidade Estadual Paulista, UNESP) (13), composed of 14 questions scored from 0 to 3, in which the even-numbered questions referred to degrees of anxiety and the odd-numbered questions to depression; the final score from 0 to 21 was classified as unlikely, possible, or probable for acquiring the disorder.
Blood pressure was measured 3 times, with the participant at rest. Volunteers whose mean systolic blood pressure (SBP) or diastolic blood pressure (DBP) values were higher than the 95th percentile were classified as hypertensive, taking into account the variables gender, age and height (14).
In addition, body mass (BM) was measured by means of a Filizola mechanical platform scale, with a maximum capacity of 150 kg and a precision of 0.1 g. The height (m) of each volunteer was measured with a Wiso Compact stadiometer, with a capacity of 220 cm. The collected BM and height data were used to calculate BMI. Both BM and height were measured in duplicate and, if different values were found (i.e., >0.05 kg for BM; >0.1 cm for height), a third measure was carried out and the average of the three measures was considered. Children and adolescents were classified according to BMI/age, as proposed by Cole (15). This study project was approved by the Research Ethics Committee of UniEVANGÉLICA under number 0001.403/2011. Data were expressed as mean, standard deviation, frequencies and percentages. The normality of the data was verified with the Shapiro-Wilk test. The difference between the three groups was verified by the Kruskal-Wallis test, with Dunn's post hoc test. The categorical variables were analyzed by chi-square, and Spearman's correlation coefficient verified the relationship between age and the variables BMI, SBP, DBP, anxiety, depression and stress.
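For readers who wish to reproduce this kind of analysis, the pipeline maps directly onto standard SciPy routines. The sketch below is a generic illustration with synthetic values, not the study's data; the mention of the third-party scikit-posthocs package for Dunn's test is an assumption about tooling.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical SBP values (mmHg) for the three sections
little_wolf = rng.normal(100, 8, 28)
scout = rng.normal(110, 8, 18)
senior = rng.normal(112, 8, 8)

# Normality per group, then the omnibus Kruskal-Wallis test
for name, g in [("little wolf", little_wolf), ("scout", scout), ("senior", senior)]:
    print(name, "Shapiro-Wilk p =", round(stats.shapiro(g).pvalue, 3))
H, p = stats.kruskal(little_wolf, scout, senior)
print(f"Kruskal-Wallis: H = {H:.2f}, p = {p:.4f}")
# Pairwise follow-up, e.g.: import scikit_posthocs as sp; sp.posthoc_dunn(...)

# Spearman correlation of age with an outcome (hypothetical stress scores)
age = rng.integers(7, 18, 54)
stress = 40 - 1.5 * age + rng.normal(0, 5, 54)  # younger -> more stress
rho, p_rho = stats.spearmanr(age, stress)
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.4f}")
```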
RESULTS
The characteristics of the participants are described in Table 1. The SBP of the Little Wolf group was lower than that of the Scout (p < 0.001) and Senior (p = 0.004) groups, and the same occurred with the DBP (Little Wolf x Scout, p = 0.03; Little Wolf x Senior, p = 0.005).
The anxiety scores were lower in the Scout group than in the Little Wolf group (p = 0.02), and for depression the Seniors presented the highest scores (p = 0.002).
An association was found between BMI and the Scout categories (p = 0.04). The majority of Little Wolf, Scout and Senior participants were underweight (Table 2). In the Little Wolf group, 53.60% of the children showed very intense stress. Table 3 shows the correlation between the variables SBP, DBP, BMI, anxiety, depression and stress and the age of the children and adolescents. Age correlated negatively with anxiety (p = 0.03), depression (p = 0.02) and stress (p = 0.01). Thus, the younger the children, the greater the degree of anxiety, depression and stress.
DISCUSSION
The results show that the weekend activities proposed by the Scout Group were positive for the analyzed groups both for blood pressure and for stress, which were adequate for age. By contrast, the study of Ferreira (16) showed that, in a sample of 8,661 students (there are no studies analyzing Scouts), 79.6% had sedentary behavior during the weekend; of these, 50% reported spending more than two hours in front of the TV, 44% spent more than two hours playing electronic games, and 55% spent more than two hours in front of the computer.
The study of Bozza (17) presents data on sedentary behavior associated with hereditary factors, which increase the prevalence of altered blood pressure, influenced mainly by daily habits. These data agree with those obtained in this study, since the children and adolescents evaluated, who participated in varied scouting activities adapted to their respective age groups, presented normal blood pressure for their ages; as children grow and become more active and independent, blood pressure and stress tend to stabilize at a level that is ideal for their age.
Lombardi (18) reported a study using anthropometric variables, eating habits and habitual physical activity, noting that only abdominal circumference was associated with high blood pressure. In the present study, however, the data showed that the children and adolescents evaluated had positive blood pressure results, suggesting that the physical activity practiced, even only on weekends, contributes to the maintenance of normal blood pressure. As observed in the study of Barbosa (19), sedentary habits are already inserted into the daily context of children's and adolescents' lives from the time they enter school, whether due to lack of structure for physical activities, the teaching program adopted by the school, or even the social context. All of this reinforces the need for extra activity, such as that practiced by Scouts, which contributes positively, as observed, to a healthier life and decreases, in the short, medium and long term, the incidence of hypertension and stress, which increasingly tends to appear early in young people and to extend, worsening, into adulthood (20).
As for the stress factor, Lopes (21) identified that, among common disorders, stress is present in children and adolescents, with a progressively higher incidence in older adolescents of both sexes. This information contributes to the positive assessment of the practice of scouting, in which those evaluated showed a stress classification compatible with their age group. Thus, it was observed that stress tends to be more controlled in older adolescents, different from what was observed by the researchers above, showing that appropriate physical activity, performed in a pleasant way, can bring positive results to practitioners.
In view of these results, further studies are needed, since this work is a pioneer, which makes a more in-depth discussion difficult. It is of paramount importance to build on the results developed in this research, given that there are no studies of the peculiarities of physical activities in the health, well-being and educational fields of practitioners of Scouting.
CONCLUSION
In view of the evidence presented, it is possible to conclude that the children and adolescents practicing the scouting activities of the Bernardo Sayão Group, even though they presented a BMI below normal, did not present significant differences between categories/age groups for the other variables studied, which may be a result of the weekly activities performed with these young people.
A relevant finding, for future studies to explore, was the fact that older youngsters presented lower levels of stress, depression and anxiety than younger ones. A possible triggering factor for this finding is the influence of the activities carried out by the Scouts, since, as these youngsters get more contact with Scouting, the stress, depression and anxiety indices decrease. This is a finding of great relevance for developing studies with practitioners of scouting, which can generate new parameters related to the physical, mental and educational health of this modality practiced for so many years in the country.
AUTHOR'S CONTRIBUTION
PEMV: study design, interpretation of results, writing of the manuscript. MHF: data collection and interpretation of results, writing of the manuscript. VS: analysis of data and interpretation of results, writing of the manuscript. IOS, DAAPO, WAL, HLR: writing of the manuscript. GPT: review of all manuscripts, study design, writing of the manuscript.
CONFLICTS OF INTEREST
The author(s) declare that they have no competing interests.
Digital Transformation in Marketing Mobile Phones Industry: Evaluating Marketers Strategies for Cross-Cultural Advertisements
In the coming years, media science will grow and develop even more rapidly than today in almost every field, whether communication, marketing, advertising, finance or the arts. These developments and advancements in technology make things easier, cheaper and more effective for people. At present, the marketing and advertising of products, goods and services has become a necessity in everybody's daily life, whether for the customer, producer or trader, but in the near future the standard way of advertising products will be transformed from visual communication to digital visual communication. This digital visual communication will have a great impact on marketing and advertising and will bring many benefits for marketers and advertisers. Nowadays, the key elements used in visual communication for advertising products are line, color, shape, image, typography, space, size and scale. The latest trends in visual communication combine images and text. Since images are more powerful communicators than text, today's advertisement designs are composed of more images and short text. The evolution of digital visual culture has all the aspects of visual culture but with higher quality and an improved level of communication. With the help of improved visual content quality and short texts, advertisers gain more publicity, attraction and acceptance of a wider audience for product advertisements. Currently, the main problems that marketers and advertisers face in the communication process of international marketing and advertising are cross-cultural effects such as language, literacy, religion and the arts. This research was conducted to analyze the effect of digital culture on visual communication through qualitative content analysis. The question that arises in almost everyone's mind is: can digital visual culture overcome cross-cultural effects on the marketing and advertising of products?
Since the year 2000, many changes and advancements have occurred in technology and in graphic design trends, and many are still in progress. These advancements make work easier, more efficient and more effective in terms of quality and quantity on professional grounds. The most common software that designers use today for graphic design includes Adobe Illustrator (1988), Photoshop (1988), Adobe InDesign (1994) and CorelDRAW (1989), with Adobe Dreamweaver (1997) for web design, while Adobe Premiere Pro (1991), After Effects (1994) and Adobe Flash Professional (1996) are used for video ads. Later, many versions of these programs were developed, adding new features to the existing ones. In 2002, Adobe Creative Suite was developed by Adobe Inc., bundling almost all of the above-mentioned programs and some others into one package, except CorelDRAW, as it belongs to another company. Inkscape is also used for commercial ads by expert graphic designers. With the help of this software, advertisers make different designs of banner ads, posters, logos, flyers and stationery items to promote their products and expand their business at the international level (Wikipedia).
The history of the advertising industry dates back to the 19th century. The main purpose of advertising is to share information about the goods, products and services of different marketing agencies through various media. The most common media used for advertising are print and broadcast. Print media (1906) includes magazines, newspapers, posters, flyers and brochures, while broadcast advertising includes the internet (banner ads, social network marketing and rich media ads, which date back to 1994), television (commercial ads in the form of videos) and radio (voice ads). Print media started with newspaper publishing. The first print ad was the "Kodak story"; in 1917 the tobacco fund ad was published, and the first film poster was published in 1938. Nowadays, advertising of products is done not only at the national level but also at the international level. International advertising is a strategy for developing communication and business between different countries. It is the best way to reach a large number of customers and a wide market audience. In today's fast and competitive world, the advertising of products, goods and services has become a necessity in everybody's daily life, whether for the customer, producer or trader. It helps marketers to reach customers and expand their business across borders.
INTRODUCTION
Visual communication is an essential component of any nation. Its main purpose is to communicate and promote products, business, culture, art, media and many other things through different visuals, both at the national and the international level. Here the term visual can be defined as pictures, texts, colors, posters or a piece of film that shows and explains anything we can see in the world around us; it depends entirely on our eyesight (Claudia, 2017). Nowadays, visual communication is used for five different types of advertisement: print, broadcast, product/brand integration, outdoor and digital advertising (Neil, 2018). After the invention of the computer, many major developments and advancements of new technologies and strategies occurred in the field of science, which not only brought people closer but also connected them globally, especially after the arrival of the internet (Jose, 2016). These advancements in technology made the communication process between different countries easy and allowed people to stay connected with each other. The process of communicating information about goods and services between multiple countries is not simple, because it occurs between different nations, and every nation has its own culture and values. Thus, it runs into many cultural difficulties, such as language, literacy, religion and art, in the communication process. This communication process occurs in three stages: input message, processed message and delivered output (Hanisha, 2017), and at each stage barriers might affect the transmission of the message, which ultimately results in miscommunication. At the first stage there is a risk that the message is not carefully observed and selected. At the second stage, encoding the message, care needs to be taken with translation and language so that it is understandable by the target audience. At the third stage, the media channel through which the message is delivered to the target audience must be carefully selected (Dyer, 2008). An attractive and effective advertisement is based on the elements (line, shape, color, space, size, scale, pattern and texture) and principles (balance, proportion, harmony, color interaction and rhythm) of design. Every single design is made for the purpose of communication, so it carries some kind of message and information, whether the design is good or bad (Graphic, 2014). The latest trends in visual communication combine images and text. Since images are more powerful communicators than text, today's advertisement designs are composed of more images and short text (Claudia, 2017). In this paper, the researcher analyzes product advertising at the international level through digital media to see whether digital transformation in visual communication can overcome cross-cultural effects on product advertising. With the passage of time, our world has been globally transforming towards digitization, especially in the media and communication industry. The evolution of digital visual culture has all the aspects of visual culture but with higher quality and an improved level of communication. After the evolution of the internet and social media, users are able to interact with others, which also helps to develop communication between advertisers and their specific audiences (Kevin, Sarah, & Christopher, 2011).
Digital advertising helps advertisers communicate with their consumers online by taking their feedback on advertisements of products, goods and services with the help of social media (Warren, 2018). This kind of conversation between advertisers and consumers makes advertising a two-way communication, which is beneficial for both parties and also helps to overcome many cross-cultural issues that occur in visual communication advertisements. Before the digital transformation, advertising was a one-way communication process that went directly from advertisers to the target audience, and the audience had nothing to do with the advertisements; after the digital conversion, the audience is able to give feedback and remarks on advertisements, which helps advertisers improve their ads according to audience needs and desires (Kevin, Sarah, & Christopher, 2011). In this modern era, there are many social media platforms that are used for advertising and business expansion, such as Facebook, Twitter, Instagram, LinkedIn, Snapchat, Pinterest, Tumblr, Spotify and Foursquare (Dijk, 2013).
LITERATURE REVIEW
Visual communication has been a part of our world since the beginning. The history of design starts in the late 19th century, and its main purpose is communication. It is a type of non-verbal communication in which we share information and ideas to communicate and promote products, goods, services, business, culture, art, media and many other things through different visuals at the national and international level. Thus, we can say that it is an essential component of every nation. Here the term visual can be defined as pictures, charts, maps, texts, colors, posters or a piece of film that shows and explains anything we can see in the world around us; it depends entirely on our eyesight (Claudia, 2017). Nowadays, design is used in different user-experience fields such as interaction design, visual design, motion graphics, architecture, marketing, industrial prototype design and even sound design. Hence, the definition and role of design are framed differently in every field by different scholars. According to Issey Miyake (fashion designer), "design is not for philosophy, it's for life"; the economics scholar Herbert A. Simon said, "To design is to devise courses of action aimed at changing existing situations into preferred ones"; the architect Eero Saarinen said, "Always design a thing by considering it in a larger context"; and according to the International Council of Societies of Industrial Design, "Design is a creative activity whose aim is to establish the multi-faceted qualities of objects, processes, services, and their systems in whole life cycles." In visual communication, design plays an important role and is defined as the set of tools and creative processes that combines art and technology to communicate ideas and to express the aesthetics and functions of products through different visuals (Reo, 2012). Every single design is made for the purpose of communication, so it carries some kind of message and information, whether the design is good or bad (Graphic, 2014). To analyze the attraction of consumers towards ad design, we should first understand the importance of design and its elements, as these elements make a design good or bad in the eyes of consumers (Daniel T, 2016). According to the research of Ashish, Ambudi and Brijesh, 75% of the information processing in our brain is basically visual.
ELEMENTS AND PRINCIPLES OF VISUAL COMMUNICATION AND ITS ADVANTAGES
The history of design starts in the late 19th century. The elements and principles of design play an important role in an effective and attractive advertisement design. The key elements used in visual communication for advertising products are line, color, shape, image, typography, space, size and scale. Every element of design has its own identity and characteristics. A line by its nature has only one dimension, but combinations of lines make shapes and create motion, space, perspective and texture. Lines can be vertical, horizontal or diagonal (Hawaii.edu). Color is another very important element of design, as it sets the mood and narrates what the image or design says. It is made up from the spectrum of light, and every color has its own meaning and conveys some kind of mood (Graphic, 2014). Shapes are made up of combinations of lines. They can be geometric or abstract in nature; geometric figures include the square, rectangle, triangle, pentagon, circle, oval, etc. Shapes also create movement in a picture and describe its mood, and they give rise to a further element, form, which refers to a 3D effect (Johny, 2010). Size generally refers to the length, width and depth of an object, whereas scale refers to the comparative relation of the sizes of two or more objects. The correct use of the size and scale of objects makes a design interesting and meaningful, brings variety, gives it a fresh look and opens new ways to design. Type refers to text style and size; in graphic design it is known as typography. The basic properties of text style include font style or font name, font size and font color, and it is also used for visual communication (Graphic, 2014). Every design stands on the design elements and design principles. A good design depends on putting together the right combination of elements and principles of design. If you use the right combination of design elements but do not follow the principles of design correctly, this will affect your design and result in an unclear message or an inappropriate design; and if you use the principles of design correctly but the elements are not properly used, the result will be equally unclear, because elements and principles work side by side and carry equal importance. So it is necessary to take good care of both elements and principles while making a design. In advertisement design, images and pictures are easier to understand and take less time for the brain to process than words or text, so images are more powerful communicators than text or verbal language. Verbal processing takes more time and is more difficult for the brain than visual processing, because the processing of words and text is confined to the language centers of the brain (Ashish, Ambudi, & Brijesh, 2012). Visual advertisements are therefore more reliable and prominent, because our mind grasps them easily and they are less limited by cultural factors, but one should always keep in mind the target audience's culture, tastes and viewpoints while making an advertisement design (Mydee, 2018).
CONSUMERS' ATTITUDES AND PERCEPTIONS TOWARDS ADVERTISEMENT DESIGN
Consumers' attitudes and perceptions towards ad design play an important role in a successful advertisement campaign, as they shape the consumers' response and buying behavior (Juan & Ruiz, 2008). Previous researchers describe two main levels of consumer response to advertisement design: cognitive and emotional. The cognitive response falls into three categories: aesthetic impression, semantic interpretation, and symbolic association. Aesthetic impression, the first level of consumer response, covers anything with the ability to attract consumers through its sensory properties; here "aesthetic" refers to the beauty, appeal and attractiveness of the product, and aesthetic products engage the senses through their appearance, feel, touch, taste and smell (Daniel T, 2016). The second level, semantic interpretation, is the interpretation of the product's functions, mode of use and qualities such as performance, efficiency and ergonomics. The third level, symbolic association, refers to the gap between the product's appearance and the consumer's experience of the product (Fenko & Galetzka, 2015). The emotional response of consumers is a psychological reaction, feelings, mood and emotions, to the product's appearance, and falls into five groups: social, interest, instrumental, aesthetic and surprise (Juan & Ruiz, 2008). In this digital era, consumers' attitudes and responses towards advertisement design are mainly shaped by visual elements, brand reputation and culture. Brand reputation plays an important role in advertising because it is part of the marketing strategy for promoting a product or brand, and social media plays a key role in gaining success, popularity and a larger customer audience; it also boosts marketers' sales and increases the purchase intention of the target audience (Faheemullah & Usman, 2017). According to Andrew Stephen's research, approximately one third of global advertising is done on digital platforms. People are moving towards digital and social media and exposing themselves to it more and more to gather information and build new relationships, which opens new paths to success and helps businesses become popular in the market, expand, and increase their interaction with large numbers of people. With the help of social media, customers now assess the market value and reputation of a brand through likes, tweets, comments, reviews and feedback before purchasing any product (Andrew, 2016). Advertisers also use visual content with aesthetic appeal in their ads to attract consumers' attention (Eiman & Passant, 2015). Culture also plays an important role in advertising, as it teaches consumers what and how to perceive a visual message when exposed to an image or advertisement design (Juan & Ruiz, 2008). Culture can be defined as the language, habits, arts, music, cuisine and religion of a specific group of people (Sukesh, 2011). Every nation has its own culture, so advertising should follow each country's regulations and culture.
Here we examine the cultures of four countries: the United States, Pakistan, India and China.
CROSS CULTURAL DIFFERENCES IN INTERNATIONAL ADVERTISING
There are seven main types of advertising (brand advertising, commerce and retail advertising, corporate advertising, advertising with feedback, political advertising, business advertising and social advertising) and four main functions of advertising (economic, social, marketing and communication). In this study the researcher analyzes only brand advertising on Facebook with respect to the marketing and communication functions. The process of communicating information, goods and services between multiple countries is commonly known as international advertising, but it is neither easy nor simple, as it must overcome many barriers such as language, literacy and religion in the communication process. This communication process occurs in three stages: input message, processed message and delivered output, and at each stage barriers may affect the transmission of the message, ultimately resulting in miscommunication. At the first stage the producer formulates the message or information, and care is needed in observing and selecting it. At the second stage, encoding, the message must be checked against the requirements of people of different cultures, with attention paid to translation and language so that the target audience can understand it. At the third stage, the media channel through which the message is delivered to the target audience must be carefully selected. International advertising has many advantages, such as business expansion, growing power and wealth, growing worldwide acceptance of a brand, and attracting more customers, but these benefits are realized only when an ad meets the cultural requirements of different countries. Cultural factors therefore play an important role in international advertising, both as a communication process and as a business practice (Susan & Samneul, 2003). Cultural differences are the most common issue in international advertising. In this paper we examine the advertising cultures of China, Pakistan, India and the United States (Sukesh, 2011), which differ greatly from one another in values, thought and language. China and Pakistan follow an eastern culture in advertising, India follows a mixture of eastern and western culture, and the United States follows a western culture (UK Essays, 2018). Eastern peoples belong to Asia and the Middle East, whereas western peoples belong to Europe and North America. The main reason for these differences in advertising culture lies in the values people learn at home and in society. Eastern cultural values are inflexible in tradition and the people are conservative in nature: arranged marriages are common, the family support system rests on elders, decisions are taken by them, and children obey those decisions. Western cultural values are more liberal in nature, traditions, feelings and thought: individuals are free to take their own decisions, and love marriages are common. For example, when people in the West are angry, their anger shows on their faces, whereas people in the East hide their anger to show good manners (Michael, 2018).
DIGITAL TRANSFORMATION IN VISUAL COMMUNICATION
Visual communication has been developing since the beginning, and much of that development is still in progress (Ubani, 2016). After the year 2000, digital media arrived and started developing in almost every field, and visual communication has likewise been globally transformed towards digitization (Edward & Bobby, 2018). This digital visual communication has a great impact on marketing and advertising and offers many benefits to marketers and advertisers. Since the origin of the internet and social media, many studies have examined the role and effects of digital transformation on visual communication. Nowadays digital advertising mostly refers to social media advertising: social media is used as a tool and business strategy to endorse and expand businesses, reach larger audiences and keep consumers engaged. Many social media platforms are used today for product advertising and business expansion, such as Facebook, Twitter, Instagram, LinkedIn, Snapchat, Pinterest, Tumblr, Spotify and Foursquare (Korey C, 2015). In this paper we discuss only Facebook advertising and how advertisers gain popularity and consumer attention across cultures through their ads. Facebook advertising began in 2007 and has been developing continuously since then with new tools and advancing technologies. The annual report for 2018 shows that 90% of businesses use Facebook for expansion and product promotion through advertisements, a much higher rate than other social media platforms (Matt, 2018). The main purpose of this digital advertising is to create brand awareness and build engagement between consumers and advertisers on a low budget for a wider audience (Dokyun, Kartik, & Harikesh, 2018). Facebook offers advertisers two basic campaign types for advertising products, goods and services: cold traffic campaigns and remarketing campaigns. After the evolution of mobile phone technology and app distribution, Facebook advertising shifted towards mobile phones in 2009 and gained popularity and strength in the years 2010-2013; in 2010, Facebook already had over 517 million users (Kevin, Sarah, & Christopher, 2011). Previous studies show that after 2012 Facebook advertising became easier, more efficient and more user friendly for advertisers, and after 2016, 84% of total ad profits came through advanced mobile phone technology (Bamboo, 2017). The shift of Facebook ads to mobile applications helps advertisers personalize their ads, share more information and content about their products, goods and services with customers, and tailor different ads to their regular customers' tastes, requirements and values. This is a new advertisement campaign strategy that allows advertisers to engage and target cross-cultural audiences (Matt, 2014). Previous studies show that most customers want personalized ads. Personalized ads can be defined as a marketing strategy in which advertisers create messages and content for individual or specific customers by focusing on individual needs and desires through cookies, beacons and customer feedback. According to research conducted in 2017-2018, 80% of people prefer personalized advertisements (Molly, 2017).
Because these ads are more relevant to the customer's choices, tastes and values, people are more attracted to them, and through this personalization advertisers gain more consumer engagement than with non-personalized ads. Personalization also increases the likelihood of a favorable attitude and a purchase response from customers (Kumar & Shaphali, 2016).
RESEARCH QUESTION
Q: Can digital visual communication succeed in overcoming cross-cultural effects on product advertising?
THEORETICAL FRAMEWORK
The theory selected for this research is information processing theory, developed by George A. Miller in 1956. Its basic idea is that information processing works in three stages: at the first stage, informational data is gathered from the environment as input; at the second stage, the brain stores and processes the data; at the third stage, the stored and processed information is delivered as output (David L, 2015). The processing phase is the crucial part of the theory, as it relates to the brain's cognitive processes, which depend on many factors such as language, perception, values, knowledge, thinking, judging, imagination, remembering, recognition, visual perception and storage (Goodwin, 2005).
This theory is applied here to test how it works in the visual perception of advertisement design. Visual perception is the result of complex interactions between an external visual stimulus and prior knowledge, goals, and expectations, and understanding how we all perceive things visually helps designers communicate better. Visual perception has two sides: external and internal. External perception is what we see in the world through our eyes; internal perception is how we see the world, and this "how" depends on many aspects such as our knowledge, mood, cultural values and education. After an advertisement design is published to the target audience, its success depends on how much attention and attraction it gains, and that in turn depends on how the audience perceives it, according to their depth of processing, schemas and mental models. Depth of processing depends on how deeply we look at the ad: if we only see the physical appearance of the design and pay no attention to its meaning, we do not store the information and cannot recall it later. For example, focusing on the shapes and colors of a graphic is not as effective as studying the graphic and taking in its explanation; we want to connect meaning to the aesthetic. Schemas are mental frameworks of information, abstract representations of what we know about the world that become the context or frame for interpreting new information, whereas mental models are broader conceptualizations of how things work that explain cause and effect. Designers should consider the schemas and mental models of their audience. While we cannot know an audience's exact schemas and mental models in advance, we can be aware of the audience's cognitive characteristics, which give us clues: consider the developmental level of the audience, create designs with fewer distractions, bring visual literacy into the design by using symbols, try to reflect the audience's goals in the design, think of the differences in how colors are interpreted across cultures, and improve the readability of the design, as this will help attract the audience's attention to the ad (Steven, 2011).
METHOD
The researcher adopted a qualitative content analysis of advertisements from three mobile phone companies (Huawei, Vivo and Apple's iPhone) in four different countries (Pakistan, China, India and the USA) to analyze the effectiveness of digital transformation in marketing mobile phone advertisements. For this purpose, the researcher took four advertisement campaigns, compared each campaign's ads between two different countries for the same phone model, and analyzed the impact of two design elements in them: picture and text. The first is the Huawei Nova 3 advertisement campaign, whose content analysis between China and Pakistan is shown below.
Huawei Nova 3
Picture: The picture has great importance in advertisement design, as most consumers are attracted to an ad chiefly because of its image, and in many advertisements the picture also carries the cultural aspect. In the two advertisements above, the picture is the center of attraction for the consumer's eye and also represents the culture of each country: the Huawei Nova 3 ad for China features the popular Chinese singer Lay Zhang, while the Huawei Nova 3 ad for Pakistan features the popular Pakistani singer Aima Baig.
Text: Text also has great importance in advertisement design because it describes the product's name, specifications and market price, so it should be readable and understandable by the target audience. In the two advertisements above, the text of the left-hand advertisement is mostly in Chinese or English, which is easily understood by Chinese people but not easily understood by Pakistani people, whereas the text of the right-hand advertisement is in international English, which Pakistani people understand easily.
Vivo V9
Picture: In the two advertisements above, the picture carries the same importance and impact on the consumer's eye as in the Huawei Nova 3 campaign. The only difference is that the right-hand picture signals that the ad is for an Indian audience, as the Indian film star Amir Khan presents the phone, whereas the left-hand picture signals that the ad is for a Chinese audience, as the Chinese actor Lay Zhang presents it.
Text:
Here the text is mostly in English, which is easily understood by both countries' audiences, as English is recognized internationally. Another notable point is the placement of the text and the Vivo logo: in the Indian advertisement they are placed on the right side of the picture, whereas in the Chinese advertisement they are placed on the left side.

iPhone X

Picture: In the two advertisements above, the pictures used are almost the same, showing only the phone, but the text differs.
Text:
Here the text of both advertisements is in the same language (English), but the price and the channel showing the ad differ: in the left-hand advertisement, the Pakistani mobile network company Telenor shows the iPhone X ad with the price in Pakistani currency, Rs 126,159, whereas in the right-hand advertisement, the US company MSARTECH shows the ad with the price in US currency, $999.
Huawei P30
Picture: Here the pictures used in the ads are almost the same, showing only the Huawei P30, but the text and its language differ from one another.
Text:
Here the text of the left-hand advertisement is mostly in Chinese or English, which is easily understood by Chinese people but not easily understood by Pakistani people, whereas the text of the right-hand advertisement is in international English, which Pakistani people understand easily.
RESULTS
The above content analysis of international mobile phone advertisements in different countries shows that, in this era of digital transformation and globalization, visual communication has become more powerful than before and has largely succeeded in overcoming cross-cultural obstacles such as language, text and picture quality, although other cross-cultural issues such as literacy and religion remain even in the digital era. We observe that, to overcome the language barrier in cross-cultural advertisements, advertisers adopt two strategies in the text area:
1. Use of international English in the text.
2. Use of the national language of the country being targeted in the text; for example, if the target audience is Chinese, advertisers write the ad's text in Chinese so that the target audience understands it easily.
In the picture area, meanwhile, advertisers feature national celebrities in ads to represent the nation and promote the products to the audience.
CONCLUSION
From the above research, the researcher concludes that before the digital era, international advertising faced many difficulties in the communication process, such as language, literacy and religion. The digital era, arriving around the year 2000, helped overcome many of these problems. Nowadays Facebook is the tool most commonly used for digital advertising by marketers and advertisers, and it has overcome many cross-cultural issues in visual communication, such as the language problem. Marketers now adopt different strategies to create awareness, promote products and boost sales in different countries around the world, such as making a separate Facebook page for each country's audience and uploading ads tailored to that audience's tastes and views. For example, to promote a product in Pakistan and engage a Pakistani audience, they create a separate Facebook page for Pakistani users and upload ads made according to the culture, needs and desires of that audience, so that the ads are easily understood by Pakistani people. Another common strategy among marketers and advertisers promoting a product internationally is the use of international English or the national language in the ad's text; for example, if the target audience is Chinese, advertisers write the text in Chinese so the audience understands it easily, while in the choice of picture they feature national celebrities to represent the nation and promote the products to the audience.
The effect of situated learning environment in enhancing mathematical reasoning and proof among tenth grade students
This study investigated the effect of a situated learning environment in enhancing mathematical reasoning and proof among tenth grade students. To achieve the aim of the study, a pre-post test of mathematical reasoning and proof was used, and its validity and reliability were verified. The sample of the study consisted of 50 female students who were distributed equally into two groups: the experimental group, which was taught the circle unit in geometry in a situated learning environment, and the control group, which was taught by the traditional method. The results indicate a significant improvement in the students' ability of reasoning and proof in favor of the experimental group. Moreover, a diversity of mathematical reasoning and proof strategies were used by the experimental group.
INTRODUCTION
Traditional teaching methods that depend on indoctrination and memorization, without understanding or application in life, are among the causes of difficulties in learning mathematics and of students' weak ability to develop ways of thinking. Students' performance levels drop because of their lack of understanding and their negative view of mathematics, which stems from its abstract teaching without real application or daily life situations.
Numerous studies (Kumar, 2021; Li & Schoenfeld, 2019; Malik & Rizvi, 2018; Melinda et al., 2019; Peng et al., 2022; Wenger, 1998) indicate that educational institutions treat learning as an individual process separate from the students' implementation of activities, which leads to passively received knowledge and a focus on abstract information outside life contexts; to go beyond this, it is better to situate learning in the context of real experience. Zeitoun (2008) showed that many of the learning theories that emerged in response to this deviation from life contexts are concerned with learning related to real life, the most prominent of which is situated learning. In this context, Voskoglou (2019) indicates that situated learning is a theory developed by Lave and Wenger (1991) based on the views of scholars such as Vygotsky and Dewey, who believed that students tend to learn through participation in activities. Moreover, Donaldson et al. (2020) argued that "situated learning pedagogy include field-based instruction and students within a community of practice, and it emphasizes legitimate peripheral participation in this community" (p. 722).
Learning is defined by various educationalists (Kumar, 2021; Lave & Wenger, 1991; Xin et al., 2021) as the process of assimilation of knowledge by the learner through discovery or interaction with experienced people; it also means the individual's ability to participate in new activities, perform new tasks, and master new concepts. From the viewpoint of supporters of situated learning theory, learning takes place through an authentic context of life situations that depend on interaction with others (Illeris, 2009; Klein & Leikin, 2020; Koskinen & Pitkaniemi, 2022; Leonardo et al., 2021; Li & Schoenfeld, 2019; Xie & Cai, 2020). Mousley (2003) claims that the situation is presented in the form of an activity practiced by students, in which they interact in an experimental or real practical way within a social environment, acquiring knowledge through this interaction.
Mathematical knowledge acquired in the context of school mathematics is usable and applicable outside of school in real-world contexts. In this regard, Persky and Robinson (2017) and Watson (1998) claim that students learn when they participate in activities using the tools and language of the situation, and when they move from the novice level to the expert level and from a secondary role to the main role in the work. Ling and Choo (2005) confirm that interaction arises between students through their ideas and discussions during the performance of the activity. Ozudogru and Ozudogru (2017) indicate that situated learning activities must be collaborative in order to support knowledge building, higher-order thinking skills and multiple solutions, as well as to direct students to identify their problems and express their knowledge, provided they are given enough time.
The teacher's role in the situated learning environment is to design and prepare the activities and their tools, and to act as a facilitator of learning and a member of the learning community (Besar, 2018; Kumar, 2021; Kurniawan et al., 2020; Polizzi et al., 2021; Zeitoun, 2008). It is also assumed that if the learner acquires knowledge using material in a specific context, it will be easier to retrieve that knowledge in a similar context, but the learner may not be able to retrieve it in different contexts, or may not retrieve it at all (Alzahrani, 2022; Anderson et al., 1996; Assefa & Eshetu, 2019; Csíkos & Hidayatullah, 2022).
In light of the above, situated learning takes place through a real-life situation within the culture of the community and the needs of the students, where the students and the teacher form a community consisting of themselves and the place where the experience or activity representing the situation is applied practically, with all the necessary tools provided. During the implementation of the activity or experiment, knowledge and experiences are exchanged, which makes the students more knowledgeable through discussion, expression and the clarification of different ideas and opinions, and through carrying out the activity and applying it in real settings in order to acquire the required knowledge and apply it in similar contexts.
Wenger (1998) mentions that the theory of situated learning consists of the following components, illustrated in Figure 1:
1. Meaning: Learning occurs through activity or experience.
2. Practice: This indicates that learning takes place through mutual participation in action and application of knowledge in a social environment.
3. Community: This indicates that learning occurs through the social environment of a situation and includes place, tools, people, and social interaction.
4. Identity: This underlines that learning takes place through changing thinking and knowledge in the context of society.
The assumptions of situated social learning theory include the following: learning depends on everyday situations; knowledge is acquired through situations and is transferred to similar situations; learning is the result of a social process that includes ways of thinking, understanding, problem solving, interaction, and conceptual and procedural knowledge; and learning is inseparable from the real world, taking place in social environments that consist of individuals and the situation (Northern Illinois University Center for Innovative Teaching and Learning, 2012; Stein, 1998). Herrington and Oliver (1995) summarize the characteristics of a situated learning environment as: authentic contexts, authentic activities, access to expert performance, multiple roles and perspectives, collaborative work, reflection, articulation, teacher coaching and scaffolding, and assessment.
Contribution to the literature
• This study makes a notable contribution to the development of mathematics teaching and learning, as it is the first study to investigate the effect of situated learning on students' mathematical performance in Jordan.
• This study shows educational researchers the role of situated learning in improving reasoning and proof.
• The current study shows the role of the teacher in developing mathematics learning environments using situated learning.

Figure 1. Components of a social theory of learning (Wenger, 1998, p. 5)

From the above, the role of learning through situations in developing students' skills in mathematics is clear: through group work and discussion, students develop their skills in interpretation and giving evidence, which may lead to the development of their skills in mathematical reasoning and proof. In this context, Kumar (2021) claims that "abstract concepts such as mathematics can be more easily mastered when taught in real context than in formal context of the classroom, and classroom situated learning is implicitly based on school culture" (p. 2).
Mathematical reasoning is a process performed to obtain a conclusion based on mathematical premises and relevant facts and sources assumed to be true (Besar, 2018; Hasanah et al., 2019; Kadioglu et al., 2020). The Principles and Standards for School Mathematics curriculum document, issued by the National Council of Teachers of Mathematics (NCTM, 2000), states that reasoning is essential to understanding mathematics, because it leads to the development and discovery of ideas, the interpretation of results, and the use of conjecture. Mathematical reasoning is therefore an essential part of students' mathematical experience, and students need to work with each other to formulate and explore their conjectures and to listen for and understand different interpretations, giving them the opportunity to discuss and to modify or strengthen their arguments and ideas. Stefanowicz (2014) claims that a proof, as a high level of reasoning, is a series of logical statements, one referring to the other, which gives an explanation of why a particular statement is true.
The process standards in the Principles and Standards for School Mathematics curriculum document (NCTM, 2000) include the standard of mathematical reasoning and proof, which emphasizes a set of sub-standards: inferring connections between mathematical ideas and giving explanations for them; testing the validity of mathematical statements and thus accepting or rejecting them; building mathematical arguments and judging their validity, which includes developing proofs and evaluating them; using different methods of reasoning, including mathematical proof; and discovering patterns and generalizing through inductive reasoning. Through these sub-standards, the skills of analysis, interpretation and reflection are employed to understand and acquire knowledge at the primary level and are developed further on reaching the secondary level.
Based on the foregoing, involvement in situated learning, whether inside or outside the classroom, may provide an environment for developing the requirements of the reasoning and proof standard, as the situated learning environment stimulates thinking and elicits convincing reasoning for solutions.
LITERATURE REVIEW
Previous studies examined situated learning and its impact on many variables in different subject areas, including mathematics, such as the acquisition of knowledge and the transfer of its effect to other situations. The study of Cabral et al. (1997) aimed to reveal the effect of situated learning in promoting effective learning: 59 male and female students were involved in planned activities within the contexts of civil engineering, and at the end of the semester they took a test, with observation and a questionnaire also used to collect data. The results showed that the activities planned to place the students in the context of civil engineering had a positive impact on their awareness of the importance of the acquired knowledge for their future profession, and there was a significant improvement in the students' ability to transfer the acquired knowledge to other situations.
Boaler (1999) conducted a longitudinal study to reveal the benefits of situated learning for students' learning of mathematics, whether in their conceptual understanding, procedural fluency, or attitudes towards mathematics. The study was conducted on students aged 13 to 16 enrolled in two schools: in one, students were taught by the traditional method, while in the second, students were given complete freedom to create their own educational environment, to practice activities linking learning with the real world, and to treat math classes as a community for practicing what had been learned. The results showed that the students who experienced situated learning were able to solve most of the conceptual and procedural questions in the national test and considered mathematics a science related to life situations. The students taught by the traditional method made many conceptual errors, were unable to solve many questions, and considered that mathematics did not extend beyond the content of the book and applied only in the classroom.
In the context of technology-assisted situated learning, Ling and Choo (2005) designed a simulation-based learning environment in which students were trained to install and maintain electrical wiring for buildings and industrial facilities according to the code of practice for electrical installation, in addition to the use of the CP5 software, which provides interactive activities and demonstrates real-world applications of a real task that the trainee will encounter in the actual field of work. The results of the study showed that the students interacted with this application and that the knowledge they gained in the situation-based environment transferred to real application in life.
Moreover, in the context of robotics, Wang et al. (2021) conducted a study aimed at empowering undergraduate and graduate computing students in robotics through situated learning. A multi-modal collaborative robot was employed in a classroom-based learning community for one semester, and the proposed situated learning pedagogy was effective in learning robotics.
Since the teacher is the individual who represents the expert in the community of practice and in the situated learning environment, alongside the students who together make up the members of this community, Hodgen (2007) showed that when mathematics teachers activated their knowledge based on situations and practices in the classroom, maintained good relationships with students, and worked collaboratively, whether in the classroom with students or with other teachers through the social environment and the exchange of learning resources, students' ability to acquire knowledge and apply it in real contexts increased.
In the same context, Malcolm (2010) showed that communities of practice formed through situated learning within the school context improved students' assimilation and acquisition of knowledge, leading to the transfer of skills and knowledge from school to daily life and, in the future, to the labor market. The study of Widjaja (2013), through the application of four mathematics problems with life contexts in the fourth, fifth, and sixth grades and the activation of discussions involving teachers and students, with different explanations and reasoning, showed that learning through life contexts leads to meaningful learning when students take an active part in the discussion by asking questions, explaining and giving reasoning.
Regarding the role of learning based on life contexts and teamwork in enriching secondary school students' engagement in learning mathematics, Brown and Redmond (2017) showed that learning through life contexts develops students' understanding of different mathematical concepts, links daily life with mathematics, and enhances students' participation in mathematics.
In the same context, the study of Reyes et al. (2019) revealed that teachers' use of life contexts in their classroom activities within the community of practice while teaching geometry to seventh and ninth grade students achieved a better understanding of geometry concepts. In the context of developing reasoning in geometry, Đokić (2015) conducted a quasi-experimental study that used life contexts within collaborative groups; the results, from testing and classroom observation, showed the development of geometric reasoning among students. Likewise, regarding the use of life contexts and the community of practice within the classroom, this pedagogy was found to enhance students' ability in mathematical reasoning in general and geometric reasoning in particular (Ginting et al., 2018; Habsah, 2017; Kennedy & Dunn, 2018; Koskinen & Pitkaniemi, 2022; Mohamed et al., 2020).
Investigating the effect of situated learning on students' mathematical performance, Rocha (2020) designed a game based on the principles of situated learning; an empirical evaluation of the game was performed through a five-week experiment, and the results revealed a significant improvement in students' mathematics performance. Meanwhile, on creating situated mathematics tasks, Yaro et al. (2020) conducted a study to illustrate for math teachers the possibility of creating math tasks using the environmental, cultural and societal issues of a particular local community.
In the context of globalization, Chang (2021) conducted a study-abroad investigation in different fields, including education, of how foreign-site programs function as learning contexts. Through interviews with 12 participants, the qualitative analysis of the data showed that foreign sites were effective in enhancing students' learning.
Reviewing previous studies shows that few have dealt with the impact of situated learning in the field of mathematics, and even fewer with its impact on improving reasoning and proof in the geometry of the circle. Hence, this study was conducted to confirm the findings of previous studies and to investigate the impact of situated learning on improving students' ability in mathematical reasoning and proof, an issue previous studies did not directly address.
STUDY PROBLEM AND QUESTIONS
Experience in teaching mathematics suggests that the most prominent problem leading to students' lack of understanding of mathematics is separating mathematics from life situations and treating it as an abstract science, which prevents the application and development of knowledge and consequently leads to forgetting it; this is evident in students' inability to reason and prove correctly. Adolphus (2011) points out that one of the areas of mathematics most closely related to life, and in which students' understanding and their ability to reason and prove are weakest, is geometry and its various applications, where students face difficulties in applying theorems, linking them with different definitions and relationships, inferring new relationships and properties, and proving them individually or collectively in collaboration with students and teachers.
Hence the move to teaching mathematics based on situated learning, in order to move students from the novice level to the expert level and to examine its impact on enhancing their ability to reason and prove in the geometry of the circle. Specifically, the study answers the following two questions:
1. What is the effect of situated learning on improving mathematical reasoning and proof ability in geometry among tenth grade students?
2. Do the strategies of mathematical reasoning and proof used by students differ according to the teaching method?
Study Sample
The study sample consisted of 50 tenth grade students enrolled in a public school for girls in Jordan during the first semester of the academic year 2020/2021. The study sample was divided equally into two groups, one experimental and the other control. The experimental group studied the unit of the circle using situated learning, and the control group studied the same educational unit using the traditional method.
Situated Learning Environment
After determining the learning outcomes for the circle unit from the tenth-grade book, five lesson plans covering 15 class sessions were designed to suit the principles, elements and components of situated learning. Learning takes place through real-life situations within the environment and culture of the school, applied in the form of an activity or practical experience through the formation of an integrated social environment in which experiences and knowledge are applied and exchanged between the members of this community, namely the students and the teacher. According to Oliver and Herrington (2014), this environment includes providing a real context that reflects the way knowledge is used in real life, providing authentic activities, access to expert performance, multiple perspectives, support for the collaborative construction of knowledge, promoting reflection to reach abstraction, enhancing articulation to clarify knowledge, and the teacher's assistance to students through training, practice, scaffolding, and an integrated assessment of learning within the tasks. All of this is accompanied by collaborative group work forms, worksheets, various assessment tools, and an explanatory table of the geometry unit's concepts.
Examples of the situations used in the "chords, diameters and tangents of a circle" lesson include circular decorations, metal cutters, wooden bases, and the gymnastics rings in the school gymnasium. A situation for the "arcs and sectors" lesson was the identification of a circular plot in the school garden, dividing it into four circular sectors and planting each part with a type of plant. An example for the "equation of a circle" lesson was displaying a turntable on one of the walls of the school theater using a data projector, with a fan installed on the wall, in addition to other situations. Due to the conditions of COVID-19, two consecutive 45-minute class sessions were given per day during the first month of the semester, after which instruction switched to distance learning. The control group studied the same unit from the textbook in the traditional method without exposure to situations. A quasi-experimental pre-post design with two groups, one experimental and the other control, was used.
Data Collection Instrument
After reviewing the previous literature, the National Council of Teachers of Mathematics (NCTM, 2000) principles and standards document, and the general and specific outcomes of the circle unit in the tenth grade mathematics textbook for the academic year 2020/2021, a mathematical reasoning and proof test was prepared with the aim of revealing the effect of situated learning in enhancing mathematical reasoning and proof.
The test consisted of 12 open-ended tasks constructed around three areas: inferring connections between mathematical ideas with an explanation (four items), testing the validity of mathematical statements and thus accepting or rejecting them (four items), and building mathematical arguments (four items).
A rubric was designed with four graded performance indicators (below novice, novice, trainee, expert) scored on a numerical scale of 0, 1, 2, and 3, respectively, so that the minimum total score is zero and the maximum is 36. The test was presented in its initial form to specialized referees, and in light of their comments a few tasks were reformulated to achieve the desired objectives of the test in line with the mathematical reasoning and proof standard.
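As a concrete illustration of this scoring arithmetic, the sketch below totals rubric points over the 12 tasks; the function and level names are illustrative and not taken from the study's instruments.

```python
# Hypothetical illustration of the rubric scoring described above:
# each of the 12 tasks is rated on a 4-level performance scale
# (below novice = 0, novice = 1, trainee = 2, expert = 3),
# so total scores range from 0 to 12 * 3 = 36.

RUBRIC_LEVELS = {"below_novice": 0, "novice": 1, "trainee": 2, "expert": 3}

def total_score(task_ratings):
    """Sum rubric points over the 12 open-ended test tasks."""
    assert len(task_ratings) == 12, "the test consists of 12 tasks"
    return sum(RUBRIC_LEVELS[level] for level in task_ratings)

# Example: a student rated 'trainee' on every task scores 24 of 36.
print(total_score(["trainee"] * 12))  # -> 24
```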
To verify the reliability of the test, it was applied to a pilot sample from outside the study sample consisting of 12 female eleventh-grade students who had previously studied the circle unit, using the test-retest method; the Pearson correlation coefficient between performance on the two administrations was 0.78. The correlation coefficients of the tasks with the total test score, and with the scores of their respective areas, were also calculated to verify the internal consistency validity of the test; they fell within the intervals 0.540-0.955 and 0.757-0.965, respectively, which are statistically significant and appropriate for the purposes of this study (Odeh, 2010).
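The reliability checks described above can be reproduced in outline as follows; this is a minimal sketch on hypothetical score vectors, since the study's raw data are not published here.

```python
import numpy as np
from scipy.stats import pearsonr

# Test-retest reliability: correlate the 12 pilot students' total scores
# from the two administrations (the study reports r = 0.78).
test1 = np.array([20, 25, 18, 30, 22, 27, 15, 24, 28, 19, 26, 21])
test2 = np.array([22, 24, 17, 31, 21, 28, 16, 25, 27, 20, 25, 23])
r, p = pearsonr(test1, test2)
print(f"test-retest r = {r:.2f} (p = {p:.3f})")

# Internal consistency: correlate each task's scores with the total score
# (the study reports item-total correlations between 0.540 and 0.955).
scores = np.random.default_rng(0).integers(0, 4, size=(12, 12))  # students x tasks
totals = scores.sum(axis=1)
item_total = [pearsonr(scores[:, j], totals)[0] for j in range(scores.shape[1])]
print([round(c, 3) for c in item_total])
```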
FINDINGS OF THE STUDY
The findings of the study are presented in two parts based on the study questions.
Part One
This part presents the data analysis related to the following question: "What is the effect of situated learning on improving mathematical reasoning and proof ability in geometry among tenth grade students?" To answer this question, the means and standard deviations of the sample's performance on the pre- and post-test of mathematical reasoning and proof as a whole were calculated, along with the adjusted means according to group, as shown in Table 1.
Table 1 shows apparent differences between the post-test means of the study sample's scores on the test of mathematical reasoning and proof according to group (control and experimental). To determine whether these apparent differences are statistically significant, one-way ANCOVA was applied to the post-measurement of the test of mathematical reasoning and proof according to teaching method, as shown in Table 2.
Table 2 shows statistically significant differences (p < 0.05) between the mean scores of the two groups on the test of mathematical reasoning and proof attributed to the teaching method. The adjusted means in Table 1 indicate that the differences are in favor of the experimental group. In addition, Table 2 shows an effect size of 0.34; this is a large effect, as an effect size above 15% is considered large and strong according to Cohen, as cited in Al-Kilani and Al-Sharifin (2016). This means that 34% of the explained variance in the students' performance on the reasoning test is attributable to the situated learning environment.
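As an illustration of the analysis reported in Table 2, the sketch below runs a one-way ANCOVA with the pre-test as covariate and derives a partial eta squared for the group effect; the data are simulated for demonstration only and do not reproduce the study's actual values.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
n = 25  # 25 students per group, as in the study
df = pd.DataFrame({
    "group": ["experimental"] * n + ["control"] * n,
    "pre": rng.normal(12, 3, 2 * n),
})
# Simulate a treatment effect on the post-test, for illustration only.
df["post"] = df["pre"] + rng.normal(4, 2, 2 * n) \
    + np.where(df["group"] == "experimental", 6, 0)

# One-way ANCOVA: post-test by group, controlling for the pre-test.
model = ols("post ~ C(group) + pre", data=df).fit()
anova = sm.stats.anova_lm(model, typ=3)
print(anova)

# Partial eta squared for the group effect (the study reports 0.34).
ss_group = anova.loc["C(group)", "sum_sq"]
ss_error = anova.loc["Residual", "sum_sq"]
print("partial eta^2 =", ss_group / (ss_group + ss_error))
```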
To show which areas of the mathematical reasoning and proof test produced this effect, ANCOVA was performed on the combined areas of the test, where Hotelling's trace was 0.674 and F was statistically significant (p < 0.05), followed by one-way ANCOVA at the level of each area of the mathematical reasoning and proof test according to teaching method, as shown in Table 3.
Table 3 shows statistically significant differences (p < 0.05) in the means according to teaching method in all areas of mathematical reasoning and proof, which indicates the effectiveness of the teaching method in enhancing mathematical reasoning and proof in each of its areas. The effect sizes were 0.38, 0.23, and 0.28 for inferring mathematical connections, testing the validity of mathematical statements and thus accepting or rejecting them, and building mathematical arguments, respectively, meaning that 38%, 23%, and 28% of the explained variance in the students' performance in those areas of reasoning and proof is due to the situated learning environment.
Part Two
This part presents the data analysis related to the following question: "Do the strategies of mathematical reasoning and proof used by students differ according to the teaching method?" To answer this question, the students' responses were analyzed qualitatively using an inductive-deductive approach. The students' written answers to each question were examined, for each student, and the strategies used were organized within the three areas of reasoning and proof around which the test was built. The most prominent mathematical reasoning and proof strategies were categorized according to teaching method (situated learning, traditional), and their frequencies and percentages were determined at the level of each task within the three areas set out previously. The strategies used by the two groups, with their frequencies and percentages on the total mathematical reasoning and proof test, were then summarized as shown in Table 4.
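A minimal sketch of this tallying step is shown below, using hypothetical strategy labels; the actual coding categories come from the study's qualitative analysis and are summarized in Table 4.

```python
from collections import Counter

# Each student answer is coded with the reasoning strategy used, then
# frequencies and percentages are computed per teaching method.
coded_answers = {
    "situated": ["deductive", "inductive", "counterexample", "deductive", "visual"],
    "traditional": ["deductive", "deductive", "no_justification", "no_justification"],
}

for method, labels in coded_answers.items():
    counts = Counter(labels)
    total = sum(counts.values())
    summary = {s: f"{n} ({100 * n / total:.0f}%)" for s, n in counts.items()}
    print(method, summary)
```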
The solutions of the students in the experimental and control groups showed that the methods of reasoning and proof were more varied among the students of the experimental group in all three areas; they were more flexible and deeper, and their answers were more often correct, with the experimental students using all the kinds of reasoning and proof mentioned in Table 4. The methods of reasoning and proof were less diverse in the control group, where few students used the methods mentioned in Table 4; in all three areas many of its members relied on conjecture without explanation, incomplete solutions, failure to give explanations and reasoning for the solution steps followed, or an inability to use concepts and theorems correctly. This indicates that situated learning gave the students greater flexibility and a better ability to think, analyze and make connections than the traditional method, which restricted students' thinking to very limited ways and strategies, and this led to the experimental group gaining a better ability to reason and prove mathematically.
DISCUSSION, CONCLUSIONS AND RECOMMENDATIONS
The results for the first question showed an effect of situated learning in enhancing students' mathematical reasoning and proof. This result can be explained by the fact that situated learning prepares the learner to use knowledge in life; to interact with students and the teacher while completing tasks; to explore through authentic activities and collaborative work; and to deepen understanding through the exchange of prior knowledge, discussion and articulation, expressing different points of view and defending various opinions and arguments. This occurs through the teacher's provision of support and encouragement to collaborative groups when needed, continuous assessment during the work, and giving students a greater role in the work and in reaching results.
These results can also be interpreted in favor of the experimental group because of the elements and characteristics of the situated learning environment, which begins with providing the real context, the physical environment and the necessary tools; then carrying out an authentic activity within this environment through collaborative groups, with reflection on the activity and the students' work; encouraging and motivating students to exchange knowledge and experiences, link prior and new information, and express different opinions; and, in addition, the teacher collaborating with students and providing scaffolding, reinforcement and assessment so they reach the level of experts.
From the foregoing, the experimental group's superiority over the control group on the test of mathematical reasoning and proof in the circle unit indicates a clear role of situated learning in developing students' ability to reason and prove appropriately, in ways that differ from one student to another without being restricted to a single method. The control group, by contrast, relied mainly on the teacher, who depended on direct instruction and indoctrination, so the knowledge remained theoretical and was forgotten over time.
The result of the current study agrees with the results of previous studies (Besar, 2018; Chang, 2021; Hodgen, 2007; Reyes et al., 2019; Rocha, 2020; Widjaja, 2013; Zahner et al., 2021), whose findings indicated an effect of situated learning in improving students' abilities to learn mathematics, use acquired knowledge, and develop their mathematical skills.
Supporting the previous result is the impact of situated learning on the diversity and validity of the methods of mathematical reasoning and proof among the members of the experimental group, evident in their answers to the test compared with the control group. This result can be explained by the fact that situated learning led to a suitable diversity of reasoning methods among the students, owing to the linking of prior knowledge with new knowledge and the employment of acquired knowledge in different ways of solving, which led students to employ concepts, relationships and theorems in their solutions and to make the connections the tasks required.
Accordingly, this shows the role of life contexts, authentic activities and collaborative groups in achieving a better understanding of acquired knowledge, because it matches the students' environment and needs; the support the teacher provided when needed increased the students' ability to express their ideas and opinions, and the diversity of their ways of thinking arose from shared experiences and knowledge. This increased the students' ability to reason and prove mathematically in a variety of ways rather than memorizing one method through indoctrination, which in turn helped them retain information, use it correctly, and benefit from it in similar situations and problems. The weakness of reasoning and proof among the members of the control group compared with the experimental group, the lack of diversity in their methods, indeed the similarity of their methods, their restriction to specific methods, and their inability to implement those methods correctly, can be attributed to the teaching method, which led to memorizing limited methods of mathematical reasoning and proof without thinking in various ways, attempting to deduce new ways, or benefiting from the acquired information in solving problems. Because the students relied on memorization and did not apply knowledge practically or link it with life, they were unable to use knowledge properly. In addition, the role of the students in the control group did not exceed repeating the solutions and ideas of problems the teacher had solved, which is consistent with the results of previous studies such as those of Donaldson et al. (2020), Malcolm (2010), and Peng et al. (2022).
Based on the foregoing, and in light of the results of the study, the researchers recommend that mathematics teachers adopt situated learning in teaching geometry topics, that curricula be enriched with situated learning activities in mathematics, especially geometry, and that mathematics teachers be given opportunities to observe and plan for situated learning. The researchers call for further research on the impact of situated learning in teaching different topics in mathematics, at other educational levels, and with new dependent variables.
Table 1. Means, standard deviations, and adjusted means of the sample performance in the reasoning and proof test
Table 3. One-way ANCOVA at the level of each area of the post-mathematical reasoning and proof test according to the teaching method
Table 4. Frequencies and percentages of reasoning strategies used by students on the test as a whole according to the teaching method
Vestibular Evoked Myogenic Potentials in Patients With Low Vitamin B12 Levels
Objective: Vitamin B12 deficiency-induced hyperhomocysteinemia has been associated with impaired microarterial flow, demyelination, and neuronal damage, resulting in cochlear damage and auditory dysfunction. Therefore, we aimed to evaluate possible vestibular-evoked myogenic potential (VEMP) abnormalities in patients with vitamin B12 deficiency. Material and Method: In this prospective study, 37 patients diagnosed with vitamin B12 deficiency (<220 pg/mL) were compared with 31 audiologically healthy participants with normal B12 levels. Burst-evoked cervical VEMP (cVEMP) measurements were performed on all participants. Additionally, cVEMP responses were analyzed for P1-N1 latency, interpeak amplitude, and amplitude asymmetry ratio. The results of audiometric examination and VEMP records, as well as absent responses, were evaluated and compared between groups. Results: The rate of absent VEMP responses was twice as high in the patient group as in the healthy control group (12 vs 6 cases, respectively). Moreover, the mean interpeak amplitudes in both the right and left ears were significantly lower in the patient group than in the control group (P values = .024 and .007, respectively). Similarly, the mean amplitude asymmetry ratio was significantly higher in the patient group than in the control group (P = .050). There were no statistically significant differences in latency responses between groups. Furthermore, a positive, statistically significant correlation was detected between the left P1-N1 interpeak amplitude and vitamin B12 levels (r = 0.287, P = .037). Conclusions: Increased rates of absent VEMPs and decreased amplitudes with normal latencies are attributed to peripheral vestibular hypofunction in patients with vitamin B12 deficiency.
Introduction
Micronutrient insufficiencies have been significantly associated with auditory problems and hearing loss in recent years. 1 Vitamin B12 is the cofactor of the methylation reaction that results in myelin synthesis. 2 Accumulation of homocysteine via vitamin B12 deficiency may cause vascular inflammation and reduced microvessel flow and permeability. Thus, the reduced cochlear blood flow might result in atrophy of the stria vascularis. 1,3 Demyelination-associated axonal degeneration and apoptotic neuronal death caused by impaired methionine synthesis, which directly affects cochlear neurons, is well known today. 3 In addition to neuronal damage, effects such as cellular oxidative stress, direct neurotoxicity, and vascular toxicity caused by auto-oxidation of elevated homocysteine have been demonstrated. Recently, vitamin B12 deficiency-triggered hyperhomocysteinemia has been shown to be significantly associated with cognitive impairments, neurodegenerative diseases, neuropsychiatric symptoms, and auditory dysfunction. 4 Vestibular-evoked myogenic potential (VEMP) testing is a noninvasive, rapid, easy, and cost-effective sound-evoked muscle reflex test that has become widespread for the electrophysiological assessment of vestibular functions in recent years. 5 Both cervical VEMP (cVEMP) and ocular VEMP (oVEMP) recording requires high-intensity auditory stimuli such as clicks, bursts, or tapping. Cervical VEMPs are short-latency inhibitory reflexes to sound stimuli, recorded from electrodes placed over the contracted sternocleidomastoid (SCM) muscle. 6 Cervical VEMP typically evaluates the integrity of saccular and inferior vestibular nerve function. 7 Today, the diagnostic value of VEMP has expanded, and it has been utilized in the diagnosis of various diseases and even in the assessment of central otolithic pathways. 8 Furthermore, VEMP abnormalities in patients with multiple sclerosis (MS), vestibular migraine, and some neurodegenerative disorders have been documented. 9 Therefore, we aimed to evaluate the VEMP responses in patients with vitamin B12 deficiency.
Sample
In this prospective study, 37 patients between 21 and 80 years of age who were diagnosed with vitamin B12 deficiency (<220 pg/mL) and had normal vestibular tests were compared with 31 audiologically healthy participants (control group) with normal B12 levels.
Patients with a history of external or middle ear disease; dizziness or vertigo symptoms; otologic surgery or intratympanic interventions; conductive hearing loss; type B or type C tympanograms; extended myorelaxant treatment; neck surgery; or neuropsychiatric, metabolic, or systemic disease, and patients with hearing aids or vitamin B12 levels > 220 pg/mL, were excluded from the present study. Participants were enrolled in the study after written informed consent was obtained. Complete otorhinolaryngologic, pure-tone audiometric, and immittancemetric examinations were performed on all participants.
Measures
Cervical VEMP test. Vestibular-evoked myogenic potentials were recorded while patients were in a seated position with the head rotated to face the contralateral side to provide sufficient tension of the SCM muscle. Short tone bursts (500 Hz, 95 dB normal hearing level, repetition rate of 5.1/s) with rarefaction polarity were presented via ICS Medical insert earphones (ER 3A/5A Insert Earphone 300 ohms; ICS Medical, Illinois). Electromyographic activity was recorded ipsilaterally from surface electrodes placed over the upper third of the SCM muscle. During the test, the electrode impedance was less than 5 ohms.
The first early positive (p13) and following negative (n23) peaks (waves) of the VEMP responses and the amplitude between the waves (p13-n23 interpeak amplitude, µV) were analyzed. Additionally, the VEMP amplitude asymmetry ratio (AR, %) was calculated with the following formula: AR = 100 × |Ar − Al| / (Ar + Al), where Ar is the amplitude on the right and Al is the amplitude on the left.
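For concreteness, a minimal sketch of this calculation in Python, assuming the conventional absolute-difference form of the asymmetry ratio (the amplitude values below are hypothetical):

```python
def asymmetry_ratio(ar_right: float, ar_left: float) -> float:
    """VEMP amplitude asymmetry ratio in percent, assuming the
    conventional absolute-difference form: 100 * |Ar - Al| / (Ar + Al)."""
    return 100.0 * abs(ar_right - ar_left) / (ar_right + ar_left)

# Hypothetical interpeak amplitudes (uV): right 49.2, left 53.1
print(f"{asymmetry_ratio(49.2, 53.1):.1f}%")  # -> 3.8%
```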
Laboratory measurements. The serum vitamin B12 analysis was performed via an immunodiagnostic system (Siemens Advia Centaur XP, Germany), with 220 pg/mL taken as the lower limit of normal.
Statistical Analysis
All data were analyzed with SPSS (Statistical Package for the Social Sciences) software for Windows (version 21.0; IBM, Armonk, New York). Individual and aggregate data were summarized using descriptive statistics including means, standard deviations, medians (min-max), frequency distributions, and percentages. Patients with absent VEMP responses were also included in all analyses. Normality of data distribution was verified by the Kolmogorov-Smirnov test. Comparison of the variables with normal distribution was made with a Student t test. For the variables that were not normally distributed, the Mann-Whitney and Kruskal-Wallis tests were conducted to compare the groups. Evaluation of categorical variables was performed by χ² test. The presence of correlation was analyzed with Spearman ρ or Pearson tests. Values of P < .05 were considered to be statistically significant.
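As an illustration of this decision flow (a sketch only, not the authors' actual analysis script; the sample arrays are hypothetical):

```python
import numpy as np
from scipy import stats

# Hypothetical interpeak-amplitude samples (uV), for illustration only
patients = np.array([49.2, 38.5, 61.0, 44.3, 52.8])
controls = np.array([68.5, 72.1, 59.9, 80.3, 65.4])

# Normality check; the study used Kolmogorov-Smirnov, phrased here as a
# KS test of the standardized data against the standard normal
normal = all(
    stats.kstest(stats.zscore(g), "norm").pvalue > 0.05
    for g in (patients, controls)
)

if normal:
    stat, p = stats.ttest_ind(patients, controls)     # Student t test
else:
    stat, p = stats.mannwhitneyu(patients, controls)  # Mann-Whitney U

print(f"p = {p:.3f} -> {'significant' if p < 0.05 else 'not significant'} at .05")
```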
Results
In our study, VEMP recordings revealed 12 patients (7 bilateral and 5 unilateral) and 6 healthy individuals (2 bilateral and 4 unilateral) with absent VEMP responses. Thus, the rate of absent VEMP responses in the patient group was numerically higher than in the control group (P = .064; Table 2). Additionally, there were no statistically significant differences detected in the pure-tone audiometric results between patients and healthy controls.
In our study, the comparison of VEMP test findings between the patient group with a mean serum B12 level of 224.47 ± 20.48 pg/mL and the control group with a mean serum B12 level of 525.42 ± 110.92 pg/mL is presented in Table 3. The mean interpeak amplitude in both the right (49.17 ± 45.95 µV) and left (53.05 ± 38.71 µV) ears of the patient group was significantly lower than the mean interpeak amplitudes measured in the right (68.46 ± 40.77 µV) and left (79.69 ± 43.52 µV) ears of the control group (P values = .024 and .007, respectively). Moreover, the mean amplitude asymmetry ratio was significantly higher in the patient group (29.00 ± 21.30) than in the control group (18.58 ± 14.70; P = .050). There were no statistically significant differences detected in the p13 or n23 latency responses measured in either ear between groups (P > .05). Furthermore, a positive, statistically significant correlation was detected between the left P1-N1 interpeak amplitude and vitamin B12 levels (r = 0.287, P = .037; Table 4).
Discussion
In recent years, nutritional status has become an increasingly popular research topic in the etiology of hearing impairment. 10 According to the National Health and Nutrition Examination Survey data, the prevalence of B12 deficiency is reported as 2.9%, 10.6%, or 25.7% based on B12 cutoff values of <148, <200, and <256 pmol/L, respectively. In addition, advanced age (>60 years) and female gender are frequently associated with increased prevalence. 2,11 Karabulut et al 12 documented high vitamin B12 deficiency in 21.6% of 466 adult premarital women. In accordance with these data, the prevalence of B12 deficiency was 3.1 times higher in female patients, and male patients were significantly older in our study. In addition, there were no statistically significant differences found between the mean ages of patients and healthy controls.
Hyperhomocysteinemia due to low levels of vitamin B12 and folate has been linked to hearing impairment in numerous published studies. 3,13 In order to prevent possible bias caused by impaired B12 absorption in advanced age, Gocer et al 14 designed a sample group of middle-aged (40-60 years) patients. The researchers reported lower B12/folate status with higher homocysteine levels in patients (n = 78) with hearing impairment than in the healthy control group (n = 53). There are limited published studies evaluating micronutritional risk factors with the VEMP method. Only a few studies documented VEMP recordings in vitamin D deficiency, with debated results. 15,16 Moreover, to our knowledge, there is no published study available that evaluated vestibular or auditory functions via the VEMP method in patients with B12 deficiency.
Nola et al 17 observed normalized cVEMP responses without any improvement of the caloric test in 20 patients diagnosed with acute vertigo caused by vestibular neuritis (VN). Thus, the researchers concluded that cVEMP recording is an appropriate method for screening vestibular disorders, as it easily monitors the improvement of otolithic functions and contributes to follow-up. 22 However, absent VEMP responses are documented in bilateral vestibulopathy and vestibular schwannoma (up to 80%); the reliability of the VEMP method in the presence of a tumor is an uncertain and debated topic today. 19 Similarly, in our study, VEMP recordings revealed 12 patients (7 bilateral and 5 unilateral) and 6 healthy individuals (2 bilateral and 4 unilateral) with absent VEMP responses. Thus, absent VEMP responses occurred twice as often in the patient group as in the healthy control group. It was initially confirmed that there were no technical or physical causes leading to absent VEMPs during the testing procedure. Although the published data demonstrate that absent VEMPs involve a wide range of pathologies, in the present study, absent responses of VEMP were attributed to peripheral vestibular pathologies rather than to central pathologies. Additionally, in the control group, absent VEMPs in 5 participants aged >45 years were interpreted as an advanced age effect; absent VEMPs in another control participant (aged 23 years) were attributed to possible asymptomatic saccular pathologies. Eleftheriadou et al 23 compared 46 MS patients with vestibular symptoms to 40 healthy controls and noted p13-n23 abnormalities in 50% (n = 23) of patients via cVEMP recordings. The researchers concluded that demyelination of primary afferent or vestibulospinal tract axons caused absence or delay of p13-n23 latency in VEMP recordings. Moreover, lower amplitude recordings have been documented by VEMP in patients with vestibular migraine compared to healthy controls. 24,25 Baier et al 26 compared cVEMP responses in 63 patients with vestibular migraine to 63 gender- and age-matched healthy controls and detected reduced VEMP amplitudes in 68% of patients without any pathology in latency responses. The researchers indicated possible abnormalities in saccular pathways. In our study, significantly reduced mean interpeak amplitudes in both the right and left ears, without any abnormalities in latencies, were demonstrated in the patient group. Moreover, the mean amplitude asymmetry ratio was significantly higher in the patient group than in the control group. Furthermore, a positive, statistically significant correlation was detected between the left P1-N1 interpeak amplitude and vitamin B12 levels. The asymmetry ratio is a more sensitive parameter, particularly in the evaluation of unilateral hypofunction. Therefore, our findings can be attributed to impaired function or ischemia of peripheral vestibular structures.
We conducted a prospective evaluation with a limited sample size. Accordingly, further research should be performed with larger study groups to contribute to the diagnosis and treatment of impaired vestibular functions with nutritional etiology.
In conclusion, to our knowledge, for the first time in the literature, an increased rate of absent VEMPs and decreased amplitudes with normal latencies was demonstrated in patients with vitamin B12 deficiency. These findings are interpreted as evidence of peripheral vestibular hypofunction.
Hydrodynamic Coupling of Two Brownian Spheres to a Planar Surface
We describe direct imaging measurements of the collective and relative diffusion of two colloidal spheres near a flat plate. The bounding surface modifies the spheres' dynamics, even at separations of tens of radii. This behavior is captured by a stokeslet analysis of fluid flow driven by the spheres' and wall's no-slip boundary conditions. In particular, this analysis reveals surprising asymmetry in the normal modes for pair diffusion near a flat surface.
Despite considerable progress over the past two centuries [1], hydrodynamic properties of all but the simplest colloidal systems remain controversial or unexplained. For example, velocity fluctuations in sedimenting colloidal suspensions are predicted to diverge with system size [2]. Experimental observations indicate, on the other hand, that long-wavelength fluctuations are suppressed by an as-yet undiscovered mechanism [3][4][5]. One possible explanation is that hydrodynamic coupling to bounding surfaces may influence particles' motions to a greater extent and over a longer range than previously suspected [6]. Such considerations invite a renewed examination of how hydrodynamic coupling to bounding surfaces influences colloidal particles' dynamics.
This Letter describes an experimental and theoretical investigation of two colloidal spheres' diffusion near a flat plate. Related studies have addressed the dynamics of two spheres far from bounding walls [7,8], and of a single sphere in the presence of one or two walls [9]. Confinement by two walls poses particular difficulties since available theoretical predictions apply only for highly symmetric arrangements [10], or else contradict each other [11,12]. The geometry we have chosen avoids some of this complexity while still highlighting the range of non-additive hydrodynamic coupling in a many-surface system.
We combined optical tweezer manipulation [13] and digital video microscopy [14] to measure four components of the pair diffusion tensor for two colloidal spheres as a function of their center-to-center separation r and of their height h above a planar glass surface. Measurements were performed on silica spheres of radius 0.495 ± 0.025 µm (Duke Scientific lot 21024) dispersed in a layer of water 140 ± 2 µm thick. The suspension was sandwiched between a microscope slide and a #1 coverslip whose surfaces were stringently cleaned before assembly [15] and whose edges were hermetically sealed with a UV-cured epoxy (Norland type 88) to prevent evaporation and suppress bulk fluid flow. A transparent thin film heater bonded to the microscope slide and driven by a Lakeshore LM-330 temperature controller maintained the sample volume's temperature at T = 29.00 ± 0.01 °C, as measured by a platinum resistance thermometer. The addition of 2 mM of NaCl to the solution minimized electrostatic interactions among the weakly charged spheres and glass surfaces by reducing the Debye screening length to 2 nm. Under these conditions, the individual spheres' free self-diffusion coefficients are expected to be D_0 = k_B T/(6πηa) = 0.550 ± 0.028 µm²/sec, where η = 0.817 cP is the electrolyte's viscosity [16]. The spheres' motions were tracked with an Olympus IMT-2 optical microscope using a 100× NA 1.4 oil immersion objective. Images acquired with an NEC TI-324A CCD camera were recorded on a JVC-822DXU SVHS video deck before being digitized with a Mutech MV-1350 frame grabber at 1/60 sec intervals. Field-accurate digitization was assured by interpreting the vertical interlace time code recorded onto each video field. The spheres' locations r_1(t) and r_2(t) in the image acquired at time t were then measured to within 20 nm using a computerized centroid tracking algorithm [14].
A pair of spheres was placed reproducibly in a plane parallel to the glass surfaces using optical tweezers [13]. These optical traps were created with a solid state laser (Coherent Verdi) whose beam was brought to a focus within the sample volume by the microscope's objective. Resulting optical gradient forces suffice to localize a silica sphere at the focal point despite random thermal forces [13]. Two optical traps were created by alternating the focused laser spot between two positions in the focal plane at 200 Hz using a galvanometer-driven mirror [17]. Diverting the trapping laser onto a beam block every few cycles freed the spheres to diffuse away from this well defined initial condition. Resuming the trap's oscillation between the two trapping points resets the spheres' positions. Alternately trapping and releasing the spheres allowed us to sample their dynamics efficiently in a particular geometry. Allowing the spheres only τ = 83 msec (5 video fields) of freedom before retrapping them for 16 msec (less than 1 video field) ensured that their out-of-plane motions, Δz < √(2D_0 τ) = 0.4 µm, cause negligible tracking errors.
Because optical tweezers form in the microscope's focal plane, their height h relative to the coverslip's surface can be adjusted from 1 to 30 µm with 0.5 µm accuracy by adjusting the microscope's focus. For a given height, we continuously varied the spheres' initial separation between 2 µm and 10 µm at 0.025 Hz for a total of 20 minutes. This procedure yielded 60,000 samples of the spheres' dynamics in 1/60 sec intervals, divided into sequences 5/60 sec long for each value of h. These trajectory data were decomposed into cooperative motions ρ = r_1 + r_2 and relative motions r = r_1 − r_2, either perpendicular or parallel to the initial separation vector, and binned according to the initial separation, r. The diffusion coefficients D_ψ(r, h) associated with each mode of motion ψ(r, h, τ) at each height and initial separation were then obtained from the Stokes-Einstein formula

⟨ψ²(r, h, τ)⟩ = 2 D_ψ(r, h) τ,   (1)

where the angle brackets indicate an ensemble average. Fig. 1 shows typical data for one mode of motion at one height and starting separation. Diffusion coefficients D_ψ(r, h) extracted from least squares fits to Eq. (1) appear in Fig. 2.

Particles moving through a fluid at low Reynolds number excite large-scale flows through the no-slip boundary condition at their surfaces. These flows couple distant particles' motions, so that each particle's dynamics depends on the particular configuration of the entire collection. This dependence is readily calculated using Batchelor's generalization of Einstein's classic argument [18]: The probability to find N particles at equilibrium in a particular configuration {r_1, . . . , r_N} depends on their interaction Φ(r_1, . . . , r_N) through Boltzmann's distribution, P(r_1, . . . , r_N) = exp[−Φ/(k_B T)]. The corresponding force −∇Φ = k_B T ∇P/P drives a probability flux k_B T b ∇P, where b(r_1, . . . , r_N) is the particles' mobility tensor. The system reaches equilibrium when this interaction-driven flux is balanced by a diffusive flux −D∇P. It follows that the N-particle diffusivity is D = k_B T b. Elements of D lead to generalized Stokes-Einstein relations [19]

⟨Δr_iα(τ) Δr_jβ(τ)⟩ = 2 D_{iα,jβ} τ,   (2)

describing how particle i's motion in the α direction couples to particle j's in the β direction.
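As a sketch of this extraction step, a zero-intercept least squares fit of the mean-squared mode displacement against lag time recovers D_ψ; the numbers below are hypothetical but chosen near the expected D_0:

```python
import numpy as np

# Hypothetical mean-squared displacements <psi^2(tau)> (um^2) for one mode,
# sampled at the 5 available video-field lag times (1/60 s each)
tau = np.arange(1, 6) / 60.0
msd = np.array([0.018, 0.037, 0.054, 0.073, 0.091])

# Eq. (1): <psi^2> = 2 * D_psi * tau, so a zero-intercept least squares
# fit gives slope = sum(msd * tau) / sum(tau^2) and D_psi = slope / 2
D_psi = np.sum(msd * tau) / (2.0 * np.sum(tau**2))
print(f"D_psi ~ {D_psi:.3f} um^2/s")  # ~0.55, near the free-sphere D_0
```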
Here b^e_{iα,jβ} is the Green's function for the flow at r_i in the α direction due to an external force at r_j in the β direction. In the present discussion, it accounts for no-slip boundary conditions at all other surfaces in the system.
If the spheres are well separated, we may approximate the flow field around a given sphere by a stokeslet, the flow due to a point force at the sphere's location. This approximation is valid to leading order in the spheres' radius. The Green's function for the flow at x in the α direction due to a stokeslet at r_j in the β direction is the Oseen tensor [20],

G^S_{αβ}(x − r_j) = (1/(8πη)) [δ_{αβ}/|x − r_j| + (x − r_j)_α (x − r_j)_β / |x − r_j|³],

so that b^e_{iα,jβ} = G^S_{αβ}(r_i − r_j). In the particular case of two identical spheres, diagonalizing the resulting diffusivity tensor D yields the diffusion coefficients for two collective (C) modes and two relative (R) modes along directions perpendicular (⊥) and parallel (∥) to the initial separation [18],

D^{C,R}_∥ = D_0 [1 ± 3a/(2r)],   (5)
D^{C,R}_⊥ = D_0 [1 ± 3a/(4r)],   (6)

where the positive corrections apply to collective modes and the negative to relative. The collective diffusion coefficients D^C_⊥ and D^C_∥ are enhanced by hydrodynamic coupling because fluid displaced by one sphere entrains the other. Relative diffusion coefficients D^R_⊥ and D^R_∥ are suppressed, on the other hand, by the need to transport fluid into and out of the space between the spheres.
Introducing a planar boundary into this system adds considerable complexity. The flow field around a small sphere located a height h above a horizontal wall is most easily calculated by the method of images [21], in which the wall's no-slip boundary condition is satisfied by placing a stokeslet (S), a source doublet (D), and a stokeslet doublet (SD) a distance h below the plane of the wall [21,20]. The flow due to this image system is described by a Green's function G^W_{αβ} built from the stokeslet, source-doublet, and stokeslet-doublet contributions evaluated at the image point, where R_j = r_j − 2h ẑ is the position of sphere j's image. The flow field set up by the image system (and thus by the wall's no-slip boundary condition) entrains the sphere through b^e_{iα,iβ} = G^W_{αβ}(r_i − R_i) and decreases its mobility. Two independent modes emerge from this analysis, one (z) normal to the wall and the other (xy) parallel, with diffusivities [10]

D_z = D_0 [1 − 9a/(8h)],   (10)
D_xy = D_0 [1 − 9a/(16h)].   (11)

Eqs. (5) and (6) should suffice for two spheres far from bounding surfaces. Similarly, the spheres' motions should decouple when the influence of a nearby wall dominates, so that Eqs. (10) and (11) describe each sphere's dynamics in that limit.

A more complete treatment not only resolves the quantitative discrepancies left by such a superposition approximation but also reveals an additional surprising influence of the bounding surface on the spheres' dynamics: the highly symmetric and experimentally accessible modes parallel to the wall are no longer independent. The combination of a neighboring sphere and two image systems contributes corrections to the mobility of sphere i in the α direction. Eigenvectors of the corresponding diffusivity tensor appear in Fig. 3. The independent modes of motion are rotated with respect to the bounding wall by an amount which depends strongly on both r and h. The experimentally measured in-plane motions clearly are not independent yet will satisfy Stokes-Einstein relations, nonetheless, with pair-diffusion coefficients D^{C,R}_α(r, h) = D_{1α,1α}(r, h) ± D_{1α,2α}(r, h), where the positive sign corresponds to collective motion, the negative to relative motion, and α indicates directions either perpendicular or parallel to the line connecting the spheres' centers. Explicitly, we obtain closed-form expressions valid up to O(a³/r³) and O(a³/h³) in terms of ξ = 4h²/r². These results appear as solid curves in Fig. 2.
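To make the leading-order scalings concrete, the sketch below evaluates the free-space pair modes and the single-sphere wall corrections quoted above, using the sphere radius and free diffusion coefficient from this experiment; it illustrates only the limiting formulas, not the full image-system calculation:

```python
# Leading-order stokeslet estimates, valid for separations r and heights h
# much larger than the sphere radius a
a  = 0.495   # sphere radius, um (from the text)
D0 = 0.550   # free self-diffusion coefficient, um^2/s (from the text)

def pair_modes(r):
    """Free-space collective/relative pair diffusivities, D11 +/- D12."""
    return {
        "C_par":  D0 * (1 + 3 * a / (2 * r)),
        "R_par":  D0 * (1 - 3 * a / (2 * r)),
        "C_perp": D0 * (1 + 3 * a / (4 * r)),
        "R_perp": D0 * (1 - 3 * a / (4 * r)),
    }

def wall_modes(h):
    """Single-sphere diffusivities near a wall, leading (Lorentz) order."""
    return {"z": D0 * (1 - 9 * a / (8 * h)),
            "xy": D0 * (1 - 9 * a / (16 * h))}

print(pair_modes(r=2.0))  # separation 2 um: ~35% splitting of parallel modes
print(wall_modes(h=5.0))  # height 5 um: ~11% and ~6% suppression
```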
To gauge the success of this procedure and to quantify the range over which the presence of a wall measurably influences colloidal dynamics, we computed the error-weighted mean-squared deviation of the predicted diffusivities from the measured values, integrated over the accessible range of separations r. Typical results appear in Fig. 4. The lowest-order stokeslet analysis presented here agrees well with measurement over the entire experimentally accessible range. Deviations from the linear superposition approximation's predictions, on the other hand, are evident out to h = 15 µm, or 30 radii.
The present study demonstrates that a confining surface can influence colloidal dynamics over a large range of separations. This influence is inherently a many-body effect, as demonstrated by the linear superposition approximation's failure. Quantitative agreement between our measurements and a leading-order stokeslet analysis offers hope for future progress in understanding confinement's effects on colloidal dynamics.
David Altman developed the transparent thin film heater with support from the MRSEC REU program at the University of Chicago. Work at the University of Chicago was supported by the National Science Foundation through grant DMR-9730189, through the MRSEC program of the NSF through grant DMR-9888595, and by the David and Lucile Packard Foundation. Theoretical work was supported by the A.P. Sloan Foundation, the Mathematical Science Division of the National Science Foundation, and a NDSEG Fellowship to TS.
Picoliter Droplet Generation and Dense Bead-in-Droplet Encapsulation via Microfluidic Devices Fabricated via 3D Printed Molds
Picoliter-scale droplets have many applications in chemistry and biology, such as biomolecule synthesis, drug discovery, nucleic acid quantification, and single cell analysis. However, due to the complicated processes used to fabricate microfluidic channels, most picoliter (pL) droplet generation methods are limited to research in laboratories with cleanroom facilities and complex instrumentation. The purpose of this work is to investigate a method that uses 3D printing to fabricate microfluidic devices that can generate droplets with sizes <100 pL and encapsulate single dense beads mechanistically. Our device generated monodisperse droplets as small as ~48 pL, and we demonstrated the usefulness of this droplet generation technique in biomolecule analysis by detecting Lactobacillus acidophilus 16S rRNA via digital loop-mediated isothermal amplification (dLAMP). We also designed a mixer that can be integrated into a syringe to overcome dense bead sedimentation and found that the bead-in-droplet (BiD) emulsions created from our device had <2% of the droplets populated with more than 1 bead. This study will enable researchers to create devices that generate pL-scale droplets and encapsulate dense beads with inexpensive and simple instrumentation (3D printer and syringe pump). The rapid prototyping and integration ability of this module with other components or processes can accelerate the development of point-of-care microfluidic devices that use droplet-bead emulsions to analyze biological or chemical samples with high throughput and precision.
Introduction
Droplet microfluidics uses devices with channel dimensions tens or hundreds of microns wide to generate and manipulate discrete volumes of a microliter or less. Dividing a sample of interest into fL- to µL-scale volumes reduces reagent usage, increases the sensitivity of chemical analyses, and provides enhanced control over reagent delivery, mixing, and chemical interactions [1]. There are many applications of droplet microfluidics in chemistry, biology, and biomedical engineering, such as therapeutic agent delivery, biomedical imaging, biomolecule synthesis, diagnostic chips, drug discovery, cell culture, biochemical characterization, and single cell analysis [2]. The implementation of droplet microfluidics in these applications is accomplished through lab-on-a-chip devices. These lab-on-a-chip devices may require droplet manipulation processes such as mixing, fission and/or fusion, sorting, and transportation of droplets [3,4], which can be accomplished via electrowetting, magnetic actuation, dielectrophoresis, surface acoustic waves, optical methods, or thermal methods [3][4][5][6][7]. However, due to the complicated processes used to fabricate channels that are tens or hundreds of microns wide, most droplet microfluidic methods are limited to research in laboratories with cleanroom facilities and complex instrumentation (e.g., photolithography with silicon wafers [8][9][10] or wet etching [11][12][13]). The few droplet generation technologies commercially available for diagnostic use are expensive ($89k-$100k for an instrument and $24-$240 per disposable cartridge) and not integrated with other assay steps such as chemical reaction incubation and droplet analysis [14].
To make the droplet generation process simpler, less time-intensive, and less expensive, many innovative methods have been created. Some researchers have used glass capillaries to generate pL-scale droplets. For example, Li et al. bonded microscope glass slides to pulled glass capillaries to generate monodisperse multiple emulsions [15], Gu et al. created and manipulated pL droplets for single cell assays with a 75 µm fused-silica capillary [16], and Li et al. used an asymmetrical beveled capillary to generate pL to nL droplets and execute a digital PCR assay [17]. While the instrumentation costs for these devices are lower than for photolithography, devices made from glass capillaries are difficult to integrate into other upstream or downstream modules and are not amenable to rapid prototyping due to the intricate procedures for fabricating capillaries <1 mm in diameter. Other groups rely on micromachining to generate droplets, such as direct milling of polycarbonate [18,19] or micromachining in PMMA [20]. These methods have demonstrated consistent and controllable droplet generation; however, the droplet sizes are large (>1 nL), or, when pL-sized droplets are achieved, a centrifuge is needed to create the droplets in a reaction tube, which precludes integration into other microfluidic modules [20].
3D printing is now commonly used to create molds for PDMS devices, which eliminates the need for cleanroom facilities, photolithography, or etching and enables rapid prototyping and fabrication [21][22][23][24]. Researchers have also used 3D printers to build monolithic devices out of resin for droplet generation, albeit with larger channel dimensions and therefore larger droplets (>1 nL) [25][26][27][28][29][30][31]. Picoliter-scale droplets are important for several applications, such as increasing the precision, sensitivity, and dynamic range of digital PCR [32], or preventing cross contamination and target dilution in single cell analysis [16,33]. The small channel sizes required for pL-scale droplets are typically fabricated with complex processes inside a cleanroom, usually photolithography [8][9][10], and have not been made with 3D printed molds or 3D printed monolithic devices. The methods described above have significantly advanced droplet generation for the picoliter scale via photolithography or glass capillaries, and the nanoliter scale via 3D printing; however, there remains a need for <100 pL droplet generation from a rapid prototyping method (e.g., 3D printed molds) that can be easily integrated into other sample preparation, analysis, and detection modules.
An important area of investigation in droplet microfluidics is the development of methods that encapsulate a single bead in a droplet (BiD). These BiD platforms have enabled exciting advancements in biomedical research and diagnostics, including genome sequencing [34], enzyme evolution and screening [35,36], detecting rare genetic mutations [37,38], single cell analysis [39], and molecular diagnostics [40]. While these devices have high throughput and multiplexing capabilities, they are limited to laboratories with sophisticated instrumentation for photolithography and bead encapsulation. Additionally, they have shown Poisson or better distributions of BiDs for particles with a density similar to water, such as gel particles [41], polystyrene beads [42][43][44], agarose beads [39], or biological cells [39,44,45]. Particles with a higher density than water sediment to the bottom before being encapsulated in droplets, causing the first fraction of droplets to have more than 1 bead per droplet and the remaining fraction to have no beads. To use beads of varying densities in BiD platforms, this sedimentation effect must be overcome.
The purpose of this work is to overcome current limitations of droplet microfluidic devices by creating a droplet generation device with the following features: (i) a simple and inexpensive fabrication process that is amenable to rapid prototyping and integration with other modules, (ii) droplet volumes <100 pL, and (iii) the ability to encapsulate dense beads in aqueous droplets with a Poisson-like distribution. We found that using 3D printing to create a mold instead of photolithography or etching is a suitable fabrication method to accomplish this purpose. Our device generated monodisperse droplets as small as ~48 pL and we demonstrated the usefulness of this droplet generation technique in biomolecule detection by quantifying nucleic acids via digital loop-mediated isothermal amplification (dLAMP). We also designed a mixer that can be integrated into a syringe to overcome dense bead sedimentation and found that the BiD emulsions created from our device had less than 2% of the droplets populated with more than 1 bead when the average input concentration was 0.15 beads/droplet, in line with Poisson statistical projections. This study will enable researchers to create devices that generate pL-scale droplets and encapsulate dense beads with inexpensive and simple instrumentation (3D printer and syringe pump). The rapid prototyping and integration ability of this method can accelerate the development of point-of-care microfluidic devices that generate droplet-bead emulsions and analyze samples with high throughput and precision.
Device Fabrication
3D models of the master molds were designed using SolidWorks CAD software (Dassault Systems, Velizy-Villacoublay, France) to have flow channel dimensions of 100 µm × 100 µm and inlet/outlet ports of 750 µm (Figure 1A). Stereolithography (SLA) files were prepared for 3D printing by orienting them at a 45° angle and avoiding cups and overhangs in Form Labs' Preform software. The models were then printed using the Form3 SLA 3D printer (Form Labs) in Clear resin (FLGPCL04) at a layer thickness of 25 µm. The printed master molds were thoroughly cleaned with isopropyl alcohol to remove excess resin, then UV-cured for 30 min.
To make polydimethylsiloxane (PDMS), SYLGARD™ 184 Silicone Elastomer Base and SYLGARD™ 184 Silicone Elastomer Curing Agent (Dow Corning, Midland, MI, USA) are combined at a 10:1 w/w ratio to make up the ~3 g needed to fill each mold. Prior to pouring the mixture into the mold, it is degassed in a Cole Parmer Diblock oven at room temperature until no bubbles can be seen in the PDMS mixture. After filling the molds with the degassed PDMS, the degassing process is repeated to ensure complete filling of the corners of the channels before curing at 65 °C for 45 min. Once cured, the PDMS is gently peeled from the master mold and bonded onto glass microscope slides (Amscope BS-72P 100S-22) after surface activation using flame treatment as an alternative to oxygen plasma bonding [46] (Figure 1B). The device is then placed in an 85 °C oven overnight to allow the PDMS to harden. Next, the devices are examined for bonding strength of the PDMS by gently prying at them. They are also checked for channel dimensions under a microscope. A ±10% tolerance is allowed for the channel widths measured from micrographs prior to the attachment of the flow tubing (Scientific Commodities, Lake Havasu City, AZ, USA, BB31695 PE/3). The tubing is attached to the chip by plumbing it into the inlet and outlet ports, making sure to leave a clearance space between the tubing nozzle and the slide surface. The tubing is further held in place using cold weld steel-reinforced epoxy (JB Weld, Marietta, GA, USA).
Droplet Generation
Droplets were generated using the designed flow-focusing PDMS microfluidic devices described above. The oil phase consisted of mineral oil (Sigma Aldrich M3516-1L), 0.1 wt% Triton X-100 (Fisher Scientific, Waltham, MA, USA), and 3 wt% ABIL EM 90 (Evonik, Essen, Germany), and was pumped at various volumetric flow rates (20, 25, 50, 75, 100 µL/min). The aqueous phase (DI water) was maintained at a volumetric flow rate of 1 µL/min. The oil and aqueous phases were pumped to an intersection in the device by syringe pumps (KD Scientific, Holliston, MA, USA), at which point droplets were generated and subsequently collected from the outlet in Eppendorf tubes. A fraction of the droplets were imaged using confocal imaging (Leica SP5, Wetzlar, Germany) and the respective planar areas of the droplets were deduced using ImageJ software after segmentation processing. The spherical diameter of each droplet was calculated from the deduced area.
Droplet Digital Loop-Mediated Isothermal Amplification for DNA Quantification
Lactobacillus acidophilus (L. acid.) obtained from MicroKwik vials (Carolina Biological Supply, Burlington, NC, USA) was cultured on de Man, Rogosa and Sharpe (MRS) agar formulated in-house using the Millipore-Sigma formulation (CCW4691). The QuickExtract™ one-step DNA extraction kit (Lucigen, Middleton, WI, USA) was used to extract DNA from the colonies. Extracted genomic DNA was quantified via absorbance measurements from a Nanodrop One instrument (ThermoFisher Scientific, Waltham, MA, USA) and diluted in nuclease-free water to concentrations ranging from 0 to 9.5 × 10^6 copies/mL. The LAMP mix + L. acid DNA samples were infused into a droplet generation device as described in "Droplet Generation", with an oil flow rate of 75 µL/min and an aqueous flow rate of 1 µL/min. Droplets from the microfluidic devices were collected in amber SepCap vials (Thermoscientific, Waltham, MA, USA, C4015-99) and incubated at 68 °C for 60 min using a Multi-Therm shaker (Benchmark Scientific, Sayreville, NJ, USA). After incubation, the droplets were imaged using a Leica SP5 confocal microscope, and images were analyzed with ImageJ to determine the relative fluorescence intensity (RFI) of each droplet. A threshold was determined by computing µ_NTC + 3·σ_NTC, where µ_NTC is the mean and σ_NTC is the standard deviation of the RFI of the 0 copies/mL sample droplets. Droplets with RFI greater than the threshold were classified as positive, while droplets less than or equal to the threshold were classified as negative. One can then use Poisson statistics with the number of positive and negative droplets to calculate a concentration for each sample [47].
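A minimal sketch of this thresholding and Poisson back-calculation (the helper names and fluorescence values below are our own; the 300 pL droplet volume follows the text):

```python
import numpy as np

V_DROPLET_ML = 300e-12 * 1e3   # 300 pL droplet volume expressed in mL

def quantify(rfi, rfi_ntc):
    """Digital LAMP quantification: threshold at mean + 3*SD of the
    no-template control, then invert Poisson statistics."""
    threshold = rfi_ntc.mean() + 3 * rfi_ntc.std()
    positive = np.sum(rfi > threshold)
    total = rfi.size
    # Fraction of negative droplets follows P(0) = exp(-lambda)
    lam = -np.log(1 - positive / total)   # mean copies per droplet
    return lam / V_DROPLET_ML             # copies per mL

# Hypothetical fluorescence readings for illustration
ntc = np.random.normal(100, 5, 500)                    # no-template control
sample = np.concatenate([np.random.normal(100, 5, 450),
                         np.random.normal(400, 20, 50)])  # ~10% positive
print(f"{quantify(sample, ntc):.2e} copies/mL")        # ~3.5e5 copies/mL
```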
Bead Mixer
A blind hole with a diameter of about 9 mm was drilled into the side of a 3 mL plastic syringe (CareTouch, Westminster, CO, USA) at the 0.5 mL mark. A small DC motor with a plastic impeller which was originally designed for a bead-beating sample preparation device (Claremont Bio 01.340.48 OmniLyse®Kit) was retrieved and carefully positioned into the syringe through the blind hole. The motor with the impeller was affixed to the syringe with cold weld steel-reinforced epoxy (JB Weld, Marietta, GA, USA) such that the blind hole was completely sealed and airtight. The epoxy was allowed to set for 48 to 72 h. The impeller mixer was powered by a 1.5 V DC power supply (SI, Figure S1).
Bead-in-Droplet Emulsions
Hard-shell polymethyl methacrylate (PMMA) beads (PolyAn Microspheres Po-105 00 020 and Alpha Nanotech colloidal PMMA) of 20 µm in diameter were used in the bead encapsulation experiment. A mixture of the beads and 0.1% v/v Tween 20 in nuclease-free water at working concentrations of 0.15, 0.2 and 0.3 beads/droplet (λ) was used as the dispersed phase for the experiments. A mixture of mineral oil (Sigma Aldrich M3516-1L), 0.1 wt% Triton X-100 (Fisher Scientific, Waltham, MA, USA) and 3 wt% ABIL EM 90 (Evonik, Essen, Germany) was used as the continuous phase. The dispersed phase (bead suspension) was aspirated into a modified syringe and loaded onto a syringe pump (KD Scientific, Holliston, MA, USA, KDS100). A 1.5 V DC power supply was connected to the mixer to keep the bead solution homogeneous. The continuous phase was put into a 10 mL plastic syringe (CareTouch, Westminster, CO, USA) and loaded onto a syringe pump. The continuous and dispersed phases were introduced into the droplet generation device using syringe pumps at flow rates of 30 µL/min and 1-7 µL/min, respectively. A period of about 5 min was allowed for the cartridge to be primed and for the droplet generation to stabilize. The droplets were collected from the cartridge into 1 mL amber SepCap vials (Thermoscientific, Waltham, MA, USA, C4015-99). The excess oil from the continuous phase was poured off, and the droplets were put onto a microscope slide and mounted onto a microscope (Omax microscope 3152102) for imaging. Micrographs of the droplets were taken using the Amscope microscope camera md35 and Amscope software version 4.
Image Analysis
The images were opened in ImageJ. The scale was set according to the scale bar on the images, and the unit was set to µm. The images were converted to 8-bit grayscale images, and speckles and noise were filtered from the images. The threshold of the images was adjusted to convert them to binary images. The images were converted to masks to invert black and white, making the droplets appear white. The droplets were then analyzed to calculate the area of each droplet. The diameter and volume of each droplet were calculated from the area of the droplet. The droplets containing beads were manually counted, and the number of beads in each droplet was recorded. The data were compiled in Excel (Microsoft Office) and parsed into Python 3.0 for further analysis and visual presentation.
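The conversion described here is simple geometry: the equivalent-circle diameter is d = 2√(A/π) and the spherical volume is V = πd³/6, with 1 pL = 1000 µm³. A short sketch with hypothetical areas:

```python
import numpy as np

# Hypothetical ImageJ particle areas (um^2) for three droplets
areas_um2 = np.array([1590.4, 2123.7, 1810.2])

# Equivalent-circle diameter from the measured cross-sectional area
diameters_um = 2.0 * np.sqrt(areas_um2 / np.pi)

# Sphere volume; divide by 1000 since 1 pL = 1000 um^3
volumes_pL = (np.pi / 6.0) * diameters_um**3 / 1000.0

print(diameters_um.round(1))  # ~[45.0 52.0 48.0] um
print(volumes_pL.round(1))    # ~[47.7 73.6 57.9] pL
```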
Picoliter-Scale Droplet Generation
The physics of droplet generation via flow focusing has been well documented, with theory and experiments showing an inverse logarithmic relationship between the Capillary number, Ca = µ_ave(2Q_o + Q_w)/(σhw), and the non-dimensionalized droplet diameter, D_d/D_h, where µ_ave is the average viscosity of the two fluids, Q_o is the oil flow rate, Q_w is the water flow rate, σ is the surface tension, h is the channel height, w is the channel width, D_d is the diameter of the droplet, and D_h is the hydraulic diameter of the channel, 2hw/(h + w) [48,49]. These flow focusing studies demonstrate that <100 pL droplets can theoretically be generated with Ca > 0.001 (faster flow rates (Q_o, Q_w) relative to channel dimensions (h, w)) and 144 µm > D_h > 39 µm, or with Ca < 0.001 (slower flow rates (Q_o, Q_w) relative to channel dimensions (h, w)) and 14 µm < D_h < 39 µm [48] (SI, Section S2). Experimentally, the authors test devices with maximum channel heights of 27 µm [48] or widths of 71 µm [49]. In these studies and others [8][9][10][11][12][13], pL droplets are generated by using small channel widths (<100 µm) facilitated by photolithographic processes in cleanrooms. As our objective was to develop a device that generates pL droplets without complex fabrication processes, we were limited to the channel widths of 100 µm or greater that an SLA 3D printer is capable of printing in a mold. Therefore, our device design would need to be in the Ca > 0.001 regime, with faster flow rates relative to channel dimensions.
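As a rough numerical check of this regime argument, the sketch below evaluates Ca for this device's geometry and flow rates; the viscosity and interfacial tension values are our assumptions for a mineral oil/water/surfactant system, not measurements from the paper:

```python
# Capillary number at the flow-focusing junction: Ca = mu_ave*(2*Qo + Qw)/(sigma*h*w)
h = w = 100e-6              # channel height and width, m
Qo = 75e-9 / 60             # oil flow rate, 75 uL/min converted to m^3/s
Qw = 1e-9 / 60              # water flow rate, 1 uL/min converted to m^3/s
mu_ave = 0.015              # assumed average viscosity, Pa*s
sigma = 0.005               # assumed oil-water interfacial tension with surfactant, N/m

Ca = mu_ave * (2 * Qo + Qw) / (sigma * h * w)
Dh = 2 * h * w / (h + w)    # hydraulic diameter
print(f"Ca ~ {Ca:.2f}, Dh = {Dh*1e6:.0f} um")  # Ca ~ 0.76, far above 0.001
```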
With the limits on our device's physical features established, we 3D printed a mold and made a PDMS cast of 100 µm channel width and 100 µm channel height without a cleanroom, photolithography processes, or complex instrumentation (Figure 1). We chose oil and water flow rates such that the droplet generation device would have Ca ≫ 0.001, with Q_o = 25 to 100 µL/min and Q_w = 1 µL/min (SI, Section S2), which resulted in droplets of diameters 45 to 112 µm (48 to 736 pL) (Figure 2). The droplets generated from this device are monodisperse (Figure 2B, coefficient of variation (CV) from 2-12%), which is in the range of droplets generated from other devices [50,51]. As expected, there is an inverse power relationship between droplet volume and oil flow rate [49], showing that devices fabricated with 3D printed molds give similar consistency and expected performance at the picoliter scale as devices made with photolithography in a cleanroom. Because this device is made from a 3D printed mold, researchers can iterate prototypes rapidly without undergoing the time- and resource-consuming processes of photolithography; additionally, the droplet generation module can be part of a larger 3D printed mold that includes modules for executing other upstream or downstream assay processes.
Droplet Digital Loop-Mediated Isothermal Amplification
To explore the utility of this droplet generation device in molecular diagnostic applications, droplet digital loop-mediated isothermal amplification (ddLAMP) was performed to detect and quantify a DNA target. Digital LAMP is an emerging nucleic acid (NA) amplification method that can quantify the NA concentration of a sample with high accuracy and precision, even in the midst of temperature, reaction time, or imaging variance [52]. NA quantification via dLAMP is useful in several applications, such as viral load measurements for HIV [53], hepatitis C virus genotyping [54], and rapid antibiotic susceptibility testing [55]. Current dLAMP methods partition the sample into pL to nL droplets with microfluidic devices made using photolithography [56,57], wet etching [52][53][54][55], or fused-silica capillaries [58]. Our droplet generation device made from a 3D printed mold could make dLAMP more accessible by eliminating the need for complex facilities or instruments and enabling integration with other amplification or detection modules.
We tested the feasibility of encapsulating LAMP reagents with target DNA and primers into droplets with our device (Materials and Methods). After generation, the droplets were incubated at 68 °C for 60 min for amplification of DNA via LAMP, and SybrGreen fluorescence was measured to indicate the presence or absence of amplification product within each droplet (Figure 3A). Five DNA dilutions were tested, and the positive droplet percentage was plotted against the prediction from Poisson statistics (Figure 3B), assuming a 10% LAMP efficiency and 300 pL droplet volume (SI, Section S3).
Dense Bead-in-Droplet Emulsions
Interest in using microparticles as delivery systems in various technologies has been widely researched, especially in combination with microdroplets for biological applications [59][60][61][62]. This is due to their high surface-to-volume ratio and the ease of immobilizing biorecognition molecules on them, as well as the potential for compartmentalized single-molecule assays [63][64][65]. Single particle encapsulation in droplets, however, faces two major challenges: sedimentation due to particle density [62], and mechanistic single particle encapsulation [41,66].
Particle density poses a challenge when loading microparticles into encapsulation devices because higher density particles (>1 g/mL) sediment in the syringe and delivery tubing, causing a nonhomogeneous distribution of microparticles in droplets (SI, Figures S1A and S2). This can be addressed by offsetting the bead density, suspending the beads in denser fluids such as glycerol [62]; however, such fluids may not be compatible with the intended bio-application. For example, glycerol at 50% v/v inhibits NA amplification, thereby defeating the purpose of using microbeads for NA applications (SI, Figure S4). To circumvent this challenge, researchers have used gel beads with a density similar to water, which ensured a binary distribution of beads in the droplets [65,[67][68][69]. However, this method is time-consuming, requiring a particle velocity of ~50 µm/h [41] to achieve single-particle encapsulation; furthermore, some multiplexed nucleic acid detection methods are not compatible with beads made in gel form [70][71][72].
Price et al. presented a potentially simple solution by exploiting the sedimentation potential of the beads using a hopper system [62]. They showed, however, that it took 0.8 h (17 µm TentaGel resin beads) and 3.8 h (2.8 µm magnetic beads) of bead introduction before achieving single bead encapsulation. Kim et al. successfully developed a pneumatic system capable of trapping and releasing beads, thus creating deterministic encapsulation of a defined number of beads per droplet [62]. This system is not simple to develop or operate and is thus unsuited to low-cost point-of-care devices that can integrate with other modules.
Our goal was to present a simple, easy-to-fabricate method to encapsulate single dense beads in droplets that can be used for further downstream analysis. It is important to encapsulate single beads, as opposed to multiple beads, to avoid cross-contamination or confusion over which target molecule or bead is in a droplet. The idea is to vertically orient the syringe pump while keeping the beads suspended by mechanical agitation, which prevents loss of beads due to sedimentation in the flow tubing and in the syringe (Figure S1B), then pump the contents directly into the droplet generation cartridge (Figure 1B). Using this principle, we set up bead encapsulation with the droplet generation device such that λ ≈ 0.15, 0.2, and 0.3 beads/droplet, where λ represents the average number of beads per droplet input into the device (Figure 4). We observed that our dense bead encapsulation method agreed well with Poisson predictions (Figure 4B). Importantly, the droplet generation device resulted in <2% of droplets containing more than 1 bead at λ ≈ 0.15, <4% at λ ≈ 0.2, and <6% at λ ≈ 0.3.
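As a cross-check on these percentages, the Poisson prediction for the fraction of multi-bead droplets can be computed directly from λ (see also SI, Section S3). The short Python sketch below is our own illustration, not code from this work; it evaluates P(k) = λ^k e^(−λ)/k! at the three loading levels used above.

import math

def poisson_pmf(k, lam):
    # Probability that a droplet contains exactly k beads at mean loading lam
    return lam ** k * math.exp(-lam) / math.factorial(k)

for lam in (0.15, 0.2, 0.3):
    p_multi = 1.0 - poisson_pmf(0, lam) - poisson_pmf(1, lam)
    print(f"lam = {lam:.2f}: empty = {poisson_pmf(0, lam):.1%}, "
          f"single = {poisson_pmf(1, lam):.1%}, >1 bead = {p_multi:.1%}")

# Predicted >1-bead fractions: ~1.0%, ~1.8%, and ~3.7%, consistent with the
# observed <2%, <4%, and <6% reported above.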
Conclusions
Using design principles from droplet microfluidic device literature, we designed and developed a microfluidic device fabricated without complex equipment or cleanroom facilities that can generate sub-100 pL droplets and encapsulate dense beads with a Poisson-like distribution. Because the device is made from a 3D printed mold, researchers can iterate prototypes rapidly without undergoing the time and resource-consuming processes of photolithography; additionally, the droplet generation module can be part of a larger 3D printed mold that includes modules for executing other upstream or downstream assay processes, such as sample preparation, NA amplification, or single cell analysis.
While simple instrumentation was used to fabricate the microfluidic device, we still needed a syringe pump for operation of the device to generate consistent and controlled droplet sizes. Further improvements need to be made to our design to make it more amenable to point-of-care settings, such as a pumping lid [73], or other equipment-free pumping mechanisms [74]. Another limitation is that due to the 3D printer's minimum channel dimension (~100 µm), the lowest droplet diameter achieved was 45 µm (48 pL). Lower sizes could be possible in the future with the next generation of 3D printers that print channels down to 15 µm [75].
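For reference, the quoted 48 pL figure follows directly from the spherical-droplet volume formula; the snippet below is our own arithmetic, not code from this work, converting the 45 µm diameter to picoliters.

import math

def droplet_volume_pL(diameter_um):
    # V = (4/3) * pi * r^3, evaluated in cubic micrometers
    r_um = diameter_um / 2.0
    v_um3 = (4.0 / 3.0) * math.pi * r_um ** 3
    return v_um3 * 1e-3  # 1 um^3 = 1e-3 pL

print(f"{droplet_volume_pL(45):.1f} pL")  # ~47.7 pL, matching the reported ~48 pL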
Other microfluidic devices have encapsulated beads in a non-random distribution and thus achieved a much higher percentage of droplets with a single bead [41,43], though the beads in those studies have a density similar to water. While the mechanism behind the non-random distribution is unexplained, similar designs could potentially be combined with the dense bead mixing method studied here to obtain higher percentages of droplets with single beads. In its current form, this device enables research and innovation into assays or methods that must use beads denser than water, such as PMMA or magnetic beads, and thus overcome the sedimentation effect. Because it can easily be printed and combined with others as part of a larger device, microfluidic sorting mechanisms can also be used to concentrate the beads downstream if desired [76].
Future research directions from this work can include: eliminating the need for a syringe pump for <100 pL droplet generation, adapting the device to other biological assay applications beyond digital LAMP, beating Poisson encapsulation statistics for dense beads to reduce the waste of empty droplets, or adapting the BiD method for tagging multiple biomarkers. Due to the simple instrumentation used, this work enables rapid prototyping for a variety of biological applications of droplet microfluidic devices and dense bead encapsulation.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/mi13111946/s1, Section S1: Syringe with mixer to overcome sedimentation effect of dense beads; Section S2: Capillary number calculations for picoliter-scale droplet generation design; Section S3: Poisson prediction of positive droplet percentage; Section S4: Effect of Glycerol on LAMP Amplification; Section S5: Pitfalls of 3D Printing Fabrication of Microfluidic Cartridges. Figure S1: (A) Tube connecting syringe containing bead suspension to the droplet generation cartridge; red arrow shows region of bead sedimentation. (B) Syringe design for mechanical resuspension and homogenization of dense particles for vertical delivery. The DC motor is powered using a 3 V battery. Figure S2: Without the syringe mixer in Figure S1, bead sedimentation happens in the syringe and tubing, leading to the encapsulation of multiple beads per droplet. Figure S3: Denser fluids, such as glycerol, may improve bead buoyancy, but glycerol inhibits LAMP amplification (blue trace vs. red trace). Bead density = 1.18 g/cm³, glycerol density = 1.26 g/cm³. Figure S4: Microcapillary lines imprinted by 3D printed mold, often due to printer-head misalignment that occurred during prolonged prints. Figure S5: Micrograph showing curved vertices imprinted from 3D-printed mold. Figure S6: Irregularities in chamber dimensions due to myriad factors, including incompletely cured PDMS and built-up PDMS deposits due to mold reuse. Note that the displayed images contain channels designed to have widths of 50 and 100 µm. Figure S7: Frosted PDMS molded on an improperly cleaned 3D printed mold.
Conflicts of Interest:
The authors declare no conflict of interest.
Effects of Increasing Glycerin Levels in Broiler Chickens
Glycerin contributes to the animal’s energy metabolism as an important structural component of triglycerides and phospholipids. The present study was carried out to evaluate the effect of replacing corn with 0, 5, 10, and 15% of glycerin in terms of performance, digestibility, carcass yield, relative weights of gastrointestinal tract (GIT) organs, and nutrient metabolism. Four hundred chickens (40.0 ± 0.05 g) were distributed in a completely randomized design with four treatments and five replicates. Growth parameters were measured at 7, 14, 21, and 42 days. Digestibility of crude protein and fat, carcass yield, relative weights of GIT organs, and the biochemical blood profile were measured. The results were subjected to analysis of variance, with means compared by Tukey’s HSD test (α = 0.05). The inclusion of 5%, 10%, or 15% of glycerin did not influence performance or affect the crude protein and fat digestibility in broilers (p > 0.05) when compared to the basal (0%) diet. Similarly, increasing glycerin levels showed no significant influence (p > 0.05) on the relative GIT organ weights, carcass yield, or nutrient metabolism. Thus, we concluded that glycerin may be included in broilers’ diets at levels of up to 15%.
Introduction
Glycerin resulting from biodiesel production can be used to replace part of the cereal grains in the animal diet, as an alternative source of energy for farm animals in biofuel-producing countries [1]. Biodiesel is a product of the transesterification of triglycerides of both animal and vegetable origin; in this reaction, alkaline catalysts are used. The transesterification process is an alcoholysis reaction whose final products are three molecules of methyl or ethyl esters of fatty acids (biodiesel) and one molecule of glycerol or glycerin [2], as can be observed in Figure 1. The term glycerin applies to purified commercial products, usually containing at least 95% glycerol; its composition influences characteristics such as color, odor, and impurities. When purified, glycerin presents a glycerol content of up to 99.5%.
In the chemical constitution of glycerin, there is a significant amount of glycerol, a molecular component that is easily absorbed through the intestinal wall; it has a low molecular weight and is absorbed directly, without the mediation of micelles. After absorption, it is transported in the blood to the liver via aquaglyceroporins. In the liver, most of it is metabolized; reabsorption and metabolization also occur in the kidneys to avoid loss in the urine [3,4]. Glycerin contains glycerol, a major component synthesized from fatty acids known as triacylglycerols and a source of chemically storable energy, as it is a precursor for the metabolization of glycerol-3-phosphate, a typical molecule of the glycolytic pathway. There, glycerol is converted into glucose, participating in the Krebs cycle or in gluconeogenesis, impacting the energy supply to the animal's body [5].
Therefore, glycerin plays a fundamental role in animal and human energy metabolism, because the glycerol present in its composition is an important structural component of triglycerides and phospholipids. The glycerol provided through the inclusion of glycerin in the diet is metabolized in the liver, where it can be used in the lipogenesis or gluconeogenesis pathways. In a situation of excess energy, glycerol acts as a precursor for the synthesis of triglycerides, while in a phase of energy shortage, it provides carbon skeletons for gluconeogenesis or produces energy through glycolysis and the Krebs cycle [6,7].
Glycerin is recognized as safe for use in animal feed [3]. However, it has a variable composition, which affects its nutritional value, depending on the characteristics of the feedstock used in the biodiesel production process, in which transesterification results from the action of methanol on the vegetable oil. Salt is then added, and this salt, when in excess, is a special concern for the use of glycerin in poultry production [8].
Up to 8% of sodium or potassium salts can be present in glycerin. In chicken feed, as salt excretion increases, litter quality worsens. A diet formulated with 15% crude glycerin can contain around 1.26% sodium, whereas a basal ration usually contains a lower amount, around 0.3% sodium [9].
The production of biodiesel in Brazil, due to the type of processing, can lead to glycerin containing around 7% salt. Thus, adding glycerin to the diet may supply sodium in excess of the nutritional needs of broiler chickens, since the Brazilian Table of Nutritional Requirements indicates approximately 0.183% to 0.227% of sodium in the feed. The chemical quality of Brazilian glycerin originating from biodiesel processing is therefore a concern for nutritionists when formulating glycerin-containing diets for animals [9,10].
The concern about excess salt in the diet of chickens arises because it can result in metabolic imbalance. For example, salt stimulates water consumption, and increased water intake results in higher litter moisture, which, in turn, increases ammonia excretion. This can cause respiratory problems in the birds, affecting their health and, consequently, the performance of the chickens [8,11].
In animal nutrition, studies have shown the benefits of glycerol in the diets of production animals [12]. When evaluating the ingestive behavior of sheep supplemented with increasing levels of glycerol, it was observed that the inclusion of up to 12% did not significantly interfere with nutrient intake, feed efficiency, or the rumination of ingested dry matter and fiber.
So, the aim is to use glycerin appropriately, considering its chemical composition, so that it contributes to the animals' energy needs and can thus replace feed sources such as cereals [13].
Studies on glycerol levels in the diets of non-ruminant animals, for example pigs, have been conducted; the inclusion of 2.76% crude glycerin had a significant effect on increasing weight gain in piglets during the nursery phase [14]. The use of glycerin in poultry diets was reported as early as 1976 [15]. Later studies used glycerin inclusion levels of between 5% and 10% [16,17]. Taken together, these experiments indicate that glycerin can be added to broiler diets without adverse effects. However, gaps in knowledge remain.
The effects of including glycerin in the diet of chickens are still not fully known, including the appropriate inclusion levels and the effects on development during production, on nutrient metabolization, and on weight gain during rapid growth. Therefore, the present study was carried out to evaluate the effect of 0%, 5%, 10%, and 15% glycerin on performance, metabolism of protein, fat, and dry matter, carcass yield, and the relative weight of gastrointestinal tract organs of broiler chickens. This would provide an indication of whether glycerin produced from biodiesel processed in Brazil is a suitable ingredient in the feed of non-ruminant animals, such as broiler chickens, raised in the country to feed people all around the world.
Birds and Housing
The present study was conducted at the Poultry Research Farm of the Federal Institute Goiano (Goiás, Brazil). The care and handling of all experimental birds were performed under protocols approved by the Institutional Animal Care and Use Committee of the Federal Institute Goiano, protocol number 1329210317.
A total of 400 one-day-old male Cobb® chicks were used, weighed individually, and randomly distributed into 20 boxes measuring 1 m × 2 m × 0.60 m. With 20 chickens in each box, the birds were subjected to one of the four treatments, with five replications of each test diet. The chicks had an initial weight of 40 ± 0.26 g and were housed with the floor covered with dry grass and clippings. Feed and clean drinking water were provided ad libitum throughout the experimental period, and each box had a trough-type feeder and a waterer. The chickens were transferred to metabolic cages measuring 0.90 m × 0.60 m × 0.40 m at 35 days of age and, after adaptation, feed and excreta samples were collected (detailed under Metabolism Analysis below). A 24-hour light program was adopted, with natural and artificial light, while temperature and relative humidity were measured using a thermo-hygrometer; the morning and afternoon averages recorded during the experiment are presented in Table 1.
Experimental Design and Dietary Treatments
The chickens were distributed in a completely randomized design with four treatment groups and five replicates each, with 20 chickens per replicate. The treatments consisted of four levels of dietary glycerin inclusion (basal diet, 5, 10, and 15%).
The glycerin used in this experiment was obtained from a local industry and, by composition analysis, contained 80.8% glycerol and 111.0 mg/kg of methanol, with an energy content of 4015 kcal of AME/kg.
The experimental diets were formulated to have similar nutritional value according to [10]. The quantity of each ingredient used to formulate the chicken diets was calculated from the nutritional chemical analysis of each ingredient before starting the experiment, using a program that matched the amounts to the animals' nutritional requirements at each stage of development. The starter diets and their nutritional composition, formulated for the needs of animals aged 1-21 days, are described in Table 2 below.
Measurements
Performance Traits
Body weight was recorded by weighing the birds per treatment group on days 7, 14, 21, and 42. Feed consumption for each treatment was measured from the feed residue on the same days the chickens were weighed, and consumption was estimated as the average of the five replicates of each treatment. The feed conversion ratio (FCR, g/g) was calculated as feed intake divided by body weight gain (BWG, g). All performance parameters were calculated cumulatively for the birds in each of the four treatment groups, considering the average of the five replicates, for each evaluation period (7, 14, 21, and 42 days).
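As a minimal illustration of the cumulative FCR calculation described above (the numbers below are hypothetical placeholders, not data from this study):

def feed_conversion_ratio(feed_intake_g, body_weight_gain_g):
    # FCR (g/g) = cumulative feed intake / cumulative body weight gain
    return feed_intake_g / body_weight_gain_g

# Hypothetical cumulative values for one replicate of 20 birds at 42 d:
fi_g = 61000.0   # feed intake, g (assumed for illustration)
bwg_g = 38000.0  # body weight gain, g (assumed for illustration)
print(f"FCR = {feed_conversion_ratio(fi_g, bwg_g):.2f} g/g")  # ~1.61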
Metabolism Analysis
For this analysis, five birds from each replicate were transferred to metabolic cages at 35 days of age. After adaptation (3 days), feed and excreta samples were collected for 3 days from the 41st day. The excreta collected on trays were identified and stored in plastic bags in a freezer until laboratory analysis. To mark the beginning and end of collection, 1% ferric oxide was used as an indicator.
Diet and excreta samples were stored at −20 °C in preparation for further analyses. After storage, the chicken diets and excreta were thawed, weighed, homogenized, and pre-dried in a forced-circulation oven at 55 °C. The excreta aliquots were ground, and the analyses of dry matter (DM), crude protein (CP), and ether extract (EE) were performed following the methodology of Silva and Queiroz [13] at the Federal Institute Goiano.
Carcass Yield
At the end of the experiment (42 d), five broiler chickens from each repetition were chosen from the group of twenty, according to the average weight, and subjected to 8 h of feed restriction. All were males, as only male chickens were raised in this study, since they represent the category of broilers used for meat production; furthermore, in Brazil, chicken meat production commonly involves rearing only males, due to the better conformation of the commercial cuts demanded by the local consumer. After feed restriction, the selected chickens were sacrificed by cervical dislocation and bled immediately. Subsequently, they were scalded in hot water (55 °C) in an electric bath for two minutes, plucked, and eviscerated manually. The breast, thigh, and abdominal fat were manually removed from the carcasses and weighed, and the percentages relative to body weight were calculated.
The carcass yield of the slaughtered animals was calculated as: carcass yield (%) = (sample weight/live weight) × 100, where the samples are the collected cuts, that is, the portions of the body of the broiler chickens that were evaluated.
Organ Weight, Blood Sampling, and Serum Biochemistry
On days 7, 14, 21, and 42, five broilers from each treatment were euthanized to assess the relative weights (%) of the GIT organs. The gizzard (Giz, %), pancreas (Pan, %), liver (Liv, %), small intestine (SI, %) (from the end of the muscular stomach to Meckel's diverticulum), and large intestine (LI, %) (from Meckel's diverticulum to the rectum, including ceca) were removed and weighed following the methods described by Stringhini [18]. The relative weights of the GIT organs were calculated according to the following formula: organ relative weight (%) = (organ weight/body weight) × 100.
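Both the carcass yield formula above and this relative organ weight formula reduce to the same ratio expressed as a percentage; a small sketch with hypothetical weights (not data from this study) is shown below.

def percent_of_body_weight(part_g, body_g):
    # relative weight (%) = (part weight / body weight) * 100
    return part_g / body_g * 100.0

body_g = 2800.0  # hypothetical live body weight at 42 d, g
print(f"Carcass yield: {percent_of_body_weight(2100.0, body_g):.1f} %")        # 75.0 %
print(f"Relative liver weight: {percent_of_body_weight(56.0, body_g):.2f} %")  # 2.00 %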
To assess the serum biochemical profile, blood was collected by cardiac puncture from the five euthanized broilers per treatment at day 42. Blood samples were processed according to the method described by Minafra [19]. The serum levels of calcium (Ca, mg/mL), phosphorus (P, mmol/L), chlorine (Cl, mmol/L), sodium (Na, mmol/L), potassium (K, mmol/L), total protein (Prot, g/dL), cholesterol (Chol, mg/dL), and triglyceride (Trig, mg/dL) were analyzed. Blood serum analyses were performed in the Laboratory of Animal Biochemistry and Metabolism (Federal Institute Goiano). All analyses were determined in triplicate and measured spectrophotometrically (Spectrum®-Celer, Belo Horizonte, MG, Brazil) using a commercial kit protocol (Doles®, Goiânia, GO, Brazil).
Statistical Analysis
Performance data, carcass yield, relative weights of GIT organs, nutrient metabolism, and serum biochemical parameters were subjected to analysis of variance (ANOVA), and treatment means were compared by Tukey's test at 5% significance using the SISVAR® statistical software (version 5.6, 2019) [20].
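The authors used SISVAR; an equivalent analysis can be sketched with open-source Python tools (scipy/statsmodels), as below. The data here are randomly generated placeholders, not the study's measurements.

import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
levels = ["0%", "5%", "10%", "15%"]
# Hypothetical body weight gain (g) for five replicates per glycerin level
bwg = {lvl: rng.normal(loc=2700, scale=80, size=5) for lvl in levels}

f_stat, p_val = f_oneway(*bwg.values())  # one-way ANOVA across treatments
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.3f}")

values = np.concatenate(list(bwg.values()))
groups = np.repeat(levels, 5)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))  # Tukey's test at 5%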
Results
The effects of glycerin supplementation on broiler performance are shown in Table 5. No significant differences in BWG (g), FI (g), or FCR (g/g) were observed (p > 0.05) across the increasing glycerin levels throughout the experimental period. Diets with 5, 10, or 15% GLY inclusion had statistically similar crude protein and fat digestibility compared to the basal diet (p > 0.05, Table 6). No significant differences were observed in the yield of carcass, breast, thigh, drumstick, or abdominal fat among treatments (p > 0.05, Table 7). The relative weights of the GIT organs are shown in Table 8; no differences were found among broilers fed the 0, 5, 10, or 15% glycerin diets (p > 0.05). The serum levels of calcium (Ca), phosphorus (P), chlorine (Cl), sodium (Na), potassium (K), total protein (Prot), cholesterol (Chol), and triglyceride (Trig) were likewise not affected (p > 0.05) by the increasing levels of glycerin in the diet (Table 9).
Discussion
Increases in feed costs and raw material prices have made it necessary to formulate the most economical and balanced feeds possible with the available co-products. Our study demonstrated that the inclusion of 5, 10, or 15% glycerin in broiler diets does not promote adverse effects on performance, indicating that glycerin may be a viable alternative to partially replace other energy ingredients.
As birds regulate their feed consumption depending on the level of AME in the diet [21], the glycerol present in glycerin can be used in poultry nutrition because of its high energy value [1,22]. Glycerin also contains variable amounts of other substances, which could limit its use as a feed ingredient, since this variability in composition produces diversity in AME values. The AME content of the glycerin used in this research was 4015 kcal/kg. As the experimental diets assumed the same AME at different ages, we found that FI in the basal diet was similar to that in the increasing glycerin dietary groups. On the other hand, [23] showed that during the pre-initial phase (1-10 d) there was higher FI in groups fed 6, 9, and 12% crude GLY compared to the control, which differs from the findings of Romano [17], who observed higher FI in broilers aged 8 to 12 days fed 2.5 and 7.5% crude glycerin than in those fed 10%. However, the same study indicated that as the birds aged, there were no differences in feed consumption.
Studies that included up to 10% crude glycerin in broiler feed in the pre-initial phase observed that it favored the performance of the chickens in the initial phase; however, as the animals grew, the 5% level instead positively affected their development [24].
Fontinele et al. [25] and Sehu et al. [26] suggested that 10% is the optimal level of inclusion of crude glycerin in the diet to obtain beneficial effects on broiler performance. Despite this, [27] reported that the addition of 10% glycerin to the feed resulted in feed flow problems that may have caused a reduction in FI and, consequently, in BW. In this study, the inclusion of 5, 10, or 15% resulted in similar body weight gain between the groups. Furthermore, no feed consumption problems were observed, even in the 15% glycerin treatment, indicating that glycerin can be included at levels of up to 15% for broiler chickens without negative effects on performance parameters.
Glycerin can affect the physical structure of the feed, in terms of texture and consistency, encouraging consumption by the animals. Despite the increase in consumption, to date no interference has been observed in weight gain or carcass quality. In meat quails, however, there was an increase in abdominal fat, especially when fed glycerin of vegetable or mixed origin, a fact not observed in quails fed semi-purified glycerin from biodiesel manufacture [28].
Semi-purified glycerin at levels of up to 16% did not negatively interfere with feed consumption or weight gain, despite worsening feed conversion as the dietary level increased, a fact that may be due to dietary sodium levels, which can increase moisture in the excreta and thus interfere with energy metabolism [29,30].
As a performance enhancer for broiler chickens, the literature reports that glycerin levels of 5% and 10% can positively influence animal performance, particularly feed conversion. It is also important to highlight the possibility of reduced daily weight gain when glycerol is supplied in excess: excess metabolites can be lost in the urine, so the extra energy is wasted and the potential for weight gain is reduced, especially in the final phase of production, when the chicken's organs are mature and capable of recycling nutrients or excreting the excess when necessary, as is the case with the kidneys [31,32].
The lack of effect may also relate to the type of glycerin (raw, mixed, or semi-purified), as the types do not differ significantly in their effect on chicken performance. A published study on different types of glycerin in chickens up to 21 days of age, that is, during the structural development phase of the animals' organism, found that despite the greater nutrient requirements at this stage, feed consumption and weight gain remained unaffected by the inclusion of glycerin [33].
Glycerol is known as an easily digestible source of energy [34,35]. The bioavailability of glycerol in broiler chickens is believed to be greater than that of other energy sources except oils and glucose. Studies have reported the impact of glycerin inclusion on energy digestibility, but only a few have shown effects on protein and fat digestibility.
In glycerol metabolism, this molecule serves as an energy source: after lipase action, it is phosphorylated by glycerol kinase and then oxidized to dihydroxyacetone phosphate, a molecule which, in turn, enters the glycolytic pathway to form ATP (adenosine triphosphate). This reaction can be observed in Figure 2. The glyceraldehyde phosphate formed is oxidatively phosphorylated, producing NADH and resulting in 1,3-bisphosphoglycerate, which is subsequently converted to phosphoenolpyruvate, yielding more ATP. Finally, the pyruvate molecule can be fermented to produce ethanol or lactate, or it can be converted to acetyl-CoA, releasing carbon, and enter the citric acid cycle [36].
In addition to the energy produced through glycerol metabolism, excess pyruvate can be stored in the form of fat. That is, when excess pyruvate is converted into acetyl-CoA, this molecule can be carboxylated by acetyl-CoA carboxylase and enter the fatty acid synthesis pathway. Upon synthesis, the resulting fatty acids are converted into triacylglycerols, which are deposited in adipocytes. This process is reversed toward gluconeogenesis when glucose levels are low [36].
Therefore, the inclusion of fat in the diet increased the digestion retention time in chickens, which subsequently increased the use of energy, protein, and fat [37,38]. Noting that inclusion levels influence bird health, and that glycerin can be added at levels of up to 5% to broiler diets without an effect on performance [39], we hypothesized that a 10% increase in dietary glycerin might influence the utilization of protein and fat in the diets, since the chemical components of glycerin resemble the enzymatic hydrolytic products of dietary fat metabolism. However, we found that the addition of up to 15% glycerin to the diets did not influence the protein and fat digestibility of broiler chickens.
Our data indicated that chickens fed with up to 15% glycerol experienced a slight numerical increase in carcass, breast, thigh, and drumstick yield, but there were no statistically significant differences between treatments. In contrast, ref. [27] stated that the inclusion of 10% glycerin in the chicken diet can cause a reduction in the yield of whole carcasses, without affecting the yield of cuts such as breasts and thighs.
Wattanachant et al. [40] provided 2.5%, 5%, 7.5%, and 10% glycerin in pellets ad libitum to one-day-old broilers until 42 days of age. No significant difference was found in live weight variation or feed intake between birds receiving diets with and without glycerin supplementation; however, low growth performance and carcass yield were observed when the glycerin level reached 10%. Similarly, ref. [23] observed no difference in the rate of fat deposition when chickens were fed glycerin levels of 3%, 9%, 12%, or 15%. However, our results disagree with [27], who reported that birds fed 2.5 or 5% glycerin had significantly higher breast yield than the control diet. The available data suggest that 15% GLY supplementation is not harmful to carcass, breast, thigh, and drumstick yield and does not lead to the deposition of abdominal fat.
There was no effect of adding 5, 10, or 15% glycerin to the diet on the relative weights of the GIT organs of broiler chickens compared to the control group. This result agrees with that of Topal and Ozdogan [35], who concluded that the inclusion of glycerol in diets did not affect the weight of the internal organs of male and female broilers, except the heart weight of males in the 8% glycerin group. An excess of glycerol in the diet can induce anatomical, physiological, and biochemical adaptation of organs [37].
There is considerable concern about the use of glycerin as a dietary approach because the enzymatic capacity for glycerol metabolism is limited. When the dietary glycerol level is high, consumption can exceed the metabolic capacity, consequently increasing blood levels [39]. Glycerol is an important structural component of triglycerides and phospholipids. Dietary glycerol is metabolized in the liver, where it can be used in the lipogenesis or gluconeogenesis pathways. In a situation of excess energy, glycerol acts as a precursor for the synthesis of triglycerides, while in a situation of energy scarcity, it is used to provide carbon skeletons for gluconeogenesis or to produce energy through glycolysis and the Krebs cycle [41,42].
Our results show that fat parameters at levels of 5, 10, or 15% glycerin in the diet remained similar to those in the control group, clearly indicating that 15% glycerol can be used without negative effects on fat metabolism. In contrast, [43] found that the estimated plasma glycerol level increased 2 h after feeding with 5% glycerin supplementation, from 0.65 (control) to 4.36 mmol/L.
According to [44], glycerol can affect hydration status, nutrient metabolism, or both. Water consumption was not measured in the present experiment; however, serum electrolytes, the Ca/P ratio, and metabolism parameters were evaluated, as in other studies [45,46], to determine whether the addition of increasing glycerin levels to diets could disrupt nutrient balance. Another concern about the use of glycerol relates to the variable residual levels of methanol, sodium, potassium, and moisture, which can also lead to electrolyte imbalance. Our results show that the addition of glycerin to broiler diets does not negatively influence serum biochemical parameters.
Conclusions
Our results indicate that glycerin from the processing of biodiesel of plant origin can be included in the diets of broiler chickens at levels of up to 15%, as it does not present negative effects for production. This is an ingredient of interest while corn prices are rising, due to the low supply of the grain and the competition for it as human food, whereas glycerin is in ample supply and can be used on a large scale for animal feeding, since there is little competition for this ingredient as food.
The study followed the guidelines of the National Council for the Control of Animal Experimentation (CONCEA) and was approved by the Ethics Committee on the Use of Animals of the Instituto Federal Goiano (CEUA/IF Goiano), protocol number 1329210317.
Informed Consent Statement: Not applicable.
Figure 1. Transesterification reaction of oil, illustrating the resulting molecules: biodiesel and glycerol. Source: adapted by the author based on [2].
Table 2. Ingredients and the calculated nutritional composition of starter diets (1-21 d) according to increasing levels of glycerin.
Table 3 contains the nutritional composition of the diet according to the needs of the animals in the growth phase, which covers the period from 22 to 35 days of age.
Table 3. Ingredients and the calculated nutritional composition of grower diets (22-35 d) according to increasing levels of glycerin.
Table 4 presents the nutritional composition of the diets used to meet the nutritional needs required in the period from 36 to 42 days of life.
Table 4. Ingredients and the calculated nutritional composition of finisher diets (35-42 d) according to increasing levels of glycerin.
Table 5. Body weight gain (BWG), feed intake (FI), and feed conversion ratio (FCR) in broilers fed with increasing levels of glycerin throughout the experimental period.
SEM: Standard error of mean; CV: Coefficient of variation.
Table 6. Metabolism coefficients of crude protein, ether extract, and dry matter in broilers fed with increasing levels of glycerin in the finisher phase.
SEM: Standard error of mean; CV: Coefficient of variation.
Table 7. Carcass yield of broilers at 42 days of age, fed with increasing levels of glycerin.
Table 8. Relative weights of gastrointestinal tract organs in broilers fed with increasing levels of glycerin.
SEM: Standard error of mean; CV: Coefficient of variation.
Table 9. Serum biochemical parameters of broilers at 42 days of age, fed with increasing levels of glycerin.
Skin Endometriosis at the Caesarean Section Scar: A Case Report and Review of the Literature
Cutaneous endometriosis is one of the rare gynecological conditions. Endometriosis is defined as the presence of endometrial glands and stroma outside the endometrial cavity. It commonly occurs in pelvic sites, such as the ovaries, cul-de-sac, bowel, or pelvic peritoneum. Endometriosis at an incisional scar is difficult to diagnose because of its nonspecific symptoms. Usually, patients complain of pain at the site of the incision during menstruation. The main causes in most of the reported cases are obstetric and gynecological surgeries: endometrial tissue may be directly implanted in the scar during the operation and, under hormonal stimulation, proliferate and form scar endometriosis. Diagnosis is usually made following histopathology, and a wide excision is recommended to prevent recurrence. We report a case of a 33-year-old woman presenting with a brownish mass on the lateral aspect of the Pfannenstiel incision from a previous cesarean section scar. The symptoms appeared two years after her operation. The patient had cyclical pain and brownish discharge from the lesion during menstruation. Excision of the skin lesion with the underlying subcutaneous tissue showed multiple, minute, firm hemorrhagic foci. Histopathology was performed and revealed benign endometrial glands and stroma in the tissues, confirming the diagnosis of scar endometriosis. Cutaneous endometriosis is an uncommon gynecological condition that is difficult to diagnose because of its nonspecific symptoms; usually, it is confused with other dermatological and surgical diseases, which delays diagnosis and management. Surgical scar endometriosis following obstetric and gynecological procedures has become more frequent recently due to the increase in the number of cesarean sections worldwide. Health care providers should suspect cutaneous endometriosis in any woman with pain and a lump in the incisional scar after pelvic surgery.
Introduction
Abdominal wall endometriosis is a rare gynecological condition and is highly related to a previous history of abdominal surgery; although rare, it is the most common site of extrapelvic endometriosis. Surgical scar endometriosis following obstetric and gynecological procedures has become more frequent recently due to the increase in the number of cesarean sections worldwide.
Case Presentation
We report the case of a 33-year-old woman who presented with a brownish mass and discharge during menstruation from the lateral aspect of the Pfannenstiel incision of a previous cesarean section scar. A review of her past medical and surgical history revealed that she had undergone three lower segment cesarean sections and one laparoscopic surgery for a painful right ovarian cyst during her first pregnancy. The mass started to appear two years after her last cesarean section. The patient had cyclical pain and brownish discharge from the lesion during menstruation.
A physical examination revealed a palpable, tender, subcutaneous mass, measuring 2x3 cm, located on the lateral aspect of the Pfannenstiel incision, with multiple tiny orifices protruding on the skin (Figure 1). The lesion was excised with the underlying subcutaneous tissue, which showed multiple, minute, firm hemorrhagic foci; histopathology revealed benign endometrial glands and stroma, confirming the diagnosis of scar endometriosis. The postoperative period was uneventful, and periodic follow-up for 11 months yielded no recurrence.
Discussion
Endometriosis is a chronic inflammatory condition characterized by the presence of endometrial glands and stroma outside the uterine cavity. It mainly causes painful symptoms and infertility, while some women experience no symptoms at all. The prevalence in the general female population is 2% to 10% but reaches up to 50% in infertile women [1]. The main etiology of endometriosis is not clear, but many studies suggest the hematogenous or lymphatic spread of stem cells from bone marrow or coelomic metaplasia [2].
It commonly occurs in pelvic sites, such as the ovaries, cul-de-sac, bowel, or pelvic peritoneum. Extrapelvic endometriosis can also occur but less commonly, including sites such as the abdominal wall, lung, pleura, bladder, omentum, or bowel. It is estrogen-dependent and commonly affects women during their reproductive age.
Abdominal wall endometriosis is one of the major extrapelvic sites and usually is highly related to abdominal surgeries and obstetric and gynecological procedures [3] like a cesarean section, hysterectomy, amniocentesis, tubal ligation, appendectomies, umbilical hernioplasties, and laparoscopic trocar tracts [4]. Primary cutaneous endometriosis in the umbilicus was reported in a patient without any history of abdominal surgery [5].
Cutaneous endometriosis is defined as the presence of endometrial tissue implanted in the skin and is estimated to account for less than 1% of ectopic sites. It is classified as primary cutaneous endometriosis, which occurs spontaneously without any previous operation, and secondary cutaneous endometriosis, which most commonly occurs after abdominal surgery [5].
Most of the cases are referred to general surgeons for evaluation because they are commonly misdiagnosed as granuloma, hematoma, incisional hernia, keloid, or malignancy [4,5]. Surgical scar endometriosis following cesarean section is more frequent recently, mainly due to an increase in the number of cesarean sections.
The Pfannenstiel incision of a cesarean section scar is the most common site of abdominal wall endometriosis with an incidence of approximately 0.03% to 0.4%. Some published cases suggest a high incidence after a hysterectomy due to high cellular replication from early decidua that has pluripotential effects [6].
The pathogenesis of scar endometriosis is very complicated, and the main causes in most of the reported cases are obstetric and gynecological surgeries. Endometrial tissue may be directly implanted in the scar during operation and, under hormonal stimulation, proliferate and form scar endometriosis [7]. During surgery, the uterus is opened, and cells easily move into the pelvic cavity through amniotic fluid and are transported into ectopic sites, such as the skin, subcutaneous tissues, or muscles of the abdomen and pelvis, near the scar [8]. Implanted endometrial cells at the new site are capable of proliferating due to a bloody environment and hormonal effects allowing them to grow and form a mass that leads to clinical symptoms.
Clinically the main complaint is a cyclic or non-cyclic pain, abdominal swelling around the wound, or brownish or blood-like discharge during menstruation, as in our case. In a case-control study comparing abdominal wall endometriosis to a control group, there was a significant increase in parity and body mass index with cyclic localized abdominal pain and absent dysmenorrhea in patients with a history of surgery [9]. Scar endometriosis has also been described as a painful swelling of the scar that is worse during menses. The cyclical hemorrhage that results from hormonal stimulation during the menstrual cycle is a diagnostic criterion of scar endometriosis [4]. Spontaneous endometriosis or primary cutaneous endometriosis can also lead to a brownish-colored painful mass with spontaneous bleeding during menstruation [5].
Khan et al. performed a study at the Mayo Clinic, including 2539 women with endometriosis. Of these, 34 women (1.34%) had abdominal wall endometriosis with 41% of the cases diagnosed clinically [9]. Among these cases, 59% had endometriosis at the cesarean section scar.
Diagnosis can be reached after a careful history and physical examination. Ultrasonography (US), computed tomography (CT), magnetic resonance imaging (MRI), and Doppler sonography can be used for preoperative diagnosis. The ultrasound is the first choice to evaluate abdominal and pelvic lesions. Scar endometriosis is usually described as a solid, heterogeneous hypoechoic mass, as found in our case [4]. CT and MRI can help to diagnose and exclude other lesions in the abdominal wall like a hernia, lipoma, granuloma, or tumor. Fine needle aspiration cytology (FNAC) can also be used as a preoperative diagnosis and for excluding malignancy [4]. The most accurate diagnosis is a postoperative histopathology of the specimen, showing stroma and glands in the resected tissues.
Cutaneous endometriosis is mainly treated by a wide local excision with clear margins to prevent recurrence. Medical management with hormones can relieve the clinical symptoms. The recurrence rate after surgery is generally low. In many reported studies, there was no recurrence [10].
Surgery with a wide local excision is the main method of preventing recurrence. A peritoneal wash with saline and isolation of the surgical scar may play a role in the prevention of scar endometriosis. During closure of the superficial layers of the abdomen, needles should be changed and instruments replaced to prevent iatrogenic cellular transport to the scar [10].
Conclusions
Cutaneous endometriosis is a rare condition and difficult to diagnose because of its nonspecific symptoms. Usually, it is confused with other dermatological and surgical diseases, delaying diagnosis and management. A few cases have been reported and these are single case reports or case series. There is no clear guideline for the diagnosis and management of such cases. Health care providers should suspect cutaneous endometriosis in any woman with pain and a lump in the incisional scar after pelvic surgery. Complete excision of the lesion is recommended. Histopathology is important to confirm the diagnosis and to exclude malignancy.
Additional Information
Disclosures
Human subjects: Consent was obtained from all participants in this study.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
BIOMASS SORGHUM SILAGES WITH SUGARCANE
The storage of forage to be offered at different times of the year is a viable alternative for all production systems, and biomass sorghum has been highlighted for this purpose. It was hypothesized that sugarcane can contribute to the fermentation process of the preserved forage. The objective was to evaluate the inclusion of different levels of sugarcane (0, 20, 40 and 60%) in the silage of three biomass sorghum genotypes (B012, B017 and B018). The material was ensiled in PVC silos, and after 60 days the silos were opened and the contents of dry matter, mineral matter, organic matter, crude protein, neutral detergent fiber, acid detergent fiber, hemicellulose, and lignin, as well as the hydrogen potential (pH), were determined. The experiment was conducted in a completely randomized design, in a factorial scheme with four replications. The data were analyzed through analysis of variance followed by multiple comparison by Tukey's test (α < 0.05) and linear regression. The biomass sorghum genotypes responded satisfactorily to the fermentation process, resulting in quality silages. However, the inclusion of sugarcane did not improve the quality of the silages, and its inclusion in the silage of the genotypes evaluated is not recommended.
INTRODUCTION
In tropical countries, cattle production systems are predominantly pasture-based, which makes the study of forage plants particularly relevant (1). However, seasonal forage production is one of the main factors limiting animal performance (2). Seasonality impairs the uniform development of forage plants throughout the year, resulting in periods of forage abundance in contrast to periods of food shortage associated with reduced nutritional value (3). To mitigate the instability of forage production, silage stands out as one of the main strategies used to store roughage so that it can be supplied in periods of shortage (4; 5).
Biomass sorghum (Sorghum bicolor [L.] Moench) stands out for its potential to produce a large amount of green mass, about 50 t ha⁻¹ of dry matter (DM) per cycle lasting an average of six months, and for its tall structure and fibrous culm. It has a high capacity to adapt to tropical and temperate climates, is tolerant to drought and efficient in the use of water, and has high potential for the production of lignocellulosic biomass (6). These characteristics make biomass sorghum a fully mechanized crop, from sowing to harvest. Sowing occurs in the spring, coinciding with the beginning of the rainy season, while the harvest occurs during the sugarcane inter-crop period (7). Biomass sorghum is a promising crop due to its qualitative and quantitative characteristics in the supply of raw material for the production of cellulosic ethanol and the cogeneration of energy through the burning of biomass, owing to its high lignin content (second-generation biofuel energy) (7; 8).
Given this context, biomass sorghum can be an alternative roughage for supplementation in periods of food shortage, and it can also be used in ruminant feeding throughout the year. Despite being considered a promising forage alternative for animal feeding, biomass sorghum genotypes are not yet available for commercialization, as they are still undergoing tests to prove their potential for use in ruminant feeding.
To improve the nutritional quality of biomass sorghum, additives can be added to the forage at the time of ensiling; these may be composed of acids, salts, fermentable carbohydrates, or lactic acid bacterial cultures, as well as enzymes. The additives influence the fermentation process and help preserve the feed (9). However, the use of additives does not eliminate the daily care needed to obtain a good silage, such as the adequate cutting season, forage compaction, and silo sealing (10).
The use of sugarcane in ruminant diets has been justified by its easy cultivation, high energy production, and low cost of dry matter produced per unit of cultivated area. In addition, its harvest period coincides with the period of forage shortage, and it maintains its nutritional value for a long time after maturation (11; 12). However, the biggest limitation of sugarcane ensilage is the high production of ethanol: the ensiled material shows a fast proliferation of yeasts that, under anaerobiosis, ferment carbohydrates and produce ethanol (13; 14). The presence of ethanol can reduce the consumption of silage by animals and increase losses due to its volatilization (15).
In times and/or places subject to water restriction, sorghum silage with sugarcane, when available, can be an alternative (16) for guaranteeing roughage for the animals. Therefore, the objective of this study was to evaluate the inclusion of different levels of sugarcane (0, 20, 40 and 60%) in the silage of three biomass sorghum genotypes (B012, B017 and B018).
Acquisition of ensiled material
The biomass sorghum genotypes (Sorghum bicolor [L.] Moench) were sown, cultivated and managed at the Empresa Brasileira de Pesquisa Agropecuária (Embrapa) Milho e Sorgo, located in the rural area of Sete Lagoas, in the state of Minas Gerais, Brazil (19°28′ S, 44°15′08″ W, altitude of 732 meters). The sugarcane (Saccharum officinarum L.) was supplied by Fazenda Resplendor, located in Datas, Minas Gerais, Brazil (18°32′ S, 43°38′ W, altitude of 1,130 meters).
The biomass sorghum genotypes were cut close to the soil, presenting an average height of 3.04 meters and green matter production of 56.95 tons per hectare. On the day after cutting, the sorghum genotypes and the sugarcane were properly identified and transported to the Federal University of the Jequitinhonha and Mucuri Valleys (UFVJM), Campus JK, Diamantina, Minas Gerais, Brazil, for tests and laboratory analyses.
Experimental trial
An experimental trial was conducted at the Animal Science Department of UFVJM. The experiment was performed in a completely randomized design in a 3 x 4 factorial scheme, in which three biomass sorghum genotypes (B012, B017 and B018) and four levels of sugarcane inclusion (0, 20, 40 and 60%) were evaluated with four replications.
Silage
The material to be ensiled was cut into particles measuring between 1.0 and 2.0 centimeters (cm) using a previously adjusted mincer (Nogueira, model EN1146600). The chopped material was weighed on an electronic bench scale (SCOUT, model SJX6201BR/E, accuracy of 0.01 g) and stored in experimental silos. Before ensiling, a 500-gram sample of the material was collected in order to evaluate the chemical-bromatological composition of the ensiled material (Table I). The experimental silos were made of PVC, 100 mm in diameter and 450 mm in length, and were filled to a density of 500 kg/m³. The ensiling of sorghum and sugarcane was carried out shortly after chopping. The material was compacted manually, with the aid of wooden tampers, to expel oxygen from the ensiled material.
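As a quick sanity check on the packing target above, the forage mass corresponding to a 500 kg/m³ density in a 100 mm x 450 mm silo follows directly from the cylinder volume. A minimal sketch (the variable names and printout are illustrative, not from the original study):

```python
import math

# Illustrative check of the forage mass implied by the reported packing
# density (500 kg/m^3) in a 100 mm x 450 mm PVC mini-silo.
diameter_m = 0.100
length_m = 0.450
target_density = 500.0  # kg/m^3, as-fed basis

volume_m3 = math.pi * (diameter_m / 2) ** 2 * length_m
mass_kg = target_density * volume_m3

print(f"Silo volume: {volume_m3 * 1000:.2f} L")   # ~3.53 L
print(f"Forage mass per silo: {mass_kg:.2f} kg")  # ~1.77 kg
```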
After filling the silos, they were sealed with PVC caps provided with Bunsen-type valves, and subsequently sealed with adhesive tape and identified. The experimental silos were organized randomly and remained sealed for 60 days, protected from indirect sunlight and humidity.
Laboratory Analysis
After opening the silos, the material was homogenized and 350 g of material from each silo was pressed using a hydraulic press (Reinalab, model TE-097) to extract the silage juice for determination of the hydrogen potential (pH), using a Tecnopon mPA 21 potentiometer with expanded scale. Another 500-gram sample of silage was collected and pre-dried in a forced ventilation oven at 55°C for 72 hours (17). The samples were then ground in a Wiley mill fitted with a 1 mm sieve (17) and placed in previously identified plastic bags for further laboratory analysis.
The chemical-bromatological analyses of both the material before ensiling (Table I) and the silages, for the contents of dry matter (DM), mineral matter (MM) and crude protein (CP), were carried out according to (17), and the content of organic matter (OM) was obtained by difference (% OM = 100 - MM). The cell wall components, neutral detergent fiber (NDF), acid detergent fiber (ADF), hemicellulose (HEM) and lignin (LIG), were measured sequentially by the method proposed by (18).
Statistical analysis
The statistical analyses were conducted using the software R (19), always adopting a 5% significance level. The data were subjected to analysis of variance (ANOVA) using the following statistical model:

Yijk = µ + Gi + Cj + (GC)ij + εijk,

in which Yijk is the observed value of the response variable in the k-th repetition of the combination of the i-th biomass sorghum genotype with the j-th level of inclusion of sugarcane; µ is the general mean; Gi is the effect of the i-th biomass sorghum genotype (G = B012, B017 and B018); Cj is the effect of the j-th level of inclusion of sugarcane (C = 0.00, 0.20, 0.40 and 0.60); (GC)ij is the effect of the interaction of the i-th genotype with the j-th inclusion level; and εijk is the experimental error, assumed N(0, σ²).
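The authors ran the analysis in R; purely as an illustration, the same 3 x 4 factorial ANOVA could be reproduced in Python along the following lines (the data file and the column names 'genotype', 'cane' and 'dm' are assumptions, not from the study):

```python
# Minimal sketch of the 3 x 4 factorial ANOVA described above.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("silage.csv")  # hypothetical file: one row per silo

# Y ~ genotype + cane level + their interaction (all treated as factors)
model = ols("dm ~ C(genotype) * C(cane)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # Type II ANOVA table
```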
The assumptions of normality, homoscedasticity and independence of the residuals were evaluated sequentially using the Shapiro-Wilk, Bartlett and Durbin-Watson tests, respectively. Once the assumptions were met, multiple comparison of the genotype means by the Tukey test and linear regression over the levels of inclusion of sugarcane were carried out. First- and second-degree models were evaluated, with the final model chosen based on its significance compared to a null model.
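A hedged Python sketch of these residual diagnostics and the Tukey comparison, reusing the `model` and `df` objects assumed in the previous sketch:

```python
# Residual diagnostics for the fitted ANOVA model above (illustrative).
from scipy import stats
from statsmodels.stats.stattools import durbin_watson
from statsmodels.stats.multicomp import pairwise_tukeyhsd

resid = model.resid
print("Shapiro-Wilk p =", stats.shapiro(resid).pvalue)    # normality

cells = [g["dm"].values for _, g in df.groupby(["genotype", "cane"])]
print("Bartlett p     =", stats.bartlett(*cells).pvalue)  # equal variances

print("Durbin-Watson  =", durbin_watson(resid))           # ~2 suggests independence

# Tukey's HSD over the genotype means, as described in the text
print(pairwise_tukeyhsd(df["dm"], df["genotype"]))
```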
The Box-Cox transformation was used in cases where the ANOVA assumptions were not met. The assumptions were then evaluated again and, if met, multiple comparison of the genotype means was carried out by the Tukey test on the transformed data, and, for the levels of inclusion of sugarcane, a linear regression weighted by the inverse of the variance was carried out.
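A minimal sketch of this fallback path, assuming the same data frame as above and strictly positive response values:

```python
# Box-Cox transform followed by a variance-weighted regression on the
# cane inclusion level (illustrative; assumes df from the sketches above).
import statsmodels.api as sm
from scipy import stats

df["dm_t"], lam = stats.boxcox(df["dm"])  # requires dm > 0
print(f"Box-Cox lambda = {lam:.2f}")

# Weights = 1 / within-level variance, as described in the text
w = 1.0 / df.groupby("cane")["dm_t"].transform("var")
X = sm.add_constant(df["cane"].astype(float))
print(sm.WLS(df["dm_t"], X, weights=w).fit().params)
```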
In conditions where the assumptions were not met even after the transformation, a nonparametric approach was adopted in the analysis of variance through the Aligned Rank Transformation, which is specific to experimental data from a factorial scheme (20). In those cases, the Dunn test with the Benjamini-Hochberg adjustment was used for the multiple comparison of the genotype means, and quantile (semiparametric) regression was used to assess the levels of inclusion of sugarcane. Similarly, first- and second-degree models were evaluated, with the final model chosen based on its significance in comparison to a null model.
RESULTS
The silages of the three tested sorghum genotypes presented statistically different dry matter (DM) contents (P < 0.05), with the highest value observed in the silage of genotype B012 (Table II). In addition, a negative linear effect (P < 0.05) of the sugarcane inclusion levels on the DM content was observed (Table II).
Regarding the contents of organic matter (OM), Figure 1 displays the unfolding of the interaction (P < 0.05) between the genotypes and the sugarcane levels (Table II): there was no effect of the sugarcane levels on the OM content of the silage of genotype B012 (P > 0.05), while a positive quadratic effect (P < 0.05) of the sugarcane levels on the OM of the silages of genotypes B017 (Figure 1b) and B018 (Figure 1c) was observed. In Figure 1b, the model estimated a minimum point at 7.63% inclusion: from 0% to 7.63% of sugarcane the OM content decreased, and beyond 7.63% the inclusion of sugarcane increased the OM content of the silage of genotype B017. As shown in Figure 1c, the minimum point estimated by the model is at 1.28% inclusion: the inclusion of sugarcane reduced the OM content up to 1.28% and, from that point on, increased the OM in the silage of genotype B018.
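The stationary points quoted above (7.63% and 1.28%) follow from the vertex of the fitted quadratic, x* = -b/(2a). A small illustrative sketch (the OM values below are made-up placeholders, not the study's data):

```python
# Locating the vertex of a fitted quadratic y = a*x^2 + b*x + c.
import numpy as np

cane = np.array([0.0, 20.0, 40.0, 60.0])  # inclusion levels (%)
om = np.array([92.1, 91.8, 92.6, 94.0])   # placeholder OM means (%)

a, b, c = np.polyfit(cane, om, deg=2)
x_star = -b / (2 * a)
kind = "minimum" if a > 0 else "maximum"
print(f"Estimated {kind} at {x_star:.2f}% sugarcane inclusion")
```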
The levels of mineral matter (MM) showed a statistically significant interaction (P < 0.05) between the genotypes and the sugarcane levels, with no effect of the sugarcane levels on genotype B012 (P > 0.05, Figure 2). There was a negative quadratic effect of sugarcane levels on the MM of the silages of genotypes B017 and B018 (P < 0.05). In Figure 2b, a maximum point was noted: the MM increased up to 7.62% inclusion and, from that point on, decreased as the level of sugarcane inclusion increased in the silage of genotype B017. In Figure 2c, the maximum point for genotype B018 was at 5.04% inclusion; after that point, the MM content decreased as the inclusion of sugarcane increased. In Figure 3, the interaction between the different genotypes and levels of sugarcane inclusion on the crude protein (CP) contents can be seen. In the silages of genotypes B012 and B018, there was no effect of the inclusion of sugarcane (P > 0.05). Genotype B017 was the only genotype in which sugarcane had a significant effect, with negative linear behavior (Figure 3b), showing a higher value compared to the other genotypes at the 0 and 20% inclusion levels (Figure 3a). Regarding the cell wall constituents of the sorghum genotypes, namely the contents of neutral detergent fiber (NDF), acid detergent fiber (ADF), hemicellulose (HEM) and lignin (LIG), a statistically significant effect (P < 0.05) of the sugarcane inclusion levels was observed on the HEM content of the silages, with a negative linear behavior (Table I).
A statistically significant effect (P < 0.05) of the genotypes was observed on the pH of the silage juice, with genotype B012 showing the highest pH value compared to the others (Table I).
DISCUSSION
Data analysis showed that the silages of the three biomass sorghum genotypes with the inclusion of sugarcane presented significant differences for the studied variables, and that sugarcane had a significant effect when added to the sorghum genotypes. A good quality silage should have a DM content between 30% and 35%, the range recommended as ideal by (21); high DM levels can hinder compaction and favor the action of fungi and yeasts. However, (22) recommends DM values between 28% and 38%. In this study, the DM content of genotype B012 was within the range recommended by (22). However, a negative effect of the inclusion of sugarcane on the DM content of the silage was observed regardless of the genotype, which may be a consequence of alcoholic fermentation by yeasts causing DM loss and reducing digestibility (23; 24).
Regarding the average contents of OM and MM, there was a significant effect of the interaction and of the individual factors. Neumann et al. (25) and Neumann, Restle and Brondani (26) found contents similar to those of this study, with average OM contents of 94.7% and 95.79%, respectively.
For MM contents, the results obtained were lower than those reported by (27) (average MM of 5.30%) when evaluating forage and dual-purpose sorghum silages, and by (28) (average MM of 4.88%) for silage of forage sorghum without additives. Junqueira (29) found a content of 3.6% MM in sugarcane silages. According to (30), a lower MM content is indicative of better forage conservation because, when fermentation is inadequate, OM losses occur, increasing the relative participation of MM in the DM. As there were no changes in the contents of DM and OM, the variation in MM contents may be related to possible contamination with soil at the time of harvesting the material.
In ruminant diets, according to (31), the recommended CP content is 6 to 8% in order to allow adequate development of ruminal microorganisms, because lower contents can negatively affect ruminal fermentation, reducing microbial activity. In this study, the CP contents of the genotypes were lower than the level recommended by (31), which may be related to the crop variety and the vegetative stage of the plant at the time of ensiling, and which impairs the use of biomass sorghum silage as the sole feed source (32). Likewise, the CP content is considered an obstacle to the adoption of sugarcane as the only roughage: the value found in the current study was lower than the 1.91 to 3.81% CP reported in the literature (33; 34). Thus, if the evaluated silages are used to feed animals, protein supplementation is necessary in order to meet their nutritional demand.
The NDF measures the total insoluble fiber content of the feed and is the parameter most used for balancing diets, since it interferes with diet quality (35). Van Soest (31) reports that, for satisfactory digestibility, NDF levels should be between 55 and 60%: higher values are negatively correlated with voluntary intake by animals, while very low contents might jeopardize the optimal conditions for ruminal fermentation. As reported by (36), the increase in NDF content throughout fermentation is relative and occurs due to the loss of cell contents during fermentation.
On average, the NDF content obtained in this study was 53.7%, similar to the 55.54% NDF observed by (37) in a study of silages of sorghum genotypes. According to (38), silages with NDF content lower than 50% are more desirable, although a minimum amount of fiber is required to maintain adequate concentrations of microorganisms in the rumen and thus promote the fermentation process, saliva production and ruminal movements. Therefore, in this study, the NDF content of the silages was similar to the values reported by the above-mentioned authors.
The ADF content reflects the quality of the cell wall and expresses its insoluble and less digestible fraction (cellulose and lignin, insoluble or remaining in acid detergent), and it is directly associated with the digestibility of the feed (31); in other words, the higher the ADF content of a given feed, the greater the indigestible fraction and, as a consequence, the lower the digestibility.
On average, the ADF content obtained in this study was 28.4%, lower than that found by (37). The ADF and lignin fractions have a negative relationship with the apparent digestibility of the material and with intake, because the lower the ADF, the greater the energy value of the feed. In this sense, high fractions of NDF and ADF (above 60% and 30%, respectively) compromise the intake and utilization of forage (39). The ADF content of the silages was therefore in line with the above-mentioned authors.
The HEM contents in this study (25.3% on average) were lower than the 36.6% reported by (40) and similar to the 25.0% HEM found by (41). HEM is part of the cell wall and, in the absence of substrate, it can be used in the fermentation process (36). The significant negative linear effect of sugarcane inclusion may be related to hydrolysis of the hemicellulosic fraction by microorganisms in the medium, which can generate metabolic by-products, including ethanol produced by yeasts (14; 42; 43). As sugarcane is rich in sucrose, there was probably no lack of substrate for the microorganisms in the medium, suggesting a dilution effect of the HEM fraction in the silage of biomass sorghum with sugarcane. Therefore, in this study, the HEM content of the silages is within the range reported by the above-mentioned authors.
Van Soest (31) states that LIG negatively affects feed digestibility, being able to limit the digestion of nutrients and other cell wall components. Lignin is made up of macromolecules with a complex structure and high molecular weight. As the forage develops, this compound fills the spaces in the cell wall matrix previously occupied by water, increasing cell wall rigidity (44). Thus, lignin affects DM digestibility by binding to cell wall components, and this interaction increases with forage maturity, either through increasing LIG concentration and/or through changes in the composition of phenolic compounds (45). According to (46), lignin contents below 7.3% in sorghum silage favor increased intake and digestibility of the fibrous fractions.
The average LIG content found in the study was 5.3%, higher than the 3.36 to 4.94% LIG found by (27) in silages of different sorghum hybrids after 43 days of fermentation. Even so, the genotypes B012, B017 and B018 had an average LIG content within the range suggested by (46), which would likely not hamper the digestibility of the silages.
Regarding the pH values of the silage juice of the evaluated genotypes, the values found are within the range recommended by (36), pH between 3.8 and 4.2, which is desirable for a well-conserved silage. The authors of (47) and (48) considered pH values between 3.6 and 4.2 adequate for a good quality silage. In the current study, genotypes B012, B017 and B018 presented average pH values within the suggested range, indicating that the evaluated silages were well conserved during the 60-day storage period.
In view of the above, the evaluated biomass sorghum genotypes responded satisfactorily to the fermentation process, resulting in good quality silages. However, the inclusion of sugarcane did not improve the quality of the silages produced, and its inclusion in the ensiling of the evaluated genotypes is not recommended.
ACKNOWLEDGMENT
The authors would like to acknowledge "Embrapa Milho e Sorgo" for the biomass sorghum used during the research, Fazenda Resplendor for the sugar cane, the Federal University of Lavras and the Federal University of Viçosa for providing the laboratories for the analysis of protein, the Federal University of the Jequitinhonha and Mucuri Valleys and the Animal Science Department.
The role of sensory and olfactory pathways in multiple chemical sensitivity
Multiple chemical sensitivity (MCS) is characterised by non-specific and recurring symptoms affecting multiple organs and associated with exposure to chemicals, even at low concentrations, which are, under normal circumstances, harmless to the general population. Symptoms include general discomfort, cardiovascular instability, irritation of the sensory organs, breath disorders, hypersensitivity affecting the skin and epithelial lining of the gut, throat and lungs, anxiety, and learning and memory loss. Chemical intolerance is a key distinguishing feature of MCS, limiting considerably patients’ lifestyle with serious social, occupational and economic implications. Since no specific diagnostic markers are currently available for chemical intolerance, the diagnosis relies on clinical symptoms. Despite the formulation of several hypotheses regarding the pathophysiology of MCS, its mechanisms remain undefined. A person-centred care approach, based on multidisciplinary and individualised medical plans, has shown promising results. However, more definite treatment strategies are required. We have reviewed the main experimental studies on MCS pathophysiology, focusing on the brain networks involved, the impact of environmental pollution on the olfactory system and the correlation with other pathologies such as neurodegenerative diseases. Finally, we discuss treatment strategies targeting the olfactory system.
Introduction
Multiple Chemical Sensitivity (MCS) is a multisystem and poly-symptomatic syndrome. MCS subjects display a complex symptomatology due to intolerance to chemical agents [1]. Haanes and coworkers [2] have defined MCS as "symptoms associated with environmental factors". The authors claim that MCS symptoms may not be linked to pathophysiological mechanisms, further underlining the lack of consensus in the scientific community. From a diagnostic point of view, MCS is difficult to recognise clinically and patients risk marginalisation [1,3]. MCS pathogenesis can be traced back to an exaggerated response to oxidative and nitrosative stress, chronic neurogenic and systemic inflammation [4], altered blood brain barrier (BBB) permeability, abnormal xenobiotic metabolism and insufficient detoxifying capacity [5]. The resulting hyper-activation of sensory receptors has an impact on metabolic pathways, the immune system and the central nervous system (CNS), linked to oxidative stress [6,7], and can result in multiple pathological manifestations. In MCS, contaminants such as pesticides, biocides, heavy metals, metabolites, mycotoxins, perfumes, detergents, volatile organic compounds (such as formaldehyde and 2-ethylhexanol) and dust [7,8] play a role in triggering the symptoms. The difficulty in identifying the specific profile of MCS symptoms is compounded by their overlap with the manifestations of other chronic diseases, which hinders the ability of the clinician to give a definite diagnosis at early stages. Based on our clinical experience and clinical practice-based knowledge of MCS pathophysiology, we hypothesise that MCS etiology is based on exposure to neurotoxic substances that, through inhalation, can reach the cerebral circulation, leading to the pathology and manifestation of symptoms. From the nasal cavities, substances continue through the lamina cribrosa of the ethmoid bone into the olfactory bulb (OB) and beyond, e.g. to the limbic system, the diencephalon and up to the cerebral cortex. With this in mind, we have reviewed the existing literature to gather state-of-the-art information on MCS and prompt further studies in this area, focusing on the olfactory system. We have summarised the clinical studies relevant to MCS in Table 1.
Review search criteria
We searched Pubmed/Medline using the keywords "multiple chemical sensitivity", "environmental intolerance", "smell", "olfactory stimulation", "olfactory bulb", "intranasal therapy" alone or combined. We chose papers from 1987 to 2020 based on their content and relevance to the aim of this review.
MCS diagnostic criteria
Currently, the diagnosis of MCS is based on Cullen's inclusion criteria related to the patient's clinical history and on the score from the Quick Environmental Exposure and Sensitivity Inventory (QEESI). According to the criteria developed by Cullen in 1987 and confirmed by a multidisciplinary evaluation in 1999 [9,10], MCS is a chronic state involving more than one organ or system, with symptoms triggered by even low-level exposures to chemicals and environmental substances of different classes and modes of action. No single test is currently available for the diagnosis of MCS. MCS diagnosis is based on the following:
- Careful anamnesis based on a questionnaire to analyse the symptoms with specific reference to environmental exposure to micro-organisms and chemical substances, especially in relation to the patient's professional life.
- Application of the QEESI and Environmental Exposure and Sensitivity Inventory (EESI) evaluation tests, using a scale of reference to benchmark the results against what would be expected in a physiological/healthy state [11,12].
The QEESI, also known as the "TILT Test", is a multistep questionnaire that determines levels of chemical sensitisation to environmental triggers, recording the type, location and severity of symptoms after exposure and the consequent impact on quality of life. The test evaluates the patients' answers, and the cumulative score is a good indicator of the patient's quality of life.
Reduced detoxification capacity in MCS
Fabig [13] showed that MCS is triggered by a reduced capacity to detoxify xenobiotic substances. This is observed in MCS patients who express genes responsible for reduced functioning of the enzymes involved in the metabolism of chemical substances [13]. Therefore, genetic testing approaches could be valid aids in the diagnosis of MCS. Xenobiotics are lipophilic and lack electrical charge at physiological pH, which facilitates their absorption but hinders their elimination. The purpose of contaminant metabolism is therefore to convert lipophilic substances into hydrophilic ones, facilitating excretion, through two phases. During phase I, xenobiotics undergo hydrolysis, oxidation, reduction and methylation. During phase II, conjugation occurs by adding a polar compound to a functional group, thus facilitating the excretion of the final metabolite from the cell [14]. Catalytic enzymes regulate the pathway and speed of these reactions. The catalysts of the first phase are P450 enzymes, e.g. CYP2D6 [15,16]. The second phase relies on enzymes such as glutathione-S-transferase (GST) [17]. Building on the hypothesis of a compromised detoxification system, studies on chemically hypersensitive populations have focused mainly on the genetic profile of these patients. The allelic variants of cytochrome P450 isoforms (CYP2C9, CYP2C19, CYP2D6 and CYP3A5), glutathione S-transferases (GSTP1, GSTM1 and GSTT1) and antioxidants [catalase, superoxide dismutase (SOD)] were studied in MCS subjects compared to healthy controls (HC) [17], and catalase and GST enzyme activities were found to be lower in MCS [6,17]. GST polymorphisms may reduce glutathione conjugation, a key protective mechanism against oxidative damage. Reactive oxygen species generated as by-products of phase I reactions are rapidly reduced to non-toxic "physiological" levels by antioxidant enzymes such as SOD, catalase and glutathione peroxidase, and by low-molecular-weight antioxidants such as glutathione. Therefore, complex symptoms can arise when levels of reduced and oxidised glutathione decrease [17].
Sensory mechanisms in MCS
The sense of smell can be defined as the perception of a stimulus by the CNS following activation of the olfactory receptors (ORs) [18]. The neurons of the olfactory system are exposed to the external environment; therefore, the olfactory epithelium (OE) is particularly vulnerable to environmental neurotoxicants. The olfactory nerve can also act as a vector for neurotoxic agents to be transported into the CNS, bypassing the BBB [19]. Smell is mediated by specialised olfactory sensory neurons (OSNs) localised in the main OE within the nasal cavity [20]. Olfactory signal transduction begins with the activation of an OR in the ciliary membrane. This leads to an increase in cyclic AMP (cAMP) synthesis through the activation of the adenylyl cyclase type III (ACIII) enzyme via a G protein-coupled cascade [21]. The increase in cAMP concentration causes the cyclic nucleotide-gated ion channels to open, leading to an increase in intracellular Ca2+ concentration and depolarisation of the cell membrane by the Ca2+-activated Cl- channel. The ORs, olfactory marker protein, Golf protein α-subunit and ACIII are specific to the olfactory pathways and could therefore represent suitable therapeutic targets for MCS [21]. Chemosensory perception and autonomous responses during chemical exposure have been reported in MCS patients [22]. For instance, a study by Andersson and co-workers [22] investigated hyper-reactivity in MCS during whole-body exposure to low n-butanol concentrations. Participants with MCS and HC were exposed to the odorant n-butanol at increasing concentrations in an exposure chamber. The MCS group displayed greater intensity in the perception of odours, higher heart rate and increased symptoms over time compared to controls. No differences were found between the two groups with regard to respiratory rate and tonic electrodermal activity responses. Therefore, MCS patients differ from HC in terms of autonomous responses and chemosensory perception during chemical exposure [22].
Brain networks involved in MCS
Genetic, infectious, and neurological factors have been implicated in MCS [23]. From a neurological perspective, the trigger may be a neurological reflex mechanism, a stimulus to emotional memory, or a conditioned response to olfactory stimuli [24]. Physiologically, the pathway that joins the olfactory region to the orbito-frontal cortex through the thalamus is a control area for olfactory stimuli [24]. Furthermore, the olfactory region is also connected to the limbic system, which is responsible for vegetative responses and smell-related emotions [24]. Alterations to this pathway could give rise to symptoms that confuse the observer, such as those presented by MCS patients after odour exposure. Several studies have investigated the activation and involvement of cerebral networks in MCS to dissect the underlying mechanisms. For example, Azuma and co-workers observed significant activation of the prefrontal cortex (PFC) during olfactory stimulation in MCS patients [25]. In this study, the recovery process of regional cerebral blood flow (rCBF) was examined after olfactory stimulation in MCS patients and HC using near-infrared spectroscopy imaging. Olfactory stimulation induced significant activations in the left and right PFC, and even more evident activations in the orbitofrontal cortex (OFC), in MCS patients compared to controls [25]. The OFC is associated with response to stimuli, emotions and preferences in decision-making processes. These results suggest that strong exposure to irritating chemicals activates the PFC during olfactory stimulation in MCS patients, and that the OFC remains activated even after stimulation [25]. In a further study, Azuma and co-workers [26] investigated the association between odour thresholds and changes in rCBF during olfactory stimulation at odour threshold levels in MCS. Two different odorants, sweet and faecal, were used for olfactory stimulation at three different concentrations (zero, odour recognition threshold, and normal level of perceived odour) in patients with MCS and controls. MCS patients displayed stronger brain responses at the recognition threshold (faecal odour) and at normal perceived levels (sweet and faecal odours) compared to controls. These responses may involve cognitive and memory processing systems relating to past exposure to chemicals, prompting further research in this area [26]. Several neuroimaging studies have shown a correlation between odours and cortical activation in MCS. For instance, Alessandrini and co-workers [27] investigated subcortical metabolic changes during neutral (NC) and pure (OC) olfactory stimulation using 2-fluoro-2-deoxy-D-glucose (FDG) positron emission tomography in MCS patients and HC. This study showed higher metabolism in the bilateral olfactory cortex during NC in MCS patients compared to HC. In addition, the odour pleasantness scale positively correlated with the MCS subjects' bilateral putamen FDG uptake in OC. This study also described a metabolic index of the behavioural and neurological aspects of MCS complaints [27]. Another study, carried out by Andersson [28], aimed to investigate whether brain responses in the presence of low levels of olfactory and trigeminal stimuli differ in individuals with and without idiopathic environmental illness (IEI), and how they occur. The authors suggest that sensitised responses in the limbic system are crucial to symptom manifestation.
Thus, brain responses to isoamyl acetate and carbon dioxide administered intranasally were evaluated in IEI patients and HC using functional magnetic resonance imaging. The IEI group had a higher blood-oxygen-level-dependent (BOLD) signal compared to the controls in the thalamus and parietal areas, and a lower BOLD signal in the superior frontal gyrus. In conclusion, these results point towards limbic hyper-reactivity and an inability to inhibit salient external stimuli in IEI subjects; IEI responses were not characterised by hyper-reactivity in sensory areas [28]. A fundamental element in the theoretical explanations of chemical intolerance (CI) is that olfactory sensitisation implies greater reactivity to odour stimulation; however, empirical evidence is scarce. In another study reported by Andersson [29], it is stated that olfactory sensitisation involves brain networks relevant to pain processing: subjects who are sensitive to olfactory stimulation express a higher BOLD signal in regions relevant to pain processing, as well as in primary and secondary olfactory projection areas [29]. It has been speculated that the CNS limbic pathways involved in anxiety are altered in MCS individuals, given the nature of MCS symptomatology. As limbic structures are most susceptible to kindling-induced seizures (kindling is defined as "a model of synaptic plasticity whereby repeated low-level electrical stimulation to a number of brain sites leads to permanent increases in seizure susceptibility"), it is possible that MCS may occur via a kindling-like mechanism [30].
Environmental pollution and MCS
MCS has been linked to environmental and construction pollution [31]. For instance, in the Västerbotten and Österbotten study [31], two questionnaires focusing on factors such as lifestyle, general health, symptom frequency and the emotional and behavioural impact of building-related intolerance were administered to ∼5,000 participants. The participants were mostly women who reported avoidance behaviour and required medical assistance. Building-related intolerance with broad-spectrum symptoms has been associated with somatic and psychiatric diseases and functional somatic syndromes. Similar multi-morbidity has been reported for environmental intolerance (EI), regardless of the type of exposure under investigation, and for CI, electromagnetic hypersensitivity and sound intolerance. In particular, in MCS, psychiatric comorbidity is commonly reported; however, somatic comorbidity and concurrent functional somatic syndromes have also been observed [31]. Claeson and co-workers designed a study to determine the chemical and physical sources in the environment that can trigger symptoms among individuals with different EIs [32]. Participants in the Västerbotten environmental health study answered 40 specific questions regarding the environment and exposure to chemicals, buildings, electromagnetic fields and sounds [32]. The EI groups reported more symptoms from the different sources than the group with building-related intolerance. In addition, individuals with chemical and sound intolerance reported symptoms from building-related trigger factors, and individuals with electromagnetic hypersensitivity reported symptoms from chemical triggers [32]. In a subsequent study, Claeson and co-workers [33] studied the impact of heptane and a mixture of heptane and acrolein on the plasma levels of oxylipins, endocannabinoids and related lipids in healthy individuals and individuals affected by CI. No relevant variation in bloodstream oxylipins or endocannabinoids was observed in CI subjects, suggesting a limited role in CI-related inflammation [33].
Hyperosmia and MCS
The majority of MCS patients suffer from hyperosmia, an increased olfactory acuity consisting of a heightened sense of smell, usually caused by a lower threshold for odour perception. This perceptual disorder arises from an altered signal between the ORs and the olfactory cortex, with prolonged olfactory stimulation of the olfactory nerve and olfactory cerebral areas being the key cause of this symptom. Haehner and co-workers showed that a mutation in the sodium channel Nav1.7, encoded by the SCN9A gene, causes high olfactory sensitivity [34]. Through various tests performed on a 50-year-old woman with this mutation, it was found that she displayed high olfactory acuity and intranasal sensitivity, very low thresholds for thermal, tactile and pain detection in the trigeminal area, and hyperalgesia of the lower legs [34]. This case report illustrates a gain of function in olfactory and pain sensation associated with a Nav1.7 channel mutation. Nevertheless, the genetic basis of variation in human olfactory thresholds, and in particular in hyperosmia, remains largely unknown. Segregating OR pseudogenes are useful candidates for studying odorant-specific variations in human olfactory thresholds [35]. To explore this hypothesis, Menashe and co-workers [35] investigated the association between olfactory detection threshold phenotypes for four odorants and segregating pseudogene genotypes of 43 ORs genome-wide. They found a strong association between variants of a single nucleotide polymorphism in OR11H7P and sensitivity to isovaleric acid, suggesting a functional role of OR11H7P in isovaleric acid sensitivity [35].
Potential treatments targeting the olfactory system in MCS
MCS currently lacks an established treatment; however, several approaches have been proposed based on antioxidant therapy and intranasal substance administration.
Intranasal pathways offer an efficient alternative for the administration of drugs to the CNS. The anatomical structures involved in the transport of drugs administered intranasally include the trigeminal nerve, the olfactory nerve and the rostral migratory stream [36]. To test the efficiency of therapy administration through the intranasal route, one study evaluated the role of the rostral migratory stream following intranasal administration [36]. Intranasal administration of fluorescent tracers and iodinated peptides in mice showed distribution throughout the OB, hippocampus, cortex and cerebellum, suggesting that this route is suitable for efficient drug transport to these CNS structures without affecting peripheral tissues such as the lungs and blood [36]. In a further study, intranasal administration of hyaluronic acid (HA) was used to improve olfactory performance in MCS [37]. The effects of intranasally dosed HA on the odour threshold and quality of life of MCS patients were investigated in two groups of MCS patients treated with an HA or a saline nasal spray. Both groups were assessed using the Sniffin' Sticks test (SST), the questionnaire of olfactory disorders (QOD) and the Zung Anxiety Scale (SAS) before and 30 days after treatment. The authors showed a reduction in odour threshold and improvements in QOD and SAS after one month in the HA group. Therefore, intranasal administration of HA may represent a valid treatment option to alleviate olfactory symptoms in MCS [37].
Furthermore, intranasal administration of reduced glutathione, the most abundant endogenous antioxidant and a key regulator of oxidative stress and immune function, may also represent a valid therapeutic option in MCS patients. Glutathione depletion has been reported in several pathological states such as MCS and Parkinson's disease [38][39][40]. In addition, glutathione deficiency perpetuates oxidative stress, mitochondrial dysfunction and cell death [38]. Glutathione can be administered as an intranasal spray to reach CNS tissues [38][39][40] and has been investigated in Parkinson's disease and MCS [38][39][40]. For instance, intranasal reduced glutathione resulted in increased brain glutathione levels, which persisted for at least 1 h, as observed in 15 subjects with mid-stage Parkinson's disease and determined by magnetic resonance spectroscopy [38]. In addition, a second study showed safety of intranasal reduced glutathione (maximum dose administered was 6000 mg/day) in PD patients [39].
A further study assessed patient-reported outcomes (tolerability, adverse events and health benefits) after intranasal reduced glutathione administration, using a survey administered to 70 patients [41]. Reported indications for reduced glutathione prescriptions were MCS, allergies/sinusitis, Parkinson's disease, Lyme disease and fatigue. In this study, 80% of patients considered reduced glutathione to be effective, without significant adverse effects [41]. In conclusion, intranasal administration of glutathione warrants further evaluation as a treatment for respiratory and CNS diseases in which oxidative stress contributes to disease pathophysiology [41]. Carnosine (β-alanyl-L-histidine) is synthesised in the olfactory system and has been identified as a potential therapy for oxidative stress-related olfactory dysfunction due to its antioxidant and neuroprotective properties. For instance, the neuroprotective effect of carnosine was investigated in a mouse model of vanadium inhalation [42]. Vanadium causes olfactory dysfunction, increased malondialdehyde (MDA) levels, loss of dendritic spines and necrotic neuronal death in granule cells; carnosine modulated these effects, improving olfactory function, increasing dendritic spine density and decreasing neuronal death and MDA levels. Further evidence shows that carnosine can modulate zinc and copper, which could represent one of the mechanisms underlying its neuroprotective and neuromodulatory action [43]. Therefore, carnosine warrants further study in MCS, since it inhibits the production of free radicals and reactive aldehydes, suppresses protein glycation, and has already shown potential benefits in other CNS disorders [44].
Conclusions
The pathogenesis of MCS is still unknown although several hypotheses have been formulated in regards to the role played by alterations in several CNS regions. MCS diagnosis is challenging because current protocols are purely based on the patient's clinical history and assessment of exposure to chemicals and their biological and physiological effects, which can be misleading. Monitoring and control of environmental and chemical hazards are at the basis of health and safety practices in the workplace. Thus, validated and harmonised guidelines, clarifying the maximum average chemical concentration to which workers can be exposed in a specific time-period, need to be implemented and enforced. We support an approach based on the stratification of subjects based on clinical symptoms to identify high-risk individuals and design a personalised therapeutic strategy tailored to the patient's need and clinical symptomatology. In our review, we have highlighted that pesticides, metals and pollution play an important role in MCS, particularly in regards to their effect on the olfactory system. For this reason, beyond an approach based on prevention of the environmental exposure, locally treating the olfactory mucosa with antioxidants or other active substances such as carnosine, reduced glutathione and HA, may be a valid treatment strategy although further studies to analyse their mechanism of action in this system may be required. These treatments, alone or combined, may support the restoration of the neurotransmitter balance in the olfactory area, which may affect the subcortical and cortical areas connected via the olfactory system.
Microtexture of constituent phases in a heavily warm- rolled and annealed duplex stainless steel
Evolution of microtexture during isothermal annealing of a heavily warm-rolled Fe- 0.08%C-24.18%Cr-10.5%Ni duplex stainless steel (DSS) having approximately equal volume fraction of ferrite and austenite was investigated in the present work. The DSS was warm-rolled to ∼90% reduction in thickness at three different temperatures, namely, 225°C, 425°C and 625°C followed by isothermal annealing at 1175°C for different length of time. Austenite showed pure metal or copper type texture at different warm-rolling temperatures. In contrast, the texture of ferrite in different warm-rolled DSS revealed the presence of RD (RD//<110>) and ND (ND//<111>) fibers. The annealing texture of austenite showed retention of the deformation texture components while ferrite revealed strong RD-fiber.
Introduction
The evolution of microstructure and texture during recrystallization of heavily deformed two phase alloys has been reported extensively. However, amongst the two-phase alloys the main focus has been on those materials having the second phase present in the form of dispersoids or precipitates. In contrast, the recrystallization behavior of duplex alloys in which both the phases have grain structure, such as (α+β) brass or duplex stainless steels (DSS), have been studied to a much lesser extent. The focus of the present work is to study the recrystallization behavior of heavily warm-rolled DSS having approximately equal volume fraction of ferrite and austenite. Thermo-mechanical processing involving cold-rolling and annealing of DSS has been reported extensively [1][2][3][4], however, the evolution of microstructure and texture during warm-rolling has been clarified only recently by the present authors [5]. The present investigation is aimed to clarify the evolution of microstructure and texture during subsequent annealing which should be of interest for designing novel thermo-mechanical processing routes for DSS alloys.
Experimental
The DSS alloy was obtained in the form of an as-cast ingot with the chemical composition shown in Table 1. The as-cast ingot was initially processed through a series of thermomechanical treatments to obtain a hot-rolled slab, which was homogenization annealed at 1175°C to obtain a nearly equal volume fraction of the two phases [5]. The homogenization-annealed slab was subjected to multipass warm-rolling up to ~90% reduction in thickness at three different temperatures, namely 225°C, 425°C and 625°C, using laboratory-scale rolling equipment (SPX Precision Equipment, USA). In order to minimize any sudden quenching effect, the rolls were pre-heated to 250°C during warm-rolling. The samples were immediately water quenched after every warm-rolling pass. The 90% warm-rolled samples were subsequently isothermally annealed in a conventional horizontal cylindrical furnace at 1175°C for different time intervals varying from 2 minutes to 120 minutes and immediately water quenched. The microstructure and texture of the deformed materials were characterized using an electron backscatter diffraction (EBSD) system (Oxford Instruments, UK) attached to a FEG-SEM (Make: Carl Zeiss, Germany; Model: SUPRA-40). The acquired EBSD datasets were analyzed using the TSL-OIM™ software.
Results and discussion
Evolution of texture during warm-rolling
The texture of the two phases in the starting homogenization-annealed material is shown in Fig. 1. Figure 1(a) shows the φ2=45° section of the ODF of ferrite, indicating the presence of the ND-fiber (ND//<111>) along with a rotated cube component shifted along φ1 by 15°. The φ2=45° sections of the ODF of ferrite in DSS warm-rolled at 225°C (Fig. 1(b)) and 625°C (Fig. 1(d)) show the presence of both RD and ND fibers. However, the φ2=45° section of the ODF of ferrite in DSS warm-rolled at 425°C (Fig. 1(c)) shows a remarkably stronger RD-fiber as compared to the ND-fiber, the strongest component along the RD-fiber being {001}<110>. The evolution of the strong RD-fiber texture in ferrite in DSS warm-rolled at 425°C is explained on the basis of the dynamic strain aging effect, due to which carbon atoms preferentially lock dislocations and restrict slip on the {110}<111> system [5]. Further deformation by slip then occurs preferentially on the {112}<111> system, which leads to a stronger RD fiber than ND fiber. The (111) pole figure (PF) of the austenite in the starting homogenized DSS (Fig. 1(e)) shows the presence of a weak pure metal (or copper-type) texture characterized by the S ({123}<634>), Cu ({112}<111>) and brass ({110}<112>) components. The (111) PFs of austenite in DSS warm-rolled at different temperatures (Fig. 1(f)-(h)) show the development of a predominantly pure metal-type texture. The mechanism for the above observation has been recently reported by the present authors [5].
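As a side note, the statement that {001}<110> lies on the RD-fiber (RD//<110>) can be checked numerically from the Bunge orientation matrix. A small illustrative sketch, not part of the original analysis:

```python
import numpy as np

def bunge_matrix(phi1, Phi, phi2):
    """Orientation matrix g (sample -> crystal) for Bunge Euler angles (rad)."""
    c1, s1 = np.cos(phi1), np.sin(phi1)
    C, S = np.cos(Phi), np.sin(Phi)
    c2, s2 = np.cos(phi2), np.sin(phi2)
    return np.array([
        [c1*c2 - s1*s2*C,   s1*c2 + c1*s2*C,  s2*S],
        [-c1*s2 - s1*c2*C, -s1*s2 + c1*c2*C,  c2*S],
        [s1*S,             -c1*S,             C],
    ])

# Euler angles (0, 0, 45 deg) correspond to the rotated cube {001}<110>
g = bunge_matrix(0.0, 0.0, np.deg2rad(45))
rd_crystal = g @ np.array([1.0, 0.0, 0.0])  # sample RD in crystal coords
nd_crystal = g @ np.array([0.0, 0.0, 1.0])  # sample ND in crystal coords
print(np.round(rd_crystal, 3))  # ~[0.707, -0.707, 0], i.e. RD//<110>
print(np.round(nd_crystal, 3))  # ~[0, 0, 1], i.e. ND//{001}
```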
Evolution of recrystallization texture during annealing
Isothermal annealing of the different 90% warm-rolled DSS was carried out at 1175°C for 2 minutes to 120 minutes. However, only specimens annealed for 2, 30 and 120 minutes are discussed here, as the major microstructural transformations happen at these annealing times. These microstructural transformations are shown in Fig. 2 for the DSS warm-rolled at 625°C, but they also hold good for DSS warm-rolled at the other temperatures. The ultrafine microstructure obtained after warm-rolling (Fig. 2(a)) transforms into a bamboo-type morphology after short isothermal holding times (Fig. 2(b)). A globular morphology gradually evolves due to the breakdown of the lamellar structure, and no further significant changes were observed during prolonged annealing (Fig. 2(c)). Despite the apparent differences in the texture of ferrite in the as warm-rolled conditions discussed above, the texture of ferrite in the three warm-rolled and annealed DSS (Fig. 3(a)) shows a much stronger RD-fiber than ND-fiber. The development of the stronger RD-fiber components is attributed to the recovery behavior of these components [1], as opposed to the typical recrystallization-type behavior of the ND-fiber components. Warm-rolling of DSS results in a diminished driving force for recrystallization and hence in the formation of a much stronger RD-fiber. The recrystallization texture of austenite in the different warm-rolled and isothermally annealed DSS is represented by the (111) pole figures (PFs) in Fig. 3(b). The (111) PFs of austenite in the annealed materials appear very similar to those of the as warm-rolled DSS (Fig. 1(f)-(h)), indicating the retention of the deformation texture even after prolonged annealing. Thus, the annealing textures for the three warm-rolling conditions show retention of the deformation texture components, which indicates discontinuous recrystallization without preferential orientation selection. Interestingly, the brass recrystallization component ({236}<385>) is not found to be strong after annealing. Since brass is not a dominant component of the warm-rolled texture, the preferential growth advantage of {236}<385> grains would be greatly reduced, preventing it from emerging as a strong texture component after annealing.
Conclusions
• The deformation texture of the two phases in DSS is significantly affected by the warm-rolling temperature, although the recrystallization texture remains unaffected.
• The RD-fiber strengthens due to the strong recovery behavior of ferrite in DSS during annealing.
• Austenite in DSS shows retention of the deformation texture components after annealing, which is attributed to discontinuous recrystallization without preferential orientation selection.
Who needs oral care? A study of potential factors associated with the need for oral care among medical patients acutely admitted to hospital
Introduction: Oral health is important in relation to illness prevention, treatment and rehabilitation. The aim of this study was to explore potential risk factors for impaired oral health and subsequent need for oral care among medical patients acutely admitted to hospital. Methods: Patients were assessed using the Revised Oral Assessment Guide and additional data were collected retrospectively. Logistic regression analysis and Chi square test were used to examine factors associated with the need for oral care. Results: Age and smoking were significantly associated with the overall need for oral care. Being at nutritional risk, above the age of 79 years and receiving oxygen therapy were significantly related to specific clinical oral areas. Discussion: The results indicate that medical patients acutely admitted to hospital should routinely receive oral care.
diagnoses in order to recognise conditions that are associated with the need for oral care, especially as oral care has to be initiated immediately. It has been demonstrated that oral health is significantly poorer when teeth need cleaning by nurses [13], that not all patients who need assistance with oral hygiene receive the assistance needed [14], and that nurses need to prioritize this area of care [15]. Oral care could probably be improved if we were able to identify patients at risk of impaired oral health and initiate timely oral care for these patients.
The aim of this study was therefore to explore potential risk factors for impaired oral health and subsequent need for oral care.
Design
This retrospective cross-sectional study took place in a medium sized university hospital in Denmark. During a 5-week period from March to April 2011, all patients with acute medical health problems admitted to hospital on a weekday were included in the study.
The patients were admitted to one of two acute wards at the hospital, either the pulmonary medical ward or the acute medical ward. The most common diagnoses were dehydration, chronic heart disease, gastroenteritis, chronic obstructive pulmonary disease, lung cancer and tuberculosis. The patients came from either their own home or a nursing home. Inclusion criteria were: admitted on a weekday, expected hospitalisation for more than 48 hours, and able to understand Danish. A multidisciplinary conference, which took place every morning, discussed every patient, and the consensus reached there concerning the patient's illness trajectory, including length of hospitalisation, was used to evaluate eligibility for inclusion in the study. Patients unable to give consent because of their medical or mental condition were excluded. Within 24 hours of hospitalization, included patients had their oral health status assessed.
Data collection
During the 5-week period, 251 patients fulfilling the inclusion criteria were hospitalised, and 161 (64%) of these participated in the study; 39 patients were excluded because of their inability to provide consent, 30 refused to participate and 21 were attending medical examinations. Oral assessment was performed using the Revised Oral Assessment Guide (ROAG), originally developed for patients in a geriatric rehabilitation ward [16]. Three experienced nurses did the assessments within 24 hours of admission. The ROAG has previously been translated and validated in a Danish version [7]. The ROAG consists of eight categories: voice, lips, mucous membranes, tongue, gums, teeth/dentures, saliva and swallowing. Each category is graded on a scale ranging from 1 (normal) to 3 (severe oral health problem). Any patient who scored 2 or 3 in one or more categories was considered to need oral health care. The nurses doing the initial oral assessment collected additional data retrospectively from patients' medical journals. Additional clinical variables were age, gender, oxygen supply (at the time of oral assessment), smoking (registered as positive if the patient was smoking daily at the time of hospitalization) and fever (assessed at the time of hospitalization and registered if the temperature, measured by ear or rectally, was above 38°C). Furthermore, nutritional status was assessed using Nutritional Risk Screening 2002 (as described by Kondrup et al.) [17], covering recent loss of weight, Body Mass Index, nutritional intake during the last week and degree of illness. The nutritional risk of each patient was scored at admission; a nutritional risk score from 4 to 6 indicates risk of malnutrition during the hospital stay.
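As a minimal sketch of the decision rule just described (the dict-based representation and category labels are illustrative assumptions, not part of the study):

```python
# ROAG decision rule: a patient needs oral care if any category
# scores 2 or 3 (1 = normal, 3 = severe oral health problem).
ROAG_CATEGORIES = ["voice", "lips", "mucous_membranes", "tongue",
                   "gums", "teeth_dentures", "saliva", "swallowing"]

def needs_oral_care(scores: dict[str, int]) -> bool:
    """scores maps each ROAG category to a grade of 1, 2 or 3."""
    return any(scores[c] >= 2 for c in ROAG_CATEGORIES)

patient = {c: 1 for c in ROAG_CATEGORIES}
patient["teeth_dentures"] = 2
print(needs_oral_care(patient))  # True
```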
Ethical considerations
The study was approved by the Danish Data Protection Agency (number 2007-58-015) and the Ethics Committee of the Capital Region (number H-1-2011-021). All participants were asked to give written consent.
Statistical analysis
Data were registered and analysed using SPSS version 19.0. In order to analyse factors defining the need for oral health care, a logistic regression analysis was conducted. All patients were divided into two groups: those having no need for oral care and those having one or more oral health problems. In this way, the outcome was dichotomised using one problem as the cut-off point, and this was considered the dependent variable.
First, all independent variables (oxygen, smoking, fever, nutrition, gender and age) were tested in univariate analyses; second, they were entered into the multivariate model, calculating Odds Ratios and their 95% Confidence Intervals. A p value < .05 was considered statistically significant. Third, the impact of age, gender, oxygen therapy, smoking, fever and being at nutritional risk on the need for oral care within the individual clinical oral areas comprising the ROAG was tested using the Chi square test.
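The analysis was run in SPSS; purely as an illustration, an equivalent multivariate logistic regression in Python might look like the following (the file and column names are assumptions, not from the study):

```python
# Sketch of the multivariate logistic regression, reporting ORs with 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("oral_care.csv")  # hypothetical file: one row per patient
predictors = ["oxygen", "smoking", "fever", "nutrition_risk", "female", "age"]

X = sm.add_constant(df[predictors])
fit = sm.Logit(df["needs_oral_care"], X).fit(disp=False)

odds_ratios = np.exp(fit.params)   # exponentiate coefficients to get ORs
ci = np.exp(fit.conf_int())        # 95% CI on the OR scale
print(pd.DataFrame({"OR": odds_ratios, "2.5%": ci[0], "97.5%": ci[1]}))
```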
Results
Patients' characteristics are shown in Table 1, and the number of patients having one or more oral problems is shown in Table 2.
Results of the univariate and multivariate analyses are shown in Table 3. In the univariate analyses, no variables were significantly associated with the overall need for oral care; however, smoking was borderline significant (p = .07). In the multivariate analysis, age and smoking were significantly associated with the need for oral care (for smoking OR 4.55 (95% CI 1.1-18.6) and for age OR 1.05 (95% CI 1.0-1.1)). In Table 4, the impact of age, gender, oxygen therapy, smoking, fever and being at nutritional risk on the need for oral care within the individual clinical oral areas comprising the ROAG is shown. Being at nutritional risk, being above 79 years of age and receiving oxygen therapy impacted the need for oral care within individual clinical oral areas.
Discussion
This study examined potential predictors of the need for oral care among medical patients acutely admitted to hospital. Within 24 hours of admission, only age and smoking were significantly associated with overall impaired oral health and the need for oral care.
Retrospective collection of data from patients' journals may have influenced the validity of the data in different ways. The variables gender and age are considered valid data. Smoking and nutritional status are subjective measurements reported by the patient, and possible variation in these data has to be taken into consideration. Oxygen therapy and fever are highly situation-dependent data; the exact status may fluctuate over a short period of time. As this was a retrospective study, variables were limited to those available, that is, those comprising the standard patient assessment in the acute medical ward. A prospective study design would enable collection of more valid data and a wider variety of variables, both of which would have strengthened the study. Variables to be considered could be the patients' ability to perform oral self-care and the patients' pre-hospital oral care routines and oral health status. In this study, relatively small subgroups of patients did not need oral care, and this might have affected the reliability of the results.
Examination of the impact of retrospectively collected baseline characteristics and clinical data on individual areas of oral health was done using relatively small numbers. These results lack sufficient statistical power and are therefore explorative and possibly useful for generating new hypotheses.
Another limitation to consider is the reliability of the oral examinations; as a study has shown only moderate inter-rater reliability when nurses assess the oral cavity using ROAG [18] .
Several studies have shown that oral health conditions such as edentulousness [8] and xerostomia [19] are significantly related to age. As age was not found to be a univariate predictor in this study, the explanation might be that age is a predictor of certain oral conditions, but not a strong predictor of the need for oral care. Not surprisingly, when testing individual clinical oral areas, we confirmed that age was significantly related to teeth/dentures. Furthermore, we found significant relations to teeth and voice.
In this study no relation was found between gender and the need for oral care. In other studies a relation between edentulousness and being female has been found [8]. This difference might be caused by the same mechanism as in relation to age: being edentulous is an oral health status and is not necessarily the same as having a need for oral care.
Smoking was not found to be a significant predictor of oral health in the univariate analysis. However, in the multivariate analysis, smoking and age were significant risk factors for the need for oral care. Earlier studies have demonstrated that smoking is a predictor of several oral conditions such as tooth loss [20] and caries [21]. The effects of smoking increase with the number of cigarettes smoked and the number of years of smoking [22,23]. This is consistent with the findings of the present study, indicating that the influence of smoking on oral health depends on both dose and length of exposure. As this study was cross-sectional and merely assessed whether the patient was smoking at the time of hospitalisation, it did not examine the effect of total smoking exposure or the effect of having recently stopped smoking.
In this study, being at nutritional risk was not found to be a significant predictor of an overall need for oral care. This has also been found among long-term institutionalised elderly [24], whereas a study including elderly rehabilitation patients found that undernourishment was a predictor of changes in tongue and saliva [10]. The findings in our study fall in line with this, as we found significant relations between being at nutritional risk and a need for oral care within the following areas: voice, mucous membranes, tongue, gums, teeth, saliva and swallowing.
Poor oral health has been found to be related to undernourishment [16,25]. Likewise, persistent oral health problems have been shown to be associated with impaired dietary intake [26]. Again, due to the cross-sectional design of our study, the reasons for being at nutritional risk were not explored.
The authors were surprised that having a need for oxygen therapy did not impact the overall need for oral care, as it is often stated in nursing textbooks that oxygen dries the oral cavity. It should be noted that oxygen therapy did significantly impact oral status with regard to voice and saliva specifically. Clinical guidelines recommend that only oxygen delivered above 4 l/min needs humidifying [27,28], despite an earlier study showing a relation between receiving humidified oxygen and developing a dry mouth in conscious patients [29]. Oxygen did not increase the overall need for oral care in our study. An explanation might be that the patients in this study were of a relatively high age, which relates to xerostomia; patients may themselves have experienced a dry oral cavity and therefore drunk more to constantly humidify their mouth, or perhaps there is simply no relation whatsoever.
Conclusion
In this study, possible indicators of the need for oral care among patients acutely admitted to hospital with a medical condition were examined. Only age and smoking were significantly associated with the overall need for oral care. As the need for oral care is high among this patient subgroup, the findings of this study should not be taken to indicate that only older patients and smokers need oral care. Being at nutritional risk, being above the age of 79 years and receiving oxygen therapy increased the need for oral care within specific clinical oral areas. The results indicate that the identification of factors that increase the need for oral care is complex and that all acutely admitted medical patients might benefit from routine oral care.
Reinforcement learning-based SDN routing scheme empowered by causality detection and GNN
In recent years, with the rapid development of network applications and the increasing demand for high-quality network service, quality-of-service (QoS) routing has emerged as a critical network technology. The application of machine learning techniques, particularly reinforcement learning and graph neural networks, has garnered significant attention in addressing this problem. However, existing reinforcement learning methods lack research on the causal impact of agent actions on the interactive environment, and graph neural networks fail to effectively represent link features, which are pivotal for routing optimization. Therefore, this study quantifies the causal influence between the intelligent agent and the interactive environment based on causal inference techniques, aiming to guide the intelligent agent in improving the efficiency of exploring the action space. Simultaneously, a graph neural network is employed to embed node and link features, and a reward function is designed that comprehensively considers network performance metrics and causal relevance. A centralized reinforcement learning method is proposed to effectively achieve QoS-aware routing in Software-Defined Networking (SDN). Finally, experiments are conducted in a network simulation environment, and metrics such as packet loss, delay, and throughput all outperform the baselines.
Introduction
Software-Defined Networking (SDN) routing separates the routing decision process from hardware devices, such as routers, allowing routing decisions to be made through a centralized software controller. This provides greater flexibility and stronger control capabilities. SDN routing (Tu et al., 2021, 2022; He et al., 2024) plays a crucial role in application scenarios that require high-quality network service, such as online video games and video conferencing.
The routing problem has been widely abstracted into a graph theory model, where routers and links in the network are represented as nodes and edges in the graph. This model allows us to determine the optimal transmission path for data packets in the network through path selection algorithms. Early routing methods, such as distance-vector algorithms and link-state algorithms, had significant limitations in terms of computation and communication overhead, slow convergence speed, and poor network scalability. Heuristic algorithms can be used for routing optimization, but they have high computational complexity and increase the computational load on the SDN controller.
In recent years, there have been numerous studies attempting to optimize routing using machine learning techniques (Xie et al., 2019; Su et al., 2023; Teng et al., 2023; Xiao and Zhang, 2023), particularly through reinforcement learning (RL) methods. By maximizing rewards through continuous interaction with the environment, the agent is able to find optimal strategies. Causal inference can help understand the causal relationships between network events, identify the causes of problems, and guide control decisions. These machine learning techniques can achieve more intelligent SDN routing, enhancing network performance and stability.
In this study, a QoS-aware network routing scheme that combines deep reinforcement learning, causal inference, and GNN is designed to improve routing performance. Real-time network state is collected by the reinforcement learning agent, which aggregates neighborhood information using CensNet (Jiang et al., 2019, 2020) to obtain representations of nodes and edges as input to the deep reinforcement learning (DRL) model. The agent outputs link weights and generates routing policies based on the Dijkstra algorithm. Causal inference is used to measure the causal impact of actions on the network environment and guide the agent to explore the action space more effectively. Finally, the routing performance on the network topology is tested in a simulation environment. The innovation points of this study are mainly listed as follows:
• This study is the first to effectively combine causal inference and reinforcement learning, resulting in significant performance improvement in network routing problems.
Related work
Reinforcement learning

DRL-based methods (Bernardez et al., 2021; Casas-Velasco et al., 2021; Dong et al., 2021; Liu et al., 2021, 2023; Sun et al., 2022; He et al., 2023) deploy the agent within the SDN controller and generate control signals based on reward feedback after interacting with the data plane. Distinguishing them from supervised learning algorithms, DRL methods do not require labeled datasets and can converge to optimal policies through continuous iteration with the environment, achieving automated network operation (Sun et al., 2022). Bernardez et al. (2021) combined traffic engineering with multi-agent RL to minimize network congestion and optimize routing. Casas-Velasco et al. (2021) introduced the concept of the knowledge plane into SDN and applied DRL for routing decisions. Dong et al. (2021) employed a generative adversarial network to learn domain-invariant features for DRL-based routing in various network environments. Liu et al. (2021), He et al. (2023), and Liu et al. (2023) proposed multi-agent RL approaches for hop-by-hop routing.
Causal inference
Causal inference is a method used to determine and quantify causal relationships by analyzing observed data and credible hypotheses, inferring causal connections between causes and effects rather than just correlations. Causal reinforcement learning is an umbrella term for RL approaches that incorporate additional assumptions or prior knowledge to analyze and understand the causal mechanisms underlying actions and their consequences, enabling agents to make more informed and effective decisions. The four key applications of causal inference to RL include improving sample efficiency, enhancing generalization and knowledge transfer, eliminating spurious correlations, and studying interpretability, safety, and fairness in RL (Deng et al., 2023). Research studies such as Sontakke et al. (2021) and Huang et al. (2022) enhance sample efficiency in RL by conducting causal representation learning. Seitzer et al. (2021) improve the efficiency of the agent in exploring the action space by measuring the causal impact on the environment based on conditional mutual information. Pitis et al. (2020) explore counterfactual data by studying local independence conditions in the environment, enriching the sample dataset and enhancing the generalization capability of the agent. Lu et al. (2018) eliminate decision bias of the agent and improve decision accuracy by studying confounding bias in RL.
Graph neural network
Supervised learning algorithms (Rusek et al., 2019; Xie et al., 2019; Ferriol-Galmés et al., 2023) rely on labeled training datasets, where the model takes network and traffic information as input to generate a routing scheme. One major challenge of supervised learning methods is feature extraction, and the existing extraction methods generally perceive the network topological structure based on a Graph Neural Network (GNN). Rusek et al. (2019) and Ferriol-Galmés et al. (2023) predicted network performance metrics (e.g., packet loss, delay, and jitter) for quality-of-service routing through GNNs.
Problem formulation
The network traffic considered in this study originates from any node and terminates at other nodes, represented as a discrete-time model (Liu et al., 2021), where traffic arrives in a predetermined time sequence. Each traffic flow is represented as a source node and a destination node. Additionally, the network topology is modeled as a bidirectional graph consisting of a collection of routers or switches and links. A DRL agent is deployed in the SDN controller, which takes network state as input and outputs routing control signals. The SDN controller creates routing tables and deploys them to the data plane to achieve traffic forwarding.

Figure 1. The system framework for SDN routing.
The objective of this study is for each traffic flow to be successfully routed from the source node to the destination node, avoiding congested or failed links and maximizing the average reward over all traffic flows. It is important to note that once a routing policy is implemented for a specific traffic flow, the routing policy for that flow remains stable.
The proposed SDN routing scheme

System framework for SDN routing
In this study, the network state and global topology are obtained by the SDN controller and used as inputs for the RL agent. The scheme utilizes soft actor-critic (Haarnoja et al., 2018) integrated with node and link co-embedding (Jiang et al., 2019, 2020) and applies causal action influence modeling (Seitzer et al., 2021) to the reward feedback; the resulting method is called SAC-CAI-EGCN. Routing control policies are generated as outputs, and actions are pruned through a safe learning mechanism (Mao et al., 2019). The final routing strategy is then generated using the Dijkstra algorithm and deployed to the data plane, as shown in Figure 1.
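As a concrete illustration of the last step, the minimal sketch below (assuming the networkx library, with an arbitrary toy topology and arbitrary agent weights) shows how agent-produced link weights can be turned into a route with the Dijkstra algorithm.

```python
# Minimal sketch of turning agent-produced link weights into routes via Dijkstra.
# The topology and weight values here are illustrative assumptions.
import networkx as nx

G = nx.Graph()
G.add_edges_from([(0, 1), (1, 2), (0, 2), (2, 3)])

# The DRL agent outputs one weight in (0, 1] per link; higher = less preferred.
agent_weights = {(0, 1): 0.2, (1, 2): 0.1, (0, 2): 0.9, (2, 3): 0.3}
nx.set_edge_attributes(G, agent_weights, name="weight")

# The controller computes the routing path for a flow (src=0, dst=3).
path = nx.dijkstra_path(G, source=0, target=3, weight="weight")
print(path)  # [0, 1, 2, 3] -- avoids the heavily weighted link (0, 2)
```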
Design of SAC-CAI-EGCN
SAC-CAI-EGCN includes an actor net, two critic nets, and two target critic nets. The structure of the actor and critic nets is shown in Figure 2. The CeGCN part is employed to represent nodes and links; the resulting link embeddings and node embeddings are concatenated as input to the actor and critic nets.
CeGCN part
To achieve simultaneous embedding of nodes and links, a three-layer network structure called CeGCN is designed, as shown in Figure 2. It consists of two edge-wise layers and a node-wise layer. The node-wise layer updates node embeddings by combining the updated link embeddings with the propagation process of Equation (1). The edge-wise layer updates link embeddings based on the input data with the information propagation of Equation (2).

The node-wise propagation of node features is shown in Equation (1):

$$H_v^{(l+1)} = \sigma\big( \big(T\,\Phi(H_e^{(l)} P_e)\,T^{\top} \odot \tilde{A}_v\big)\, H_v^{(l)} W_v \big) \quad (1)$$

The edge-wise propagation of edge features is shown in Equation (2):

$$H_e^{(l+1)} = \sigma\big( \big(T^{\top}\,\Phi(H_v^{(l)} P_v)\,T \odot \tilde{A}_e\big)\, H_e^{(l)} W_e \big) \quad (2)$$

in which $T$ is the node-edge transformation (incidence) matrix and $T_{i,m}$ indicates whether node $i$ connects to edge $m$; $\Phi(\cdot)$ denotes the diagonalization operation on a vector; $P_e$ and $P_v$, respectively, represent the learnable weights of the edge and node feature vectors; $\odot$ denotes the element-wise product; and $W_v$ and $W_e$ are the network parameters of the node-wise and edge-wise propagation processes.

In Equations (1) and (2), $\tilde{A}_e$ and $\tilde{A}_v$ are calculated as in Equation (3), where $A_i$ represents the adjacency matrix of the nodes or edges ($i \in \{v, e\}$), $I_{N_i}$ is an identity matrix, and $D_i$ is the diagonal degree matrix of $A_i + I_{N_i}$:

$$\tilde{A}_i = D_i^{-1/2}\,(A_i + I_{N_i})\,D_i^{-1/2} \quad (3)$$
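A compact sketch of one co-embedding step is given below, following the CensNet-style formulation of Equations (1)-(3); the use of PyTorch, the tensor shapes, and the choice of ReLU as the activation σ are assumptions.

```python
# Sketch of one CeGCN edge-wise + node-wise propagation step (Equations (1)-(3)),
# following the CensNet-style formulation; shapes and ReLU are assumptions.
import torch

def normalize(A):
    """A_tilde = D^{-1/2} (A + I) D^{-1/2}, as in Equation (3)."""
    A_hat = A + torch.eye(A.size(0))
    d = A_hat.sum(dim=1).pow(-0.5)
    return d.unsqueeze(1) * A_hat * d.unsqueeze(0)

def cegcn_step(Hv, He, T, Av, Ae, Pv, Pe, Wv, We):
    """Hv: N x dv node features, He: M x de edge features, T: N x M incidence.
    Pv: dv x 1 and Pe must match the updated edge dimension (We's output)."""
    Av_t, Ae_t = normalize(Av), normalize(Ae)
    # Edge-wise layer (Equation (2)): lift node features onto edges through T.
    He_new = torch.relu((T.t() @ torch.diag((Hv @ Pv).squeeze()) @ T * Ae_t) @ He @ We)
    # Node-wise layer (Equation (1)): fold updated edge features back onto nodes.
    Hv_new = torch.relu((T @ torch.diag((He_new @ Pe).squeeze()) @ T.t() * Av_t) @ Hv @ Wv)
    return Hv_new, He_new
```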
DRL model
The agent is trained on a quadruple data structure <S, A, R, S′>, defined in detail as follows:

• State S: the current state mainly includes (1) the representation of nodes and links generated by the CeGCN part; (2) the topology of the network; and (3) the flow request. Specifically, the raw features for representation include the remaining available bandwidth and packet loss rate of each link, and the number of flows and total size of data packets of each node.

• Action A: the weights of the links in the network, which are decimals in the interval (0, 1].

• Reward R: to comprehensively account for packet loss rate, delay, and throughput, a reward function is designed as in Equation (4), where x_t represents the packet loss rate, y_t the delay, and z_t the throughput at time slot t, and γ1, γ2, γ3, and γ4 respectively represent the weights of packet loss rate, delay, throughput, and causal influence in the reward function. The reward function assigns weights to prioritize packet loss rate, delay, and throughput in that order: γ1, γ2, γ3, and γ4 are set to 2, 1.5, 1, and 1, respectively. In Equations (5) and (6), S′_j represents the j-th component of S′, D_KL denotes the KL divergence, and C_j(s) quantifies the causal influence of action A on S′_j given the state S = s_t. In specific scenarios, packet loss rate, delay, and throughput are not on the same scale, so normalization is required. The normalization of packet loss rate and delay is given in Equations (7) and (8), where y_base represents the average delay of all links in the network (set to 5 ms), and x̄_t and ȳ_t denote the average loss rate and delay of the recent flows with the same source and destination as the flow at time slot t, approximated by the moving-average update of Equation (9), whose update rate is controlled by the constant ε = 0.8. The throughput is normalized against the bandwidth requirement z_demand and then log-transformed as in Equation (10), where the parameter b = 0.5 is added to avoid abnormal values in the log operation.

• State S′: after executing action A, state S′ is acquired; it contains the same type of information as state S.

Figure 2. The structure of SAC-CAI-EGCN.
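A hedged sketch of the reward composition is given below. The exact forms of Equations (4)-(10) are not fully recoverable from the text here, so the normalizations are assumptions consistent with the stated constants (γ1-γ4, y_base = 5 ms, ε = 0.8, b = 0.5) rather than the paper's exact formulas.

```python
# Hedged sketch of the reward composition described above. The exact forms of
# Equations (4)-(10) are not fully recoverable here, so these normalizations
# are assumptions consistent with the stated constants.
import math

G1, G2, G3, G4 = 2.0, 1.5, 1.0, 1.0   # weights for loss, delay, throughput, CAI
Y_BASE, EPS, B = 5.0, 0.8, 0.5        # base delay (ms), moving-average rate, log offset

def update_avg(prev, current, eps=EPS):
    # Moving average over recent same-(src, dst) flows (in the spirit of Equation (9)).
    return eps * prev + (1.0 - eps) * current

def reward(loss_rate, delay_ms, throughput, z_demand, causal_influence):
    x = loss_rate                             # already in [0, 1]
    y = delay_ms / Y_BASE                     # normalized against the 5 ms base delay
    z = math.log(throughput / z_demand + B)   # log change with b = 0.5 (Equation (10))
    # Loss and delay penalize the reward; throughput and causal influence add to it.
    return -G1 * x - G2 * y + G3 * z + G4 * causal_influence
```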
The experience of interacting with the environment is stored in the replay buffer and sampled by prioritized experience replay. The policy π_θ(s) is updated by the temporal-difference method, where θ represents the parameters of the policy network. The loss functions of the actor net and critic nets are given in Equations (11)-(14): ε_t is a random noise variable sampled from the unit Gaussian distribution N; a_t is obtained through the reparameterization trick of Equation (11); the loss of the actor net is calculated by Equation (12); a_{t+1} is obtained from π_θ(·|s_{t+1}); and the loss of each critic net is calculated by Equations (13) and (14).
As shown in Algorithm 1, lines 1-3 initialize the actor net, two critic nets, two target critic nets, and the replay buffer. Lines 4-10 collect experience: line 8 calculates the causal action influence r^cai_t, and lines 9-10 calculate the reward r_t and store the experience <s_t, a_t, r_t, s_{t+1}> in the replay buffer R. Line 13 updates the two critic nets Q_{ω1}(s, a) and Q_{ω2}(s, a), lines 14-15 update the actor net π_θ(s), and line 17 softly synchronizes the parameters to the two target critic nets.
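The update loop of Algorithm 1 can be condensed as in the sketch below, which uses the standard soft actor-critic losses; the Actor/Critic class interfaces and the hyperparameter values are assumptions, not the paper's exact implementation.

```python
# Condensed sketch of the SAC-style update in Algorithm 1, using the standard
# soft actor-critic losses; class interfaces and hyperparameters are assumptions.
import torch

GAMMA, TAU, ALPHA = 0.99, 0.005, 0.2

def sac_update(actor, critics, target_critics, optimizers, batch):
    s, a, r, s_next = batch
    with torch.no_grad():
        a_next, logp_next = actor.sample(s_next)         # reparameterized action
        q_next = torch.min(*(tc(s_next, a_next) for tc in target_critics))
        y = r + GAMMA * (q_next - ALPHA * logp_next)     # soft TD target
    for critic, opt in zip(critics, optimizers["critic"]):
        loss_q = ((critic(s, a) - y) ** 2).mean()        # Equations (13)-(14)
        opt.zero_grad(); loss_q.backward(); opt.step()
    a_new, logp = actor.sample(s)                        # reparameterization trick
    loss_pi = (ALPHA * logp - torch.min(*(c(s, a_new) for c in critics))).mean()
    optimizers["actor"].zero_grad(); loss_pi.backward(); optimizers["actor"].step()
    for c, tc in zip(critics, target_critics):           # soft target synchronization
        for p, tp in zip(c.parameters(), tc.parameters()):
            tp.data.mul_(1 - TAU).add_(TAU * p.data)
```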
Safe learning mechanism for routing
To prevent the degradation of performance caused by unsafe strategies, such as routing through failed or heavily congested links, a safe learning mechanism is designed. As shown in Figure 3, for each decision-making process, the control plane determines whether the safe condition is met. For the current action and state s, the safe condition requires two items to be met simultaneously: (1) not passing through failed links and (2) not passing through heavily congested links. If the safe condition is satisfied, the action is output directly. If not, a fallback stable action is output. Specifically, the weights of failed or heavily congested links are modified to the maximum value, which is 1. At the same time, an extra reward penalty is fed back to guide the actor net to generate safer routing policies.
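A minimal sketch of this safety check is shown below; the link-status fields and the penalty value are assumptions.

```python
# Minimal sketch of the safe learning check described above; the link-status
# fields and the penalty magnitude are assumptions.
MAX_WEIGHT = 1.0
PENALTY = -1.0  # extra reward penalty fed back on unsafe actions (assumed value)

def prune_action(weights, link_status):
    """Fall back to a stable action if the policy routes over unsafe links."""
    unsafe = [link for link, st in link_status.items()
              if st["failed"] or st["heavily_congested"]]
    if not unsafe:
        return weights, 0.0                   # safe condition met: output directly
    safe_weights = dict(weights)
    for link in unsafe:                       # push routing away from unsafe links
        safe_weights[link] = MAX_WEIGHT
    return safe_weights, PENALTY              # penalty guides the actor net
```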
Experiments

Simulation setup
A public network topology is used, namely GEANT2, which has 24 nodes and 37 bidirectional links. In the simulation environment, most of the links have a data rate of 10 Mbps, while there are two bottleneck links in GEANT2 with a data rate of 2.5 Mbps. A 10% packet loss is added to each bottleneck link, and each link has a transmission delay of 5 ms. In this study, shortest path routing (SPR) is selected as the typical method for comparison; it only calculates the shortest hop count without considering the network state. The other baseline is SAC with causal action influence modeling, called SAC-CAI.
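For illustration, these simulation parameters can be encoded as link attributes on a graph; the sketch below uses networkx with a random stand-in for the GEANT2 topology, and the bottleneck placement is arbitrary.

```python
# Illustrative construction of the simulation topology described above;
# networkx is an assumption, and the bottleneck placement is arbitrary here.
import networkx as nx

geant2 = nx.gnm_random_graph(24, 37, seed=0)  # stand-in for the 24-node GEANT2 graph
bottlenecks = list(geant2.edges())[:2]        # two bottleneck links (placement assumed)
for u, v in geant2.edges():
    is_bn = (u, v) in bottlenecks
    geant2[u][v].update(
        rate_mbps=2.5 if is_bn else 10.0,     # 2.5 Mbps bottlenecks, 10 Mbps elsewhere
        loss=0.10 if is_bn else 0.0,          # 10% loss added to each bottleneck link
        delay_ms=5.0,                         # 5 ms transmission delay per link
    )
```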
Experiment results
Performance under a given network
For a given network topology, the source and destination nodes of flows are generated randomly, and exactly the same traffic is used to test the three methods. For the GEANT2 network, the duration of flows is set to 35 time slots, and the global steps for the three methods are set to 30,000, 100,000, and 100,000, respectively.
Table 1 presents a comparison of the three methods in terms of performance metrics including packet loss rate, latency, and throughput under the GEANT2 network topology. From the model reward curves, SAC-CAI and SAC-CAI-EGCN converge after ∼100,000 steps, while SPR exhibits congestion and latency at ∼30,000 steps, so the SPR method is run for only 30,000 steps. SAC-CAI-EGCN significantly outperforms SPR and SAC-CAI in all metrics under the same network topology, flows, and traffic intensity. First, the superior performance of SAC-CAI over SPR indicates the positive impact of causal inference in guiding action exploration for network routing. Second, SAC-CAI-EGCN exploits link and node co-embedding to effectively aggregate neighborhood features, thereby enhancing network routing performance in comparison with SAC-CAI.
Performance under different traffic intensities
To investigate the performance of SAC-CAI-EGCN, SAC-CAI, and SPR under different traffic intensities, an additional experiment with 25-time-slot (light-load) flows was conducted. However, due to the poor performance of SPR and the significant difference in data scale compared with the other two methods, only the experimental results of SAC-CAI-EGCN and SAC-CAI are presented, as shown in Figure 4. First, as the traffic intensity increases, the packet loss rate and latency increase, while the throughput decreases. Second, from light to heavy traffic intensity, SAC-CAI-EGCN demonstrates superior performance in terms of packet loss rate, latency, and throughput compared with SAC-CAI.
Conclusion
In this study, a reinforcement learning method based on action influence quantification and GNN is proposed, enabling efficient SDN routing. Experimental results on a publicly available network topology under different traffic intensities demonstrate significant improvements in QoS metrics such as packet loss rate, latency, and throughput compared with the baselines. This validates the effectiveness of SAC-CAI-EGCN in quantifying the causal impact of actions on the environment and simultaneously embedding edge and node features, guiding the generation of efficient SDN routing policies. In the future, we will continue exploring the application of causal reinforcement learning to improving network service quality, such as leveraging counterfactual data augmentation to improve sample efficiency and addressing confounding bias in RL.
Figure 4. Performance results of three methods under different traffic intensities (light-load and heavy-load) in the GEANT2 network. (A) Loss rate under light-load. (B) Delay under light-load. (C) Throughput under light-load. (D) Loss rate under heavy-load. (E) Delay under heavy-load. (F) Throughput under heavy-load.

Table 1. Performance results under the GEANT2 network (bold values indicate optimal performance in each column).
CUSTOMER PATRONAGE INTENTIONS AND MODERATING EFFECT OF CUSTOMER MOOD ON RETAILSCAPE ELEMENTS AND CUSTOMER JOY: A STUDY OF GROCERY RETAIL STORES IN RIYADH
This research aimed to explore the retailscape components and identify the moderating effect of customer mood on the relationship between retailscape elements and customer joy, with evidence from grocery retail stores in Riyadh. The researcher identified the research variables based on a critical review of previous literature. The research sample consists of 289 respondents drawn from a population of customers of grocery retail stores in Riyadh. SEM using SmartPLS was conducted as the analysis tool. The research concludes that retailscape elements can influence customer joy and customer patronage. The results also indicate that customer mood moderates the relationship between retailscape elements and customer joy. Finally, the researcher recommends that future research address further factors such as behavioral intentions and customer satisfaction.
INTRODUCTION
It is a well-accepted concept in previous research that customers buy items for two reasons: functional benefits and enjoyable experiences (Pine and Gilmore, 1999). Many previous studies have observed that retailers and service providers attract buyers through various guarantees and a wide assortment of merchandise available in a single location. Research trends in the field of marketing in particular have begun to change and develop into more specialized research, with many studies in this field. Kotler, one of the most prominent scholars of modern marketing, was the first to explain and describe the retail "atmosphere," which he argued "should be consciously designed to influence shoppers," whereas Baker et al. (1987) defined atmosphere as the physical environment which influences customer judgment of services.
Moreover, the term "servicescape" was used as an alternative for atmosphere by Bitner (1992), who defined it as "a man-made physical environment in which the delivery of service products takes place." The concepts of atmosphere and servicescape are both linked to the capacity of the physical environment to encourage the accomplishment of business and marketing goals. The physical environment connects to people's attributes to determine their reactions, and it has been argued that these environmental factors influence customers and can serve as a source of customer delight and consequent re-patronage (Turley and Milliman, 2000). In addition, an appealing physical environment ought to result in positive customer feelings, which in turn create customer pleasure and future re-patronage. As further evidence, previous studies (e.g., Rai and Medha, 2013; Mellroy and Barnett, 2000; Chiguvi, 2016) clarify and support the relationship between customer pleasure on the one hand and customer patronage on the other.
Elements of Retailscape
Retailscape is known as an alternative term for the physical atmosphere or environment of the retail business. The present study suggests three retailscape elements: ambiance, social interaction, and exterior. These elements are discussed below.
2.1.1. Ambience

Ambient features are the physical characteristics of the retail shop environment, such as layout, design, colors, music and lighting. Generally, these features influence all five senses (Bitner, 1992), and customers differ in how they respond to products and services (Kotler, 1973). Many researchers have found that ambient conditions are a driver influencing customers to interact with and respond to the environment (Baker, 1987). The ambiance or background conditions of an environment can include various features such as layout design, organization, sorting, colors, sound and temperature, all of which depend on the features of the services offered to customers. Turley and Milliman (2000) point out that service environments such as retail stores comprise a large number of atmospheric elements, such as layout, design, color and music, to which customers show interactive responses including sales, quantity, time spent, and impulse purchases in the stores. It is very common for retailers to revise their locations' ambiance to attract customers; for example, consumers will notice if the temperature is too high or too low or if a fragrance is too strong. Chebat and Michon (2003) found that environmental cues of ambiance directly affected shoppers' perceptions and that customers' pleasure was increased by more positive ambient conditions. Similarly, Cho and Lee (2017) recognized a positive relationship between environmental cues such as ambient conditions and the level of customers' pleasure and purchase intention. Further literature also supports that ambiance evokes consumers' positive behavioral responses.

2.1.2. Social interaction

Dongseong and Jinwoo (2004) point out that interaction is a communication behavior between two or more parties, particularly when they are beneficial to each other. Ali and Omar (2014) state that the encounter of a customer with any organization may be framed as a function of two distinct perspectives, the physical and social environments. Accordingly, the physical environment consists of tangible and intangible features (e.g., layout, color, organization, sorting, temperature) (Bitner, 1992), whereas the social environment of an organization refers to the interrelation between and among customers and employees. Choi and Kim (2013) argue that customer behavior is directly affected by customer-to-staff and customer-to-customer interaction, and research conducted by Cetin and Dincer (2014) indicates a strong significant relationship between customer satisfaction and customer behavior. Moreover, previous studies have examined the relationship between employees and customers; for example, Bitner (1992) explains the impact of physical objects and the physical atmosphere on employee and customer behavior and how they affect quality and customer interaction. Loudon and Bitta (1993) argue that, as a rule, customers prefer to shop at locations where they can expect good service and help from the service providers. In addition, customers generally prefer to trade in areas, such as shopping centers, where organizational personnel such as sales representatives are seen as accommodating, friendly, and affable. The environment of the market, including design, color and lights, plays a critical role in motivating customers to re-visit again and again.
Beatty et al. (1996) found that through positive social interaction, customers and employees reinforce their loyalty to the business, and customers consider their visit in terms of the trust and friendship they feel toward employees, causing customers to feel pleasure in the interaction. Furthermore, employees' interaction with customers helps the organization create a positive image of the business, and customers feel confidence, which leads to more purchasing behavior in the future (Jang et al., 2015).

2.1.3. Exterior

Berman and Evans (1995) point out that the service industry does not give enough attention to exterior elements despite their significant importance, especially for stores that require a leading set of elements that are normally seen first by a customer. If the exterior of a store is not managed well, the other components might not matter (Turley and Milliman, 2000). Berman and Evans (1995) argue that the exterior must be alluring and well-formulated to encourage positive customer behaviors for retail stores in the service industry to be effective and successful. The exterior includes the front of the trade area, entrances, marquee, displays, the building design, the surrounding area, car parking, and other elements (Turley and Milliman, 2000). Accordingly, there is a lack of research addressing the store's exterior atmosphere. Turley and Milliman (2000) confirm this scarcity of research and argue that only a limited number of previous studies investigated the impact of the exterior of a store on buyer behavior. Previous research has tried to investigate other factors: the prototypicality of a store design; the effects of external window displays (Edwards and Shackley, 1992); and the influence of car parking and location on quality perceptions. On the other hand, Berman and Evans (1995) address atmospheric elements through four sub-categories, namely design and layout, internal décor, and interior and exterior elements. They also recommend that organizations or stores should pay more attention to the external and exterior elements, which are the first things seen by customers.
All of the above-cited studies suggest that exterior factors have a strong inspirational relationship with the behavior of retail consumers. Much research has investigated external factors related to the macro-environment, shopping districts, and shopper behavior, revealing that "while customers form definite perceptions of a large complex macro-environment these perceptions vary among individuals." As noted in the above discussion on exterior variables, there is a lack of research addressing the relationship between exterior elements and the retail industry. In contrast, there are many previous studies dealing with interior factors and customers' perceptions of these factors. In line with the above discussion and the recommendations of previous research, the exterior elements need and deserve more consideration and attention because of their significant and critical impact on customer perception and behavior.
Retailscape elements and customer patronage
The term "retailscape", coined by Bitner (1992), can be defined as the internal environment of a location in the service industry which is framed to combine a visual set of elements in traditional shopping centers and which includes visual displays, different color schemes, areas that facilitate comfortable movement, different features of decorations and some sense of stimulation (e.g., air temperature, odor, different lighting schemes) that stimulates an aesthetic responses from customers. The past literature in the service industries has explored the influence of environmental factors and their effects on customer emotions and behaviors, and these factors have traditionally received attention from geographers, architects and environmental psychologists (Wang and Mattila, 2015). Winter and Chapleo (2017) and Mari and Poggesi (2013) state that More than two and half decades after they were first discussed in the literature, these distinctions are still seen to be significant and an exact reflection of a physical benefit environment.
Many researchers have considered retailscape in their studies; for instance, Baker (1987) explored tangible and intangible elements including ambient conditions, the design of the business, and social interaction, and explained how customers are affected by environmental issues in retail locations and the service industry. Likewise, Bitner (1992) illustrated more precisely the environmental components in service settings and named these dimensions the servicescape.
The term servicescape has been used in the investigation of cross-national markets and the examination of vital components of the service industry. Accordingly, the servicescape is defined as a set of components related to interior design, architecture, colors, product organization, lighting style, and music (Baker, 1987). All these factors influence customers' re-patronage intentions, and there is broad agreement that they are critical keys for customers' future re-patronage. Moreover, customer re-patronage intentions eagerly lead customers to shop at one place instead of another. Shoppers select stores for several reasons, including retailscape elements, convenience of location, service quality, social interaction, hygienic reasons, place commitment, and emotional attachment. The significant and critical role of environmental issues and their interrelation with customer re-patronage has been addressed by many researchers in different contexts: Baker (1987), Wakefield and Blodgett (1994), and Wakefield and Blodgett (1996) focus on leisure service environments; Edwards and Gustafsson (2008), Han and Ryu (2009), Harris and Ezeh (2008), and Heung and Gu (2012) focus on the food industry; and Ali et al. (2015) and Newman (2007) investigated airport services.
Accordingly, past studies recommend that service providers organize and manage in-store stimuli to make them attractive to customers for future re-patronage; furthermore, service providers are committed to presenting their services and products to customers and to creating mental responses that involve buyers in the act of purchasing through, for example, beliefs, thoughts, interest, memories, and so on (Bustamante and Rubio, 2017).
According to the aforementioned discussion, the following hypothesis has been generated: H1: There is a positive relationship between retailscape and customer patronage.
Retailscape elements and customer joy
Many researchers explain the significance of retailscape elements and their role in customer pleasure and quality. Bitner (1992) states that "fine dining" especially needs ambient conditions, while "fast food" may need other factors. Wakefield and Blodgett (1996) argue that facility aesthetics and cleanliness are the influential and critical factors in customers' perception of quality. Bitner (1992) points out that layout design and functionality are most evident in complex self-serve situations and that design in discount shops simplifies and encourages the fulfillment of functional needs. Similarly, stimulating and mood-altering designs have been created to increase customers' pleasure in the service industry (Wakefield and Blodgett, 1996).
Music also moderates customers' perception of service attitude and is a precursor to service assessment. Previous retailscape studies suggest that services include numerous interactions between customers and employees, which lead to distinct needs concerning the physical surroundings. These studies also suggest that the retailscape itself can cause particular customer responses, which directly help categorize the service in a specific way. The retailscape elements have significant importance in the planning, design, and execution of services within any industry and should be measured continuously (Ostrom et al., 2010).
These ideas are central because they all propose that a relationship exists between how a business's strategy has shaped customers' perceptions of a suitable retailscape for the business and customers' joy with the service they eventually receive. These central ideas give a starting point for considering how service firms may best adjust and integrate service offerings (Ostrom et al., 2010) and incorporate design considerations into service practices, processes and systems. According to the aforementioned discussion, the following hypothesis has been generated: H2: There is a positive relationship between retailscape elements and customer joy.
Customer joy and customer patronage
In the marketing field, customer pleasure and customer satisfaction have been seen as critical and essential elements for profit. All organizations seek to please their customers for two main reasons: to guarantee customers' re-patronage and to survive and succeed in a dynamic market and environment. Chiguvi (2016) argues that customer satisfaction, the feeling that a product meets one's needs, is what is mainly called customer pleasure. Additionally, Chiguvi (2016) defined customer patronage as a commitment to continue purchasing a preferred product or service consistently regardless of situational variables and marketing efforts that may result in switching behavior. Customer patronage is a customer's commitment to a company, supporting it by buying its commodities or services regularly and recommending them to others (Mellroy and Barnett, 2000). This particular relationship helps customers and gives them a good feeling of perceived better value in the long term. Rai and Medha (2013) argue that customer patronage is shaped by sustained customer pleasure and joy with the service provider, which creates readiness and consistency in the relationship through preference, patronage and premium. According to the aforementioned discussion, the following hypothesis has been generated: H3: There is a positive relationship between customer joy and customer patronage.

Customer mood as a moderator

Gardner (1985) mentioned that customer mood has been addressed and explained by previous research as a factor influencing customers' capability to recall information. Specifically, customer mood is defined as a designation for diffuse affective states that are about nothing specific, or about everything in the world in general. In addition, moods or feelings may be considered long-lasting affective states which are experienced without simultaneous awareness of their elicitors (Lord and Kanfer, 2002; Frijda, 1993). Moreover, moods can have a subtle influence on information processing. Isen (2001) links positive moods with a tendency toward experiential processing; in contrast, negative moods are linked with more cautious and organized processing. From another perspective, Gohm and Clore (2000) argue that positive moods inspire the use of scripts and social information processing, while negative moods increase the use of individual information. Isen (2000) points out that positive moods lead to more enjoyment and support the creation of new ideas, which in turn enhances creativity.
Furthermore, in previous marketing research, customer mood has not been addressed as a distinct construct, especially in the services business, although this sector implements different strategies and approaches in order to influence customers' moods positively. A variety of store features generates several customer reactions, and store layout and design might affect customer mood and thereby achieve more customer re-patronage. Osman et al. (2014) explained that these characteristics incorporate the environment of the service industry and the general design of the store's atmospheric environment for customers, where service providers are confident in their capacity to create a positive disposition during the customer's visit; in response, customers develop favorable in-store behavior (e.g., time spent, amount of money spent, re-patronage intentions and positive communication). Moreover, the store's atmosphere and environment might influence customers' decision-making and selection options. Nevertheless, all atmosphere or environment attributes tend to be effective and critical marketing approaches if they are applied and understood by service providers (Osman et al., 2014).
Since mood is a state of mind, much previous literature has addressed atmosphere elements as effective tools or techniques that affect customer mood. Taylor and Cosenza (2002) note that ambient cues refer to all elements and attributes such as design, colors, décor, architecture and overall store layout. These factors and elements affect each other; for example, the elements that might shape the influence of music include intensity, style, tempo and volume (Kim and Fiore, 2007). Store attributes have a significant influence on customer enjoyment while customers are in the store, which accordingly leads to more shopping and increases future customer re-patronage. In addition, the environmental and atmospheric characteristics of a store have a strong relationship with customers' emotional states and their behavior (Bloch et al., 1994). An enjoyable ambience can stimulate an optimistic mood, which reflects on the customers, makes them pleased and enhances customers' intentions to patronize. The research framework of the study is presented in Figure 1. From a mediating-role perspective, Lucia-Palacios et al. (2016) investigated the mediating role of customer mood between the business atmosphere and customer behavior, concluding that optimistic moods might moderate the generation of affective and intellectual responses resulting from store attributes. Babin and Darden (1996) found that mood might bias customers' perception of environmental factors and therefore influence customer behavior. According to the aforementioned discussion, the following hypothesis has been generated: H4: Customer mood moderates the relationship between retailscape elements and customer joy.
METHODOLOGY OF RESEARCH
The main research method is a quantitative approach using an adopted questionnaire in order to test hypotheses and answer questions concerning the research problem. The questionnaire consists of several constructs: retailscape elements adopted from Bustamante and Rubio (2017), customer mood items adopted from Bustamante and Rubio (2017), and customer patronage adopted from Panichpathom (2016). A seven-point Likert scale was used.
The questionnaires for this study were distributed in face-to-face interactions with customers of three retail stores in the car parking areas of these stores. Respondents were informed of the aim of the study before completing the questionnaire. A total of 297 out of 300 questionnaires were returned, and 289 of these were deemed complete and usable. Table 1 summarizes the sample characteristics.
Data Analysis
SmartPLS 3 for SEM and the Statistical Package for the Social Sciences (SPSS) version 22 were used to analyze the research data. SEM supports researchers in investigating the relationships among research variables.
Measurement model
Convergent validity refers to the level to which indicators of a specific variable share a high proportion of variance and are highly inter-correlated amongst themselves (Hair et al., 2010). It can be tested using loadings and the average variance extracted (AVE) (Hair et al., 2010). As a convergent-validity test, AVE denotes the amount of variance explained in indicators by their respective variable in relation to the unexplained variance due to measurement error (Fornell and Larcker, 1981). Hair et al. (2010) point out that the acceptable value of AVE is greater than 0.5, and all AVE values in this study exceed this threshold. Furthermore, discriminant validity was assessed using a correlation matrix. The obtained results are presented in Tables 2-5. For example, the customer-mood item "The environment of this retail store and the display of its products and services makes me feel hopeful" loaded at 0.831, and the corresponding item ending "...makes me feel enthusiastic" loaded at 0.867.
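As a small illustration of the AVE check described above, the following sketch computes AVE from standardized loadings, using the two customer-mood loadings quoted above (the construct's remaining items are not shown here).

```python
# Minimal sketch of the AVE computation described above.
def ave(loadings):
    """Average Variance Extracted: mean of squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# The two customer-mood loadings quoted above; remaining items are not shown.
print(round(ave([0.831, 0.867]), 3))  # 0.721 > 0.5 -> convergent validity holds
```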
Hypothesis testing
The above results provide evidence of all types of validity for the research model, indicating that the researcher can proceed with further analysis. The bootstrapping method in SmartPLS 3 was applied to test the research hypotheses. Figure 2 and Table 6 show the analysis results for all hypotheses, which support Hypothesis 1 and the remaining hypotheses. Furthermore, effect size (f²) was calculated based on Cohen's (1988) cut-off values: 0.02 indicates a small effect, 0.15 a medium effect, and 0.35 a large effect. Table 6 summarizes the results.
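For reference, a minimal sketch of how Cohen's f² is computed from the R² values with and without a given predictor is shown below; the R² inputs are illustrative, not values from this study.

```python
# Cohen's f-squared effect size from R-squared values (illustrative inputs).
def f_squared(r2_included, r2_excluded):
    return (r2_included - r2_excluded) / (1.0 - r2_included)

# 0.02 ~ small, 0.15 ~ medium, 0.35 ~ large (Cohen, 1988)
print(round(f_squared(0.40, 0.35), 3))  # illustrative values -> 0.083 (small effect)
```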
Moderation analysis
In this research, the moderating role of customer mood between retailscape elements and customer joy was tested. For this purpose, the bootstrapping method in SmartPLS 3 was applied. Figure 2 and Table 6 show the analysis result for H4. The outcome supports and confirms our prediction, showing that customer mood moderates this relationship (t = 13.900, p < 0.05). Thus, H4 was supported (Table 7).
CONCLUSION
The findings of this research support the previous literature by providing empirical evidence for the impact of retailscape features and elements on two factors, customer joy and customer patronage, and for the moderating role of customer mood. A limited number of studies have investigated these variables, especially in the Arab context, and no research had been conducted in the Saudi Arabian context. This research investigated four main hypotheses to achieve the research goals, and the analysis supports all of them. Practically, retailscape elements can influence customer joy and customer patronage. The results also indicate that customer mood moderates the relationship between retailscape and customer joy.
RECOMMENDATIONS AND LIMITATIONS
Like other research, this study faced several obstacles, challenges, and limitations. The first of these is geographical restriction, as the study was limited to grocery retail stores in the city of Riyadh only. Time and effort limitations led to a focus on only a few retailscape elements. Based on the findings, the researcher recommends that further research be conducted focusing on more variables, such as customer behavior and satisfaction.
Harnessing T-Cells for Enhanced Vaccine Development against Viral Infections
Despite significant strides in vaccine research and the availability of vaccines for many infectious diseases, the threat posed by both known and emerging infectious diseases persists. Moreover, breakthrough infections following vaccination remain a concern. Therefore, the development of novel vaccines is imperative. These vaccines must exhibit robust protective efficacy, broad-spectrum coverage, and long-lasting immunity. One promising avenue in vaccine development lies in leveraging T-cells, which play a crucial role in adaptive immunity and regulate immune responses during viral infections. T-cell recognition can target highly variable or conserved viral proteins, and memory T-cells offer the potential for durable immunity. Consequently, T-cell-based vaccines hold promise for advancing vaccine development efforts. This review delves into the latest research advancements in T-cell-based vaccines across various platforms and discusses the associated challenges.
Introduction
Vaccines, biological agents employed in the prevention of infectious diseases, function by stimulating the host immune system to generate immune responses against specific pathogens. They play a crucial role in bolstering resistance, reducing disease incidence, and mitigating transmission risks, thereby safeguarding global public health [1]. Their origins can be traced back to the late 18th century, credited to the pioneering work of British physician Edward Jenner. In 1796, Jenner successfully employed the cowpox virus as a vaccine to prevent human smallpox infection, marking the genesis of modern vaccinology [2].
Vaccine-induced responses typically involve B-cell-mediated antibody responses and T-cell responses. While a robust antibody response has historically been a major focus in vaccine development, recent years have seen increased attention on the protective role mediated by T-cell responses. This has led to a growing focus on the development of T-cell-based vaccines [2]. The aim is to develop vaccines capable of inducing sufficient quantities of specific CD4+ and/or CD8+ T-cells with the necessary phenotype and effector functions. These induced specific T-cells can directly promote pathogen clearance through cell-mediated effector mechanisms [3], going beyond the helper role of CD4+ T-cells for B-cells. The protective potential of T-cell-based vaccines has been demonstrated in the real world, notably with the tuberculosis vaccine, Bacille Calmette-Guérin (BCG). First used in the 1920s and still in use today, BCG is considered one of the first vaccines to confer protection primarily through inducing T-cell responses [2], although this was not its original development goal. Mycobacterium tuberculosis, being an intracellular pathogen, cannot be recognized by antibodies and is only controlled through T-cell-mediated effector mechanisms [4,5].
Cellular immunity constitutes a vital component of the host's antiviral defense mechanism, involving CD4+ and CD8+ T-cells. Multiple studies underscore the pivotal role of virus-specific T-cells in orchestrating immune protection and regulation against viral infections. CD8+ T-cells, also known as cytotoxic T-lymphocytes (CTLs), are indispensable for combatting viral infections. In addition to their cytotoxic function, they secrete cytokines like TNF-α and IFN-γ, endowed with antiviral properties, thereby aiding in the elimination of viruses [6]. CD4+ T-cells also play a crucial role in combatting virus invasion. Functioning as T-helper cells (TH), they furnish costimulatory signals via CD40/CD40L signaling to antigen-presenting cells (APCs) during B-cell and CD8+ T-cell priming [7,8]. Notably, in mice, some CD4+ T-cells exhibit cytotoxic capabilities by releasing perforin, providing direct defense against influenza A virus (IAV) infection [9]. Studies have revealed that pre-existing CD4+ T-cells in humans respond to pandemic 2009 H1N1 peptides, correlating with reduced virus shedding and milder illness during IAV infection [10]. Additionally, research confirms the existence of protective cross-reactive T-cell responses among human coronaviruses [11-13], mirroring the diverse protective effects of cross-reactive T-cells against the influenza virus (IV) [14,15]. Notably, specific memory T-cell populations boast an extended lifespan, with highly conserved T-cell epitopes, rendering viruses more prone to evading humoral immunity than T-cell immunity [16-21]. Given the advantages of T-cell responses, the field of vaccine research is increasingly gravitating towards eliciting T-cell responses. What, then, are the attributes that characterize T-cell-based vaccines? The following points outline their distinctive features.
1. Neutralizing antibodies typically recognize spatial epitopes located in specific regions of viral surface proteins that are involved in the initial interaction with host cells [22,23]. A single amino acid mutation may lead to the evasion of neutralizing antibody recognition, and pre-existing antibodies in the host may drive antigenic drift, as notably observed in IAV and coronaviruses [24,25]. In contrast, the epitopes recognized by T-cells are mostly linear epitopes that can be distributed on the surface and inside viral particles, expressed in infected cells, and presented by antigen-presenting cells (APCs). Point mutations outside the anchor residues can still be recognized by T-cells; therefore, the T-cell antigen repertoire is broader and less susceptible to antigenic drift [25]. Consequently, T-cell-based vaccines are expected to confer protection against infections caused by rapidly mutating viruses.
2. Viruses within the same genus typically harbor conserved proteins [26], which can be strategically harnessed in T-cell-based vaccines to elicit a broad-spectrum cross-reactive T-cell response effective against various viruses within the genus, as T-cells recognize antigen motifs. These highly conserved proteins are commonly found within virus particles and are distinct from the surface proteins targeted by neutralizing antibodies during the viral invasion process [22]. For instance, the N protein of coronaviruses is typically conserved [27], yet neutralizing antibodies directed against it are rare.
3. Infections involving intracellular pathogens pose a challenge for antibody-mediated clearance, as antibodies cannot access the pathogens within infected cells. In such cases, the cytotoxic activity of specific T-cells becomes pivotal. For instance, in HIV infection, HIV-specific CD8+ T-cell-mediated cytotoxic immunity can prevent disease progression and virus transmission, presenting opportunities for therapeutic and prophylactic antiviral interventions [28-30].
4. Virus-specific memory T-cell responses have the potential to confer prolonged protection. Studies with 6-year and 17-year follow-ups of SARS survivors revealed long-lasting specific T-cell responses, even after antibody levels had waned to undetectable levels [31,32].
5. Through immune modulation, tissue-resident memory (TRM) T-cells can be induced to provide enduring immunosurveillance and protective functions at local tissue sites, which is crucial in combating viruses that invade local tissues [33]. Studies on various viruses, such as herpes simplex virus [34], vaccinia virus [35], and some respiratory tract infections [36,37], have demonstrated the potent protective role of TRM T-cells at tissue sites.
6. It is important to note, however, that T-cell-based vaccines also face limitations. Historical success in preventing diverse infections has been limited, and while research on T-cell-based vaccines in the HIV field has been ongoing, clinical application remains challenging [28]. Factors like HLA restriction constrain the applicability of T-cell-based vaccines [28], and T-cell-mediated immunity lacks the capacity to neutralize cell-free virus particles.
Given the current lesser prevalence of T-cell-based vaccines compared to antibody-focused vaccines, comprehensive details on T-cell-based vaccine platforms remain scarce. This review endeavors to furnish an encompassing view of T-cell responses elicited by diverse vaccine platforms. Figure 1 provides an overview of the host immune responses in vaccination and in infection post-vaccination, emphasizing adaptive immune responses. Through this overview, we aim to assess the ongoing progress of these platforms, evaluate their respective strengths and limitations in development, and discuss the hurdles encountered in the pursuit of T-cell-based vaccine development.
Inactivated Vaccines
Inactivated vaccines are formulated by subjecting live virus cultures to a sequence of purification technologies, subsequently rendering them non-infectious through physical and chemical means. These vaccines are characterized by their composition, which closely resembles the natural virus structure. With a lengthy history of licensing and established safety profiles, inactivated vaccines for viral diseases have garnered trust. While they predominantly elicit a humoral immune response, certain vaccines, such as inactivated IV and SARS-CoV-2 vaccines, have demonstrated the ability to induce T-cell responses as well.
Influenza virus vaccines (IVVs) currently in use come in two forms, inactivated or live attenuated, targeting tri- and tetravalent vaccines of IV A and B [38]. Inactivated IVVs incorporate epitopes derived from the viral surface proteins hemagglutinin (HA) and neuraminidase (NA), stimulating the production of antibodies that target HA proteins crucial for combating viral infections [39]. While existing inactivated IVVs primarily induce CD4+ T-cell responses to HA and nucleoprotein (NP) [40,41], CD8+ T-cell responses remain relatively ineffective [42-44]. Inactivated SARS-CoV-2 vaccines have played a crucial role in combating the COVID-19 pandemic. In China, the widespread administration of inactivated SARS-CoV-2 vaccines from companies like Sinovac Biotech and China National Biotec Group Company Limited has demonstrated safety and elicited antibody responses in clinical trials [45,46]. Following inoculation with inactivated vaccines, the population of helper T-cells (Th1) tends to increase [47], responding by secreting interferon-γ (IFN-γ) [48,49].
In addition to inactivated IV and SARS-CoV-2 vaccines, other inactivated vaccines have found clinical applications, including those for polio, Japanese encephalitis, rabies, and hepatitis A, as well as EV71 hand, foot, and mouth disease vaccines. The advantages of inactivated vaccines include safety, easy storage, low risk of contamination, insensitivity to neutralization by maternal antibodies, and the ability to generate combined or multivalent formulations. Nonetheless, drawbacks exist, such as the requirement for large vaccination doses, a short immunization period, and a single immune pathway that predominantly stimulates humoral immunity rather than mucosal immunity.
Live Attenuated Vaccines
Live attenuated vaccines utilize weakened strains of viruses to trigger immune responses, constituting a common vaccination approach. Here, we discuss several live attenuated vaccines utilized for preventing viral infections, with a focus on their capacity to induce T-cell responses.
Live attenuated IVVs emulate natural IV infections, prompting the production of protective antibodies and cellular immunity. In contrast to inactivated vaccines, live attenuated vaccines elicit superior CD4+ and CD8+ T-cell responses in children, thereby enhancing cross-protection [43,50,51]. Moreover, live attenuated vaccines can sustain specific T-cell responses for six months to one year following childhood vaccination [52]. Zostavax, a live attenuated vaccine containing the varicella-zoster virus (VZV), is recommended for adults over 50 to prevent shingles. Immunization with Zostavax augments the multifunctional memory of CD4+ T-cells, broadening the T-cell receptor repertoire of antigen-specific CD4+ T-cells in adults. In the general population, T-cell responses peak eight days after immunization. Older individuals vaccinated with the attenuated varicella-zoster virus exhibit defects in T-memory-cell differentiation and experience more rapid antigen-specific T-cell loss, leading to incomplete protection [53-55]. In addition, several effective live attenuated vaccines have been developed to prevent flavivirus infections [56]. The 17D vaccine is a live attenuated vaccine developed against yellow fever virus (YFV), and three strains of 17D vaccines are presently in production [57]. The humoral and cellular immunity triggered by the 17D vaccine has been thoroughly studied in humans. In addition to the production of neutralizing antibodies, the 17D vaccine also initiates a robust, long-lasting, and multifunctional T-cell immune response [58].
Live attenuated vaccines harbor weakened or closely related viruses, such as those used in vaccines for measles, mumps, rubella, chickenpox, and shingles. This technology facilitates mass production akin to inactivated vaccines. Although live attenuated vaccines can elicit both humoral and cellular immune responses, providing prolonged protection, their storage and transportation must adhere to precise requirements due to their active nature [59]. In individuals with compromised immune systems, there is a higher risk that the attenuated strain reverts to a more virulent form, which could lead to illness in vaccinated individuals. Even in individuals with normal immune function, disease may occur if the vaccine strain reverts to a more virulent state.
Viral Vector Vaccines
Viral vector vaccines show great promise as they utilize replication-deficient vectors capable of expressing foreign antigens. These vaccines can trigger both humoral and cellular immune responses without the need for additional adjuvants. They comprise viral particles that have undergone modification to include genes encoding desired antigens [60,61]. Notable viral vectors utilized in vaccine development include adenoviruses (AdVs), modified vaccinia Ankara (MVA), IV, measles virus (MeV), and lentiviruses (LVs).
MVA serves as a robust foundational vector for vaccines, owing to its stability, immunogenicity, high safety profile, and, notably, its genome-coding capacity [76]. Preclinical studies in mice and macaques have demonstrated that immunization with an MVA-delivered stabilized SARS-CoV-2 S, alone or in combination with the N protein, elicited robust CD8+ T-cell responses [77,78]. Moreover, an MVA-based HIV vaccine candidate has exhibited remarkable safety in clinical trials involving 500 participants, including HIV patients and immunocompromised individuals [79,80]. Similarly, other poxvirus-based vaccines, such as the temperature-stable replicating vaccinia virus vector expressing the SARS-CoV-2 receptor-binding domain (RBD), have shown the ability to induce T-cell responses without compromising safety [81]. Overall, these vaccinations hold significant potential to emerge as highly effective shields against diseases caused by poxviruses and other pathogens.
T-cell-based vaccinations against pathogen infections can also utilize other viral vectors, including IV and MeV. In the fight against COVID-19, IV-based vaccines have shown effectiveness both independently and as booster vaccines. One such example is scPR8-RBD-M2, a single-round replication IV-based COVID-19 vaccine. In mice, two doses of intranasal (i.n.) immunization with this vaccine induced cellular, humoral, and mucosal immune responses [82]. Additionally, a live attenuated IV vector-based SARS-CoV-2 RBD vaccine elicited mucosal RBD-specific IgA and IgG responses, as well as specific T-cell responses in hamster lungs [83]. Similarly, a COVID-19 vaccine delivered by live MeV vectors expressing the prefusion-stabilized S protein demonstrated potential in mice by inducing Th1-biased T-cell responses [84,85]. Live attenuated MeV-based vaccines have been among the most effective and safe human vaccines in clinical use [86]. MeV-based vaccines targeting MERS-CoV [87], SARS-CoV [88], West Nile virus [89], and Chikungunya virus [90] have been developed, inducing robust levels of neutralizing antibodies (NAbs) and T-cells. Highly immunogenic and efficacious MeV-based vaccine candidates may be incorporated into regular MMR (measles, mumps, rubella) vaccination regimens, providing additional defense against various illnesses.
LVs, initially derived from HIV, are also ideal platforms for vaccines due to their robust immunogenicity and capacity to elicit immunological responses even after a single dose. LV-based vaccines have made significant strides in pre-clinical studies targeting SARS-CoV-2 and Zika viruses [91-93]. An intranasal booster utilizing an LV-based vaccine containing the S of the Beta variant has bolstered systemic and lung-resident T- and B-cell immunity, offering protection against Omicron variant infection [94]. However, concerns have been raised regarding the integration of LV vector-based vaccines into the host genome. To address this, a non-integrative LV T-cell antigen-based vaccine for human coronaviruses has been developed, generating protective T-cell immunity and providing a broader defense against SARS-CoV-2 variants [95]. Furthermore, a non-integrating LV-based Zika vaccine encoding the pre-membrane and envelope glycoproteins of Zika virus strains elicited robust neutralizing antibody titers and conferred full protection against Zika challenge [92]. These findings suggest promising prospects for the development of vaccines utilizing LVs.
In general, viral vector vaccines are considered safe and effective in activating both innate and adaptive immunity. Their capacity to replicate natural infection processes, effectively deliver antigens, and prompt robust immune responses renders them valuable tools in vaccine development. However, challenges such as pre-existing immunity and limitations in vector packaging must be overcome to enable broader utilization of viral vectors in vaccine design. Furthermore, thorough studies on long-term safety and efficacy are still needed.
Subunit Vaccines
Non-viral vaccines are vaccines that utilize only specific components (subunits) of viruses or bacteria essential for immune system recognition. Unlike traditional vaccines, they do not contain the entire microorganism or utilize benign viruses as vectors. This section will explore various subunit vaccines and delve into the advancements in peptide vaccine research.
Virus-like particle (VLP) vaccines are a type of subunit vaccine that mimics virus morphology without containing a viral genome. They are produced by expressing antigen proteins in eukaryotic or prokaryotic systems, allowing the self-assembling antigen proteins to form particles [96]. Dendritic cells take up VLPs, process them, and present them on MHC I and II to trigger CD8+ and CD4+ T-cell responses [97-100]. Many VLP vaccines, including those targeting hepatitis B virus (HBV), human papillomavirus (HPV), IV, and Zika virus, are now in clinical use or undergoing clinical trials.
Nanoparticle (NP) vaccines are created by chemically cross-linking proteins and carrier molecules to boost the immune response and prevent antigen breakdown. In certain studies, researchers have incorporated immune receptor agonists (Toll-like receptor ligands) into NP vaccines to increase dendritic cell activation and stimulate robust CD8+ and CD4+ T-cell reactions [101-103].
Recombinant protein vaccines are a type of subunit vaccine that is currently extensively researched. Flublok, for instance, is a recombinant quadrivalent whole-HA protein subunit vaccine designed to prevent IV infections. It is notable for being the first licensed IVV to utilize recombinant viral proteins instead of antigens taken from live influenza viruses [104]. In comparison to split vaccines, Flublok demonstrates superior effectiveness in stimulating IV-specific CD4+ T-cells and CD4+ T-cell-dependent antibody responses in humans [105]. Another noteworthy vaccine is the gE protein-based herpes zoster (HZ) vaccine, which incorporates a second-generation lipid adjuvant (SLA), a synthetic TLR4 ligand, within an oil-in-water emulsion (SLA-SE). This formulation induces polyfunctional CD4+ T-cell responses in both young and aged mice, with sustained T-cell responses observed up to 5 months post-immunization [106]. Likewise, other VZV recombinant protein vaccines have the capability to provoke multifunctional and enduring T-cell responses [107,108].
While VLP, NP, and recombinant protein vaccines can trigger T-cell immune responses, their antigen designs primarily target the induction of humoral immunity. A different type of vaccine, the peptide vaccine, focuses on epitope peptide design rather than whole-protein design. Peptide vaccines utilize peptides, typically derived from pathogen proteins and shorter than full proteins, to elicit an immune response against infections. They show potential in developing vaccines for infectious diseases, with numerous reports highlighting their effectiveness. T-cell epitope-targeting vaccines, such as CoVac-1, incorporate multiple T-cell epitopes from various SARS-CoV-2 viral proteins along with the Toll-like receptor 1/2 agonist XS15. CoVac-1 can stimulate robust SARS-CoV-2 T-cell immunity and potentially offer cross-reactivity to SARS-CoV-2 variants of concern due to the conservation of the selected T-cell epitopes [109]. In patients with B-cell/antibody deficiency, a single dose of CoVac-1 can induce extensive and efficient T-cell responses with a favorable safety profile [110]. Another example is Multimeric-001, a synthetic recombinant peptide vaccine comprising nine T-cell and B-cell epitopes sourced from IV NP, M1, and HA proteins. When paired with the adjuvant Montanide ISA 51VG, Multimeric-001 has demonstrated the capacity to induce cellular responses in both healthy and older individuals [111,112].
Compared to whole-virus vaccines, non-viral vaccines are considered safer and more stable. They can provide long-lasting immunity, with some formulations reducing the need for repeated booster shots. Subunit vaccines can be manufactured on a large scale using expression systems such as E. coli, baculovirus, and yeast. These vaccines not only elicit humoral immune responses but also activate T-cell-mediated immunity.
Dendritic Cell Vaccines
Dendritic cells (DCs) are tissue-resident and circulating cells that sense microbes, initiate innate immune defense reactions, and present microbial proteins to T-cells to initiate adaptive immune responses [113]. DCs are the most effective antigen-presenting cells capable of inducing CD4+ and CD8+ T-cell responses. Due to their unique characteristics, DC vaccines have been developed for treating cancer and infectious diseases. In clinical immunotherapies and the regulation of the anti-tumor immune response, monocytes are isolated from patient peripheral blood mononuclear cells (PBMCs) and differentiated into DCs through incubation with granulocyte-macrophage colony-stimulating factor (GM-CSF) and other cytokines for about 5 days. Subsequently, DCs are loaded with inactivated autologous viruses, transfected with antigen-encoding RNA or DNA, or pulsed with viral antigens [114]. Recently, DC vaccines have been developed for various viruses, including HIV, HCV, SARS-CoV-2, HBV, IV, and LCMV.
In chronic viral infections, DC vaccines are employed for the prevention and treatment of HIV and HCV. In HIV prevention and treatment, research has demonstrated that an autologous DC HIV-1 vaccine, loaded with either autologous HIV-1-infected apoptotic cells or inactivated HIV-1, can induce polyfunctional HIV-1-specific CD4+ T-cell and gag-specific CD8+ effector T-cell responses [115]. Furthermore, DCs loaded with recombinant proteins have been shown to elicit HIV-specific lymphocyte proliferation responses, resulting in enhanced production of IL-2, TNF-α, and IFN-γ [116]. Additionally, DCs have the capability to present viral peptides as immunogens, including peptide pools [117], single epitopes [118], and HIV-1-antigen lipopeptides [119]. Notably, all of these peptide-presenting DCs were capable of stimulating HIV-1-antigen-specific T-cell responses. Currently, DCs have been effectively utilized to express HCV structural proteins, such as the HCV core protein and NS3 protein, using various systems like lentiviral systems [120], recombinant adenoviral vectors [121], adenovirus systems [122], and recombinant adeno-associated virus systems [123]. This approach has shown significant success in generating antigen-specific CD4+ and CD8+ T-cell responses [124]. Additionally, loading DCs with HCV antigens has been demonstrated to elicit robust humoral and cellular immune responses in mice [125,126]. In a phase I clinical trial involving six HLA-A2 patients, DCs pulsed with lipopeptides containing a CD4+ T-cell epitope and an HLA-A2-restricted CD8+ T-cell epitope, along with the lipid Pam2Cys, were capable of inducing specific CD8+ T-cell responses [127,128].
DC vaccines are also used in acute viral infections, including SARS-CoV-2 and IV. Several phase I-II trials (NCT04386252, NCT04690387, NCT05007496) have been conducted to evaluate the efficacy of DC vaccines in preventing COVID-19. Data from trials NCT04386252, NCT04690387, and NCT05007496 demonstrated increased levels of anti-RBD antibodies [129], while data from trial NCT05007496 show that subjects exhibited reactivity to the SARS-CoV-2 S protein. In another phase I/II trial (NCT04276896), the LV-SMENP-DC vaccine was developed by modifying DCs using an LV expressing the SARS-CoV-2 minigene SMENP and immune modulatory genes. The LV-SMENP-DC vaccine not only triggers the production of neutralizing antibodies but also elicits specific CD8+ T-cell responses [130,131]. China Celartics Biopharma is focusing on utilizing DCs engineered with NP to induce NP-specific CD8+ T-cells [132]. IV DC vaccines have the potential to induce HA-specific antibodies and T-cell responses [129]. Lentiviral vector-transduced DCs presenting lymphocytic choriomeningitis virus (LCMV) GP33-41 peptides can elicit a protective response to LCMV infection by enhancing CD8+ T-cell responses [133].
DC vaccines offer the advantage of individualized preparation tailored to the patient's specific condition, providing long-lasting immune effects and immune specificity, particularly in T-cell responses. However, in clinical practice, the conventional DC manufacturing method requires a significant amount of blood and is cost-prohibitive and time-consuming, and the product may not fully replicate the essential properties of naturally occurring dendritic cells. Additionally, the production of monocyte-derived dendritic cells may pose limitations in addressing highly pathogenic viruses, requiring specialized PC3 facilities and involving labor-intensive and complex operations, thereby restricting the application of DCs as preventive vaccines against viral infections.

Nucleic Acid Vaccines

Nucleic acid vaccines represent a promising strategy in the battle against viral infections, particularly COVID-19. They fall into two main categories: DNA vaccines and RNA vaccines. RNA vaccines are further categorized into linear mRNA, circular RNA (circRNA), and self-amplifying mRNA (saRNA) vaccines. These vaccines induce T- and B-cell responses by introducing foreign genes into recipients, prompting the production of antigenic proteins.
DNA vaccines utilize DNA plasmids as vectors to deliver immunogenic antigens, which are encoded in genes and must be electroporated into cells for effective delivery. This approach induces both humoral and cell-mediated immune responses [134,135]. Research on DNA-based COVID-19 vaccines has shown promising results in triggering T-cell-mediated immunity. DNA vaccines like COVID-eVax and GX-19N have demonstrated efficacy in eliciting SARS-CoV-2-specific T-cell responses [136,137]. DNA vaccines have also been successfully developed against SARS-CoV, with results indicating robust humoral and cellular immune responses in mice, macaques, and camels [138,139]. Additionally, the first DNA vaccine candidate against MERS-CoV, GLS-5300, entered clinical trials and was well tolerated without any major side effects [140,141]. Despite some safety concerns, DNA-based vaccines containing T-cell epitopes are showing great promise. DNA vaccines also present several advantages compared to traditional and mRNA vaccines, including rapid manufacturing, cost-effectiveness, and enhanced stability for transportation and storage. Nonetheless, a significant challenge of DNA vaccination lies in the limited immune responses observed in humans so far [134,142]. This is exacerbated by the fact that the effectiveness of a DNA vaccine in priming immune responses may only become apparent following the administration of a heterologous boost, which is suboptimal in situations necessitating a prompt immune response, such as during an outbreak [143].
RNA vaccines, on the other hand, offer the advantage of rapid development and flexibility. To date, mRNA therapeutics are the most advanced application for infectious diseases. mRNA vaccines, such as the COVID-19 vaccines developed by Pfizer-BioNTech and Moderna, do not require the cultivation of viruses or the preparation of specific proteins, enabling them to be designed and produced more rapidly [144-146]. mRNA vaccines have the ability to elicit strong CD4+ and CD8+ T-cell responses, providing lasting immunity against infections [147,148]. When comparing mRNA-1273, BNT162b2, Ad26.COV2.S, and NVX-CoV2373, the mRNA vaccines showed a higher memory CD4+ T-cell response and a similar memory CD8+ T-cell response compared to the viral vector-based immunization [149]. mRNA technologies have also been applied in the development of vaccines for IV [150-152], Zika virus [153,154], HIV [155], respiratory syncytial virus (RSV) [156], and EBOV [157]. In phase I clinical trials, the mRNA vaccine for RSV (mRNA-1777) elicited a robust humoral response and a CD4+ T-cell response to RSV F peptides, with no serious adverse events reported [158]. In addition, mRNA vaccines against EBOV elicited robust expression of IFN-γ and IL-2 by CD8+ and CD4+ T-cells [159]. CircRNAs, unlike linear mRNAs, are stable RNAs with a covalently closed ring structure that shields them from degradation. CircRNA vaccines have shown stronger and longer-lasting immunogenic responses compared to mRNA vaccines, inducing Th1-biased T-cell responses [160]. Moreover, genetically modified replicons derived from single-strand RNA viruses, known as saRNA vaccines, can prolong antigen expression and stimulate humoral and cellular immune responses [161]. For example, mice injected with an saRNA vaccine containing the stabilized SARS-CoV-2 S protein showed pronounced CD8+ and CD4+ T-cell responses even at low doses [162]. Another saRNA vaccine in mice, ZIP1642, encoding both S-RBD and N antigens, induced Th1-skewed cytokine responses along with specific T-cell responses [163]. Ongoing research in this field indicates that such vaccine approaches could be valuable in combating COVID-19 and potentially other viral infections in the future. However, RNA vaccines require storage at low temperatures and may present challenges for large-scale production and the establishment of mucosal immunity. Optimization of the nucleic acid sequence of the antigen is necessary to enhance antigen expression. Long-term safety and efficacy studies are essential in the coming years to safeguard the well-being of global populations.
Table 1 provides an overview of various vaccine platforms, describing their mechanisms of action, adjuvant requirements, induction of antibody and T-cell responses, and advantages and challenges.

Challenges in T-Cell-Based Vaccine Development

Identification and Selection of T-Cell Epitopes

T-cell epitopes form the foundation of T-cell-based vaccine development. Because T-cell responses are restricted by MHC molecules, T-cell epitopes vary across species and individuals. Thus, identifying specific T-cell epitopes is crucial for developing candidate viral T-cell vaccines. Current methods for identifying T-cell epitopes include bioinformatics predictions and peptide library screenings [13,164]. While prediction methods offer speed, the predicted epitopes may not always be authoritative T-cell epitopes. Conversely, peptide library screenings are more accurate but are time-consuming and labor-intensive.
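To make the peptide-library route concrete, the short sketch below enumerates overlapping candidate peptides from a protein sequence, the usual starting point for building a screening library; the example sequence, the 9-mer length, and the one-residue step are illustrative assumptions, not parameters taken from the studies cited here.

```python
def tile_peptides(sequence: str, length: int = 9, overlap: int = 8) -> list:
    """Enumerate overlapping candidate T-cell epitopes by sliding a
    window over a protein sequence (9-mers suit typical CD8+ epitopes)."""
    step = length - overlap
    return [sequence[i:i + length]
            for i in range(0, len(sequence) - length + 1, step)]

# Illustrative protein fragment (not a real, validated antigen sequence).
antigen = "MFVFLVLLPLVSSQCVNLTTRTQLPPAYTNSF"
library = tile_peptides(antigen)
print(f"{len(library)} peptides, e.g. {library[:3]}")
```

In practice, each peptide would then be tested for MHC binding or T-cell reactivity, which is exactly the accurate but laborious step the text refers to.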
The functionality of T-cells specific to different epitopes of a virus can be either protective or pathogenic. For instance, CD8+ T-cells targeting PA 224-233 of IV can impede virus clearance, whereas NP 366-374-specific CD8+ T-cells can expedite IV elimination [165]. Once the T-cell epitopes are known, determining whether they are protective or pathogenic is vital for accurate selection. However, validating T-cell functionality and understanding the immunological mechanisms involved can be challenging, time-consuming, and laborious.
Time Constraint: "Swift Deployment, Lasting Immunity"
When discussing the timeframe for vaccine development and the duration of vaccine protection, the phrase "swift deployment, lasting immunity" succinctly encapsulates the concept. "Swift deployment" denotes the ability to rapidly develop T-cell epitope vaccines in response to viral pandemics. This involves promptly identifying epitopes, selecting those with protective properties, efficiently choosing a suitable delivery platform, and swiftly verifying their protective efficacy. Conversely, "lasting immunity" emphasizes the necessity for T-cells to persist long-term post-vaccination, providing sustained protection to the host. Achieving both rapid development and long-lasting protection with T-cell-based vaccines poses significant challenges that require innovative solutions.
Choosing the Delivery System
As outlined earlier, each vaccine platform presents unique strengths and weaknesses. When crafting T-cell-based vaccines, the selection or development of an efficient delivery system for expressing or delivering epitopes is paramount. Additionally, integrating adjuvants to bolster the cellular response should be carefully considered in tandem with the chosen platform.
Achieving Immune Response Balance
A successful T-cell-based vaccine must strike a delicate balance in eliciting immune responses. Excessive T-cell activation can trigger inflammation, underscoring the importance of modulating the immune reaction. Research has demonstrated the essential role of adaptive immune cells in eliminating SARS-CoV-2 infection; however, this process can also result in increased inflammation and associated pathology [166]. Immunization with vaccines that selectively induced CD4+ T-cell responses led to severe inflammation and mortality after challenge with a persistent strain of lymphocytic choriomeningitis virus (LCMV) [167]. Balancing T-cell responses is pivotal in harnessing protective immunity without inciting an overly aggressive reaction. Managing this equilibrium poses a significant challenge in T-cell vaccine development.
Strain-Specificity and Cross-Protection Considerations
It is imperative that T-cell epitopes align with the targeted pathogen, ideally allowing T-cells to recognize viruses within the same family or across other families. Conducting a comparative analysis of T-cell epitopes from diverse strains within the target virus genus aids in selecting epitopes with minimal variability. Particularly for RNA viruses prone to mutation, prioritizing T-cell epitopes located on conserved proteins helps mitigate epitope escape. Developing an optimal T-cell epitope vaccine capable of conferring protection against infection while offering cross-protective capabilities poses a significant challenge.
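A minimal way to operationalize this comparison is to score per-position conservation across aligned strain sequences and retain only windows whose least-conserved position clears a cutoff; the toy gap-free alignment, 9-mer window, and 0.9 cutoff below are illustrative assumptions.

```python
from collections import Counter

def conservation(column: str) -> float:
    """Fraction of sequences sharing the most common residue at one
    alignment column (a simple conservation score between 0 and 1)."""
    return Counter(column).most_common(1)[0][1] / len(column)

def conserved_windows(aligned, length=9, cutoff=0.9):
    """Return (start, peptide) pairs whose least-conserved position still
    meets the cutoff -- candidate regions for cross-protective epitopes."""
    scores = [conservation("".join(col)) for col in zip(*aligned)]
    return [(i, aligned[0][i:i + length])
            for i in range(len(scores) - length + 1)
            if min(scores[i:i + length]) >= cutoff]

# Toy gap-free alignment of three hypothetical strains (variable site at
# position 1 only); windows overlapping that site are filtered out.
strains = ["MKAILVVLLYTFATANARTQ",
           "MRAILVVLLYTFATANARTQ",
           "MKAILVVLLYTFATANARTQ"]
print(conserved_windows(strains))
```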
Selection of Immunization Routes
For viruses that primarily infect via mucosal routes, such as respiratory viruses, it is crucial to consider respiratory mucosal delivery methods. Vaccination through the respiratory tract may generate trained innate immunity and optimal B- and T-cell immunity that is essential in defense against infection [168-171]. The efficacy of mucosal immunization in eliciting mucosal immune responses relies on several factors, including the vaccine platform itself. For instance, mRNA vaccine platforms necessitate overcoming the substantial technological hurdle of mucosa-targeting lipid nanoparticle (LNP) delivery systems.
Conclusions
T-cells are pivotal in orchestrating regulatory and protective functions within the host's antiviral immune responses. The integration of T-cell epitopes into vaccine development for infectious diseases signifies a progressive shift in vaccine design. Nevertheless, the development of T-cell-based vaccines or those integrating T-cell epitopes faces several challenges. Despite these obstacles, it is expected that scientists will progressively overcome these hurdles, ultimately paving the way for the creation of potent, enduring, and broad-spectrum T-cell-based vaccines.
Figure 1. Diagram of Vaccine-Induced Immune Response. This diagram provides an overview of various vaccine platforms and illustrates the host immune responses they elicit, emphasizing adaptive immune responses. Vaccination: in summary, dendritic cells (DCs) capture and present peptides to CD4+ and CD8+ T-cells via surface MHC II and MHC I molecules, thereby initiating the activation of naive CD4+ and CD8+ T-cells. Following activation, T-cells undergo proliferation and differentiate into effector T-cells. While the majority of effector T-cells undergo apoptosis, a small fraction differentiates into memory T-cells. With antigen stimulation and the assistance of CD4+ T-cells, B-cells become activated, proliferate, and further differentiate into plasma cells and memory B-cells. Infection post-vaccination: pre-existing antibodies neutralize the viruses upon infection.
Table 1. Detailed overview of diverse vaccine platforms, including their mechanisms of action, requirement for adjuvants, stimulation of antibody and T-cell responses, as well as their benefits and obstacles.
Hybridization of Thiol-Functionalized Poly(phenylacetylene) with Cadmium Sulfide Nanorods: Improved Miscibility and Enhanced Photoconductivity
Molecules of a thiol-functionalized phenylacetylene derivative were assembled on the CdS nanorod surface and copolymerized with phenylacetylene, affording an inorganic semiconductor-conjugated polymer hybrid with excellent solubility and high photoconductivity.
Molecular weights (Mw and Mn) and polydispersity indexes (Mw/Mn) of the polymers were estimated in THF by a Waters Associates gel permeation chromatography (GPC) system. A set of monodisperse polystyrene standards covering the molecular weight range of 10^3-10^7 was used for the molecular weight calibration.
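For context, Mn, Mw, and the polydispersity index follow directly from a chain-length distribution; the sketch below computes them for an invented discrete distribution, unrelated to the samples reported here.

```python
def mw_averages(pairs):
    """Number-average Mn = sum(n*M)/sum(n) and weight-average
    Mw = sum(n*M^2)/sum(n*M) from (chain count, molar mass) pairs."""
    n_total = sum(n for n, _ in pairs)
    w_total = sum(n * m for n, m in pairs)
    mn = w_total / n_total
    mw = sum(n * m * m for n, m in pairs) / w_total
    return mn, mw, mw / mn

# Invented distribution: (number of chains, molar mass in g/mol).
mn, mw, pdi = mw_averages([(100, 20_000), (300, 70_000), (100, 150_000)])
print(f"Mn = {mn:.0f}, Mw = {mw:.0f}, Mw/Mn = {pdi:.2f}")
```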
Thermogravimetric analysis (TGA) was carried out on a Pyris 6 thermogravimetric analyzer (Perkin Elmer).A sample of ~3 mg was heated at the rate of 10 ℃/min under a constant flow of dry nitrogen.
Information about the shape and dispersion of the CdS nanorods and the hybrid was obtained from a JSM-5510 scanning electron microscope (SEM), a JEM-200CX transmission electron microscope (TEM), and a Philips CM200 high-resolution TEM (HRTEM) coupled with an energy-dispersive X-ray (EDX) analyzer.
Monomer Synthesis. The thiol-containing monomer (M1) was prepared according to the synthetic route shown in Scheme S1. Detailed experimental procedures are given below. Preparation of 5. Into a 500 mL round-bottom flask equipped with a reflux condenser were added 11-bromoundecanoic acid (13.3 g, 50 mmol) and 200 mL of methanol.
Scheme S1
With gentle stirring, 5 mL of concentrated sulfuric acid was added dropwise into the flask. The reaction mixture was refluxed for 2 h. After cooling the contents to room temperature, calcium carbonate was added gradually to neutralize the excess acid. The solvent was removed with a rotary evaporator. The residue in the flask was redissolved in 200 mL of chloroform and washed with deionized water. The organic layer was dried over 5 g of magnesium sulfate. After filtration of the solids and removal of the solvent, the crude product was purified on a silica gel chromatography column using chloroform as eluent. Evaporation of the solvent afforded 11.9 g of 5 as a colorless liquid (yield: 85.2%). For the preparation of 4, 5.7 g (34.3 mmol) of potassium iodide was dissolved in 100 mL of an acetone/DMSO mixture (9:1 by volume) with gentle stirring. To the mixture was added 6.3 g (22.6 mmol) of 5, and the contents were then refluxed for 24 h. The solids were removed by filtration, and the filtrate was evaporated under reduced pressure. The crude product was dissolved in 50 mL of DCM, and the resultant solution was washed with 50 mL of deionized water. The aqueous phase was extracted twice with 50 mL of DCM.
The combined organic layers were dried over 5 g of magnesium sulfate. The crude product was condensed and purified on a silica gel column using chloroform as eluent. Evaporation of the solvent gave 7.5 g of a pale yellow solid of 4 (yield: 79.3%). The preparation of 3 was carried out under nitrogen. After all the catalysts were dissolved, 1.7 mL (12 mmol) of trimethylsilylacetylene was injected into the flask, and the mixture was stirred at room temperature for 12 h. The solids formed during the reaction were removed by filtration and washed with TEA. The filtrate was then evaporated with a rotary evaporator. The residue in the flask was redissolved in 100 mL of chloroform and washed with 50 mL of hydrochloric acid (1 M) and then 50 mL of deionized water. The crude product was condensed and purified on a silica gel column using chloroform as eluent. Removal of the solvent gave 3.6 g of a light yellow solid of 3 (yield: 92.7%). IR (KBr), ν (cm−1): 2158 (w, C≡C), 1733 (s, C=O). 1H NMR (300 MHz, CDCl3), δ (TMS, ppm): 7.4 (m, 2H, aromatic protons meta to -O-), 3.9 (m, 2H,
Cadmium chloride and sulfur powders were stoichiometrically added into a mixture of deionized water and diethylamine under stirring. The contents were then transferred to a Teflon-lined stainless steel autoclave, which was sealed and put into an oven. The solution was neither shaken nor stirred during the heating process. The autoclave was gradually heated to 120-200 °C, maintained at temperature for 24 h, and then cooled to room temperature. The product was collected by precipitation and washed with CS2, ethanol, and deionized water. A yellow powdery product was obtained after drying in a vacuum oven and treatment at a temperature of ~650 °C for 1 h.
Inset: a typical HRTEM image of a single nanorod capped with an assembly of M1 molecules.
Assemblies of M1 on CdS Nanorods.
The absorption spectrum of C1 is shown as Figure S3. The characteristic feature of the CdS crystal can be observed at ~485 nm, confirming the existence of the CdS nanorods in the solution. The TEM images of C1 reveal that the CdS nanorods retain their original shapes (Figure S4). A typical HRTEM image of C1, given in the inset of Figure S3, shows that the surface of the assembly is not as smooth as those of the original nanorods. A rational explanation is that the clean surfaces of the original nanorods have been covered with M1 molecules. EDX analysis data are listed in Table S1, which clearly show that C, N, Cd, and S elements co-exist in the assemblies.
a Recorded on a Hitachi H-900 TEM microscope, using an accelerating voltage of 300 kV.
Copolymerization of C1 with PA. The copolymerization reaction was carried out under nitrogen using the Schlenk technique in a vacuum-line system. Into a 20 mL Schlenk tube, 10 mg of purified C1 was added. The tube was evacuated under vacuum and then flushed with dry nitrogen three times, followed by injecting 50 mg of PA into the tube. DCM (1.5 mL) was then injected into the tube to dissolve the monomer. The catalyst was prepared in another tube by dissolving 2.5 mg of [Rh(cod)Cl]2 in 0.5 mL of DCM with one drop of TEA, which was transferred to the monomer solution using a hypodermic syringe. After stirring at room temperature for 24 h, the polymerization mixture was diluted with 5 mL of DCM and added dropwise to 500 mL of methanol under stirring. The precipitate was collected by filtration. The solid product was washed with methanol and dried under vacuum at room temperature to a constant weight.
The copolymerization of C1 and PA gave rise to the expected PPA-CdS hybrid (H1). Because the inorganic nanorods cannot penetrate through the columns in the GPC instrument, we removed the CdS nanorods in H1 by extracting it with concentrated hydrochloric acid. After removal of the CdS component and drying in a vacuum oven, the yield of the polymer was calculated to be ~71%. The Mw and Mw/Mn values of the sample were estimated to be 72,600 and 2.9, respectively. The 1H NMR spectrum of H1 is shown in Figure S5. The appearance of the resonance peak for =C−H at δ ~ 5.8 and the upfield shift of the resonance peaks of the aromatic protons confirm the transformation of the acetylenic triple bonds to vinyl double bonds by the acetylene polymerization process. The elemental analysis data for M1, PPA, and H1 are summarized in Table S2. The nitrogen in H1 is entirely from M1; thus, the relative content of M1 in H1 can be calculated by equation 1 to be 7.78 wt%:

W(M1/H1) = W(N/H1) / W(N/M1) × 100%   (1)

where W(M1/H1) is the weight percentage of M1 in H1, and W(N/H1) and W(N/M1) stand for the weight percentages of the nitrogen element in H1 and M1, respectively. The weight percentages derived from the elemental analysis data are given in Table S2.
Rationally, the M1 component also contributes the same percentage to the total carbon content in H1.
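The composition bookkeeping that follows (equation 2 and the final subtraction, whose results are quoted below) can be reproduced in a few lines; the carbon content of pure PPA is computed here from its C8H6 repeat unit, an assumption consistent with the reported values.

```python
# Inputs quoted in the text: M1 content from equation 1 and total carbon
# in H1 from elemental analysis. W_C_PA is the theoretical carbon weight
# fraction of poly(phenylacetylene), (C8H6)n -- an assumption here.
W_M1_H1 = 7.78                                        # wt % M1 in H1 (eq 1)
W_C_H1 = 84.88                                        # wt % C in H1
W_C_PA = 100 * 8 * 12.011 / (8 * 12.011 + 6 * 1.008)  # ~94.1 wt %

carbon_from_PA = (1 - W_M1_H1 / 100) * W_C_H1   # ~78.28 wt %
W_PA_H1 = 100 * carbon_from_PA / W_C_PA         # equation 2, ~83.2 wt %
W_CdS_H1 = 100 - W_M1_H1 - W_PA_H1              # by difference, ~9.0 wt %
print(f"PA: {W_PA_H1:.2f} wt %, CdS: {W_CdS_H1:.2f} wt %")
```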
Based on this analysis, the PA component contributes 92.22 wt% of the carbon in H1, i.e., 78.28 wt% of the total 84.88 wt% carbon in H1. Therefore, the relative content of PA in H1 can be calculated by equation 2 to be 83.21 wt%:

W(PA/H1) = (1 − W(M1/H1)) × W(C/H1) / W(C/PA) × 100%   (2)

where W(PA/H1) is the weight percentage of the PA component in H1, and W(C/H1) and W(C/PA) stand for the weight percentages of the carbon element in H1 and PA, respectively. Finally, the relative content of CdS nanorods in H1 can be calculated to be 9.01 wt% by subtracting the weight percentages of the M1 and PA components from 100%.

Preparation of Blends. Blend 1 (B1) was prepared by following the copolymerization procedure described above, except that C1 was replaced by pure CdS nanorods. In another case, 10 mg of CdS nanorods and 80 mg of preformed PPA were dissolved in 4 mL of DCM. The mixture was stirred for 24 h and the solvent was evaporated with a rotary evaporator. Blend 2 (B2) was obtained after drying in a vacuum oven at 40 °C overnight.

Photoconductivity Measurement. Photoconductivity of the obtained double-layered photoreceptor was measured using a standard GDT-II photoinduced discharge instrument, which constitutes the basis of the xerographic process in the photoimaging system. The surface of the photoreceptor was first negatively corona charged to a surface potential V0. After a dark discharge for 3 s, its surface potential dropped to Vi. The photoreceptor was then exposed to light with an intensity I (11 µW/mm2 in this work). The light source was a halogen lamp (5 W, 24 V). Electron-hole pairs were immediately generated in the CGL upon photon absorption and injected into the CTL. The photogenerated carriers migrated towards the negatively charged surface, following the applied field through the CTL. The surface charges were thus neutralized, with a low potential Vr remaining. From the discharge experiment we can obtain the half-discharge exposure energy E1/2, which equals t1/2 × I, where t1/2 is the time for the potential to fall from its initial value Vi to half that value under exposure to light. Photosensitivity S is defined as the reciprocal of E1/2, or S = 1/E1/2.
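The discharge arithmetic is simple enough to spell out; since the measured half-discharge time is not quoted in this excerpt, the t1/2 below is an invented placeholder used only to illustrate the units.

```python
I = 11.0        # light intensity, uW/mm^2 (value stated in the text)
t_half = 0.8    # s; invented illustrative half-discharge time

E_half = t_half * I   # half-discharge exposure energy E1/2, uJ/mm^2
S = 1.0 / E_half      # photosensitivity S = 1/E1/2, mm^2/uJ
print(f"E1/2 = {E_half:.1f} uJ/mm^2, S = {S:.3f} mm^2/uJ")
```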
Figure S1. 1H NMR spectrum of thiol-containing monomer M1 in chloroform-d.
Figure S5. 1H NMR spectra of H1 in chloroform-d.
5xFAD mice do not have myenteric amyloidosis, dysregulation of neuromuscular transmission or gastrointestinal dysmotility
Abstract Background Alterations in gastrointestinal (GI) function and the gut‐brain axis are associated with progression and pathology of Alzheimer's Disease (AD). Studies in AD animal models show that changes in the gut microbiome and inflammatory markers can contribute to AD development in the central nervous system (CNS). Amyloid‐beta (Aβ) accumulation is a major AD pathology causing synaptic dysfunction and neuronal death. Current knowledge of the pathophysiology of AD in enteric neurons is limited, and whether Aβ accumulation directly disrupts enteric neuron function is unknown. Methods In 6‐month‐old 5xFAD (transgenic AD) and wildtype (WT) male and female mice, GI function was assessed by colonic transit in vivo; propulsive motility and GI smooth muscle contractions ex vivo; electrochemical detection of enteric nitric oxide release in vitro, and changes in myenteric neuromuscular transmission using smooth muscle intracellular recordings. Expression of Aβ in the brain and colonic myenteric plexus in these mice was determined by immunohistochemistry staining and ELISA assay. Key Results At 6 months, 5xFAD mice did not show significant changes in GI motility or synaptic neurotransmission in the small intestine or colon. 5xFAD mice, but not WT mice, showed abundant Aβ accumulation in the brain. Aβ accumulation was undetectable in the colonic myenteric plexus of 5xFAD mice. Conclusions 5xFAD AD mice are not a robust model to study amyloidosis in the gut as these mice do not mimic myenteric neuronal dysfunction in AD patients with GI dysmotility. An AD animal model with enteric amyloidosis is required for further study.
Patients with AD exhibit peripheral nervous system symptoms, of which gut dysbiosis and constipation stand out. 2-4 Several hypotheses have proposed mechanisms to explain the gut-brain connection in AD, with a focus on gut microbiome composition, disturbances in gut permeability, and systemic inflammation. 2,5-7 The gut-brain connection has been linked to several neurodegenerative diseases of the central nervous system (CNS). 8 For example, the contribution of the gut-brain connection in Parkinson's disease has been well documented, where α-synuclein neuronal aggregation is associated with GI dysmotility and dopaminergic neuronal degeneration in the brain. 9-11 Unlike Parkinson's disease, studies of GI dysmotility in AD are limited.
It is well established that the activity of amyloid precursor protein (APP) and Aβ accumulation are highly associated with AD pathology in the CNS, causing cholinergic neurodegeneration of the basal forebrain. 12 APP, from which Aβ is derived, is also expressed in the enteric nervous system (ENS). 13 The presence of Aβ accumulation in the gut and stool has been reported in patients with neuropathologically confirmed AD. 14-18 The ENS controls GI motility, secretion, and blood flow through coordinated interactions between networks of interneurons, motor neurons, and sensory neurons that populate the GI tract from the esophagus to the anus. 19 Disruption in neurotransmission, particularly in the ENS, could contribute to GI dysfunction in AD. In some animal models of AD, Aβ accumulation in the ENS causes increased inflammation, GI dysmotility, and loss of enteric neurons, 15,20-23 including nitrergic neurons, 24 prior to the development of severe cognitive deficits. However, the fundamental effects of disruptions in enteric neuromuscular transmission, specifically in myenteric neurons, on AD-associated GI dysmotility are still unknown.
There are over 200 transgenic animal models that have been developed to mimic the progressive pathology of familial AD (https://www.alzforum.org/research-models/alzheimers-disease). The 5xFAD mouse is a widely used Aβ pathogenic model in which Aβ accumulation in the brain occurs as early as 2 months of age. In addition, neuronal loss in the CNS occurs at 3 months, and cognitive impairment is observed at 5 months of age. 25,26 The literature reports that 5xFAD mice also present minor GI dysmotility, 22,27 changes in enteric neuronal structure, 27 altered colonic gene expression and calcium homeostasis, increased enteric neuronal viability, 22 and GI dysbiosis. 28 However, there is no study directly demonstrating that Aβ accumulation occurs in enteric ganglia in the myenteric plexus of 5xFAD mice, or that Aβ accumulation disrupts myenteric neuromuscular transmission causing GI dysmotility. Because myenteric neurons control GI motility, we determined the expression of Aβ in the brain and colonic myenteric plexus from male and female 5xFAD and wild-type (WT) mice by immunostaining and an ELISA assay at 26 weeks of age. We also assessed colonic transit in vivo; propulsive motility and GI smooth muscle contractions ex vivo; electrochemical detection of colonic nitric oxide (NO) release; and changes in myenteric neuromuscular transmission using smooth muscle intracellular electrophysiological recordings. Although cholinergic neurodegeneration of the basal forebrain is a major contributor to AD brain pathology, 12 in the current study we determined both cholinergic and nitrergic neuromuscular transmission, since the loss of enteric nitrergic neurons has also been reported in animal models of AD. 24
2 | METHODS AND MATERIALS
| Animals
Male and female 5xFAD mice (
| Measurements of body weight, food intake, and fecal pellet output
To assess GI motility, we measured body weight, food intake and fecal pellet output every 4 weeks, starting at 9 weeks up to 25 weeks in all mice. After measuring body weight, mice were separated from their original cages and individually housed in a cage with access to 40 g of food and free access to water for 72 h. Total food (g) consumption was measured within 72 h.
For fecal pellet output measurements, each mouse was separated from their home cage and individually housed in cages without access to food and water for 2 h, on 3 consecutive days. Fecal pellets were assessed for number and length (mm). Wet pellet weight was assessed immediately after pellet collection, whereas dry weights were measured after 24 h desiccation at room temperature. Fecal water content was calculated by subtracting dry weight from wet weight.
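As a minimal bookkeeping sketch of the water-content calculation (all weights invented):

```python
# Paired wet/dry pellet weights (g) for one mouse's 2 h collection.
wet = [0.041, 0.038, 0.044]
dry = [0.012, 0.011, 0.013]   # after 24 h desiccation

water_mg = [(w - d) * 1000 for w, d in zip(wet, dry)]
pct = [100 * (w - d) / w for w, d in zip(wet, dry)]
print(f"water per pellet: {sum(water_mg)/len(water_mg):.1f} mg "
      f"({sum(pct)/len(pct):.0f}% of wet weight)")
```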
Key points
• Transgenic mouse models of AD carrying mutated human genes are designed to mimic AD-associated pathological changes in mice.
• Amyloidosis is one of the major causes of neuronal dysfunction and degeneration in AD.
• The 5xFAD mouse model is one of the most popular AD mouse models for studying amyloidosis-associated pathological changes in the central nervous system, but this model lacks amyloidosis in the enteric nervous system.
| Animal euthanasia and tissue collection
At 26 weeks of age, all mice were euthanized by 4% isoflurane anesthesia and cervical dislocation. The brain, small intestine, and colon were collected. Left cerebral hemispheres and a small segment of colon were immediately fixed at 4°C with Zamboni's fixative (4% formaldehyde with 5% picric acid in 0.1 M sodium phosphate buffer, pH 7.2) for 48 h. Brain tissues were immersed into 15% and 35% of sucrose in 0.1 M sodium phosphate buffer for 2-3 days, then embedded in O.C.T. and stored at −80°C for further cryostat sectioning and immunostaining. Right cerebral hemispheres and a small segment of small intestine were collected and freshly stored at −80°C for further ELISA assay.
| Measurement of colonic migrating motor complex (CMMC)
The frequency and propagation velocity of CMMCs are important in the evaluation of colonic motility. After euthanasia, the entire colon
| Isometric tension recording in isolated organ bath
Longitudinal smooth muscle contractility was determined pharmacologically and electrophysiologically using isometric tension recording in an organ bath. Following euthanasia, a 1.5 cm length of duodenum, ileum, proximal and distal colon was mounted onto a platinum foil electrode on one end and a stationary isometric force transducer on the other end with silk ligatures. The assembly was placed into a 20 ml organ bath containing oxygenated Krebs solution at 37°C, and a resting tension of 1 g was applied to each preparation. To determine smooth muscle contractility for excitatory neurotransmission, bethanechol (0.1-30 μmol L−1; C5256, Sigma-Aldrich), a muscarinic receptor agonist, was cumulatively added into each organ bath at 2 min intervals to produce myogenic contractions. Tetrodotoxin (TTX), a voltage-gated Na+ channel inhibitor, was used to block neuromuscular transmission to reveal myogenic responses.
All tissues were dried using Kimwipes, and the tissue weight between ligatures was measured at the end of the experiment. All drug and electrophysiological responses were calculated by subtracting the baseline from the peak response and converted into mg (contraction force)/mg (tissue weight).
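A sketch of that normalization, with invented trace values, makes the bookkeeping explicit:

```python
def normalized_response(peak_mg, baseline_mg, tissue_mg):
    """(Peak - baseline) contraction force per mg of tissue, matching
    the mg/mg normalization described in the text."""
    return (peak_mg - baseline_mg) / tissue_mg

# Invented example: 620 mg peak force, 180 mg baseline, 14 mg tissue.
print(f"{normalized_response(620.0, 180.0, 14.0):.1f} mg/mg")
```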
| Intracellular IJP recordings from circular smooth muscle cells
Myenteric neuromuscular transmission modulates GI motility.
Colonic myenteric neuromuscular transmission was evaluated electro-physiologically using sharp microelectrode intracellular recording. Following euthanasia, a 1 cm colon segment was isolated and cut along the mesenteric border, pinned flat on the dish with the mucosa facing upward, and the mucosal and submucosal layers were removed. A 1 cm 2 exposed circular muscle prep was transferred to a
| Continuous amperometry
To determine if enteric nitrergic neurotransmission is altered in 5xFAD mice, continuous amperometry was used to directly detect nitric oxide (NO) release from myenteric ganglia. The tissue preparation was similar to that used for IJP recordings, as described above.
A modified boron-doped diamond (BDD) microelectrode was used to make these measurements; its fabrication is described in detail elsewhere. 29,30 Briefly, the BDD microelectrode was decorated with a uniform coating of Pt nanoparticles to enhance sensitivity and reduce the working potential of NO oxidation. Then, the Pt-modified BDD microelectrode was dip-coated in a thin film of Nafion® to electrostatically repel negatively charged interferents such as the nitrite anion, an oxidation product of NO. The modified microelectrode has been used to monitor NO release from the mouse colon in vitro. For amperometric recordings, the modified BDD microelectrode was carefully positioned under a low-power microscope near a myenteric ganglion of a midcolon circular muscle preparation using a micromanipulator. The microelectrode was polarized at 0.8 V vs Ag/AgCl, a potential at which the rate of electrochemical oxidation of NO is mass transport limited and directly proportional to the number of moles of NO being oxidized. Potential was controlled using a potentiostat. Nω-nitro-L-arginine (N5501, Sigma-Aldrich) and TTX (0.5 μmol L−1) were used to verify neurogenic NO release. The modified BDD microelectrode was potential cycled in 0.5 mol L−1 sulfuric acid prior to each day's measurements to activate and clean the electrode surface, and the Nafion® film was renewed every three measurements. Data were filtered through a 10 Hz low-pass filter before peak analysis using Clampfit 10.7 software (Molecular Devices).
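Because the oxidation current is mass-transport limited and proportional to the NO flux, the charge under a release transient can be converted to moles via Faraday's law; the sketch below uses an invented transient, and the electron count per NO molecule is left as an explicit assumption since it depends on the oxidation pathway.

```python
import numpy as np

F = 96485.0  # Faraday constant, C/mol

def moles_from_transient(current_nA, dt_s, n_electrons):
    """Integrate an amperometric transient (rectangle rule) and convert
    the charge to moles of NO oxidized: N = Q / (n_e * F)."""
    charge_C = float(np.sum(np.asarray(current_nA)) * 1e-9 * dt_s)
    return charge_C / (n_electrons * F)

# Invented 2 s triangular transient peaking at 0.5 nA, sampled at 100 Hz.
t = np.arange(0.0, 2.0, 0.01)
i_nA = 0.5 * (1.0 - np.abs(t - 1.0))
print(f"{moles_from_transient(i_nA, 0.01, n_electrons=3):.2e} mol NO")
```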
| Immunostaining for Aβ expression
The expression of Aβ was determined in whole brain, colonic circular muscle myenteric plexus (CMMP) in whole tissue preparation, and in coronal and transverse sections of the colon by immunostaining. Fixed brain tissues were sectioned at 10 μm thickness using a cryostat. For CMMP preparations, a 1 cm 2 segment was cut along the mesenteric border, pinned flat on the petri-dish with the mucosa facing upward, and the mucosal and submucosal layers were removed.
The prep was fixed overnight at 4°C with Zamboni's fixative (4% formaldehyde with 5% picric acid in 0.1 M sodium phosphate buffer, pH 7.2). The fixative was washed with 0.1 M phosphate buffer solution (84 mM Na 2 HPO 4 , 18 mM NaH 2 PO 4 , pH 7.2) and the tissue was flipped over, and the serosa and longitudinal muscle layer were dissected using fine forceps. Coronal and transverse sections of fixed segment of colon were embedded with paraffin, performed by the histology core at Michigan State University. The colonic sections were cut at 5 μm thickness and dewaxed before immune staining.
All preps were incubated overnight at 4°C with primary antibodies followed by 1 h incubation at room temperature with secondary antibodies (Table S1, antibody information). All preps were examined using a Nikon C2+ upright confocal laser scanning microscope (Nikon Instruments, Inc,). Fluorophores were excited using 488 nm and 594 nm lasers and their spectra were captured using Nikon NIS-Elements advanced research software version 4.0. Identical photomultiplier settings were used for image acquisition from all samples.
Images for publication were prepared using Adobe Photoshop CS5.
| ELISA assay for Aβ42 expression in brain, ileum, and colon
Expression of Aβ42 (insoluble Aβ isoform) was evaluated in whole brain, ileum and colon from 5xFAD and WT mice by ELISA assay.
| Statistical analysis
In our studies, power analysis was conducted assuming a 95% confidence level (p < 0.05), a standard deviation of 20% of the mean value, a difference in means between groups of less than or equal to 20% of the mean, and statistical power > 80%. In most of our studies, a sample size of 6 animals/group was included.
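The stated design inputs correspond to a standardized effect size (Cohen's d) of 0.20/0.20 = 1.0. A minimal sketch of such a power calculation with Python's statsmodels is shown below; the number it returns depends entirely on these stylized inputs, and the actual group sizes used are as stated above.

```python
from statsmodels.stats.power import TTestIndPower

# Design inputs from the text: SD = 20% of the mean and a detectable
# difference of 20% of the mean, i.e. Cohen's d = 0.20 / 0.20 = 1.0,
# with a two-sided alpha of 0.05 and target power of 0.80.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=1.0, alpha=0.05, power=0.80)
print(f"required n per group under these inputs: {n_per_group:.1f}")
```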
Data are reported as mean ± SEM. A two-way ANOVA followed by Bonferroni's post hoc test was used to compare changes in body weight and fecal pellet output across ages, the concentration and frequency responses in tissue isometric force recordings, and the frequency responses of amperometric NO detection between groups (5xFAD vs WT). Unpaired Student's t-tests were used to compare two groups, such as CMMC and IJP measurements in 5xFAD vs WT mice. Data were graphed and analyzed using GraphPad Prism 6.0 software. A p-value of <0.05 was considered statistically significant.
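As a sketch of the genotype × week comparison, the snippet below runs a two-way ANOVA on long-format data using Python's statsmodels in place of GraphPad Prism; the synthetic data frame is purely illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Synthetic long-format body-weight table: 6 animals/group, Weeks 9-25.
rng = np.random.default_rng(0)
rows = [
    {"genotype": g, "week": w,
     "weight": 20 + 0.3 * w - (2 if g == "5xFAD" else 0) + rng.normal(0, 1)}
    for g in ("WT", "5xFAD") for w in range(9, 26, 2) for _ in range(6)
]
df = pd.DataFrame(rows)

# Two-way ANOVA: main effects of genotype and week plus their interaction.
model = ols("weight ~ C(genotype) * C(week)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```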
| 5xFAD mice did not show significant alterations in GI motility following maturation
At the beginning of this study (~9 weeks old), male 5xFAD mice had a lower body weight compared with age-matched WT mice, but this difference was not observed in female 5xFAD vs WT mice (Figure 1A). The lower body weight persisted in male 5xFAD mice until 25 weeks of age (Figure 1A). The lower body weight in male 5xFAD mice was not associated with lower food intake, since male 5xFAD mice had food intake similar to WT mice (Figure 1B). Neither male nor female 5xFAD mice showed a significant decline in body weight or food intake up to 25 weeks of age.
Fecal pellet number, dry weight, fecal water content, and pellet length for all mice are shown in Figure 1C-F. In both male and female 5xFAD mice, we observed a transient increase in fecal pellet number compared with WT mice early in the age-progression study (Figure 1C). Overall, compared with WT mice, 5xFAD mice did not show significant changes in fecal pellet dry weight (Figure 1D), water content (Figure 1E), or pellet length (Figure 1F) during the age-progression study.
| 5xFAD mice did not show significant changes in longitudinal smooth muscle contractility in response to excitatory neuromuscular transmission ex vivo
Myogenic contractions in the duodenum, ileum, and proximal and distal colon were induced using the muscarinic cholinergic agonist bethanechol (0.1-30 μmol L−1, Figure 2). Overall, there were no significant changes in concentration-dependent post-junctional cholinergic muscle reactivity in 5xFAD mice across the four regions of the GI tract (Figure 2A-D). Only female 5xFAD mice showed increased colonic reactivity, and only at the highest bethanechol concentrations, compared with WT mice (Figure 2D).
Next, we measured muscle contractions induced by electrical stimulation at frequencies of 0.5, 1, 3, 5, and 10 Hz (Figure 3). There were no significant differences in the frequency-response curves across the four regions of the GI tract between WT and 5xFAD mice (Figure 3). TTX (0.3 μmol L−1) was used to block neurogenic neuromuscular transmission evoked by nerve stimulation. After TTX treatment, the frequency-response curves for nerve stimulation were significantly reduced in all tissues, with no significant differences between WT and 5xFAD male or female mice. Overall, there were no changes in excitatory neuromuscular transmission in 5xFAD mice.
FIGURE 1 Measurements of body weight (A), food intake (B), fecal pellet number (C), fecal dry weight (D), fecal water content (E), and pellet length (F) in WT and 5xFAD male and female mice from Weeks 9-25. 5xFAD male mice showed persistently lower body weight than WT mice. All 5xFAD mice showed an increased pellet number compared with WT mice in Week 13 only. Data are presented as mean ± SE, *p < 0.05, 5xFAD vs WT.
Using 5xFAD mice, we also explored whether there was dysfunction at the colonic neuromuscular junction. IJPs were recorded from colonic circular smooth muscle cells (Figure 5). IJPs consist of a fast purinergic hyperpolarization of the membrane potential followed by a slower nitrergic hyperpolarization before the membrane potential returns to baseline (~ -45 mV) (Figure 5A). Using train durations of 100-300 ms, we did not observe significant differences in IJP amplitude in 5xFAD vs WT mice (Figure 5B-D). When the purinergic component of the IJP was blocked by MRS2179 (10 μmol L−1, a P2Y1 receptor inhibitor), IJP amplitudes were significantly reduced in all mice; the nitrergic component of the IJP (the component remaining after purinergic blockade) was also similar in 5xFAD and WT mice (Figure 5B-D). Overall, there were no significant changes in colonic inhibitory neurotransmission in 5xFAD mice of either sex.
| 5xFAD mice did not show significant changes in NO release from colonic myenteric nitrergic neurons
Inhibitory neurotransmission plays a key role in the control of colonic motility, and loss of nitrergic myenteric neurons in AβPP/PS1 AD mice has been reported previously. 24 We determined colonic myenteric nitrergic neuronal function directly by measuring electrically stimulated NO release from myenteric ganglia using amperometry.

FIGURE 2 Concentration response curves for bethanechol in small intestinal and colonic longitudinal smooth muscle from WT and 5xFAD male and female mice. Bethanechol induced contractions in the (A) duodenum, (B) ileum, (C) proximal colon, and (D) distal colon. Bethanechol-induced contraction force was converted to mg (force)/mg (tissue weight). Data are mean ± SE; *p < 0.05, 5xFAD vs WT.

FIGURE 3 Frequency response curves for electrical nerve stimulation in small intestinal and colonic longitudinal smooth muscle from WT and 5xFAD male and female mice, before and after application of TTX (0.3 μmol L−1). Longitudinal smooth muscle contractions were recorded after electrical stimulation in male mice using the (A) duodenum, (B) ileum, (C) proximal colon, and (D) distal colon; and in female mice using the (E) duodenum, (F) ileum, (G) proximal colon, and (H) distal colon. Contraction force was converted to mg (force)/mg (tissue weight). Data are presented as mean ± SE.
NO oxidation current response curves were similar in 5xFAD vs WT male and female mice, although female mice overall had a smaller peak current than male mice (Figure 6A-C). A post hoc test following two-way ANOVA revealed that these sex differences were statistically significant only for the 5xFAD mice at 3 pulses. In addition, peak oxidation currents were nearly completely inhibited by L-NNA (a NOS inhibitor) and TTX in all mice, demonstrating that the responses represented neurogenic NO release from myenteric ganglia (Figure S1A-E).
Overall, we did not observe significant changes in colonic nitrergic inhibitory neurotransmission in 5xFAD mice compared with WT mice.
| Aβ accumulation was undetectable in small intestinal and colonic myenteric plexus in 5xFAD mice
We used two anti-Aβ antibodies with different epitopes in our study, from ThermoFisher and Cell Signaling (Table S1). The ThermoFisher antibody (71-5800) is a polyclonal antibody raised against a 30-amino-acid synthetic peptide derived from the full-length (1-43 amino acid) Aβ peptide (https://www.thermofisher.com/antibody/product/beta-Amyloid-Antibody-Polyclonal/71-5800). All 5xFAD mice at 6 months of age showed strong Aβ immunoclusters in whole brain slices, without visible immunoclusters in the brains of WT mice (Figure 7). However, we could not detect any visible positive Aβ immunostaining in the myenteric plexus from these mice either (data not shown).

FIGURE 5 MRS2179 significantly decreased IJP amplitude, but no significant differences were seen between WT and 5xFAD mice. Data are presented as mean ± SE, #p < 0.05, control vs MRS2179.
Consistent with the immunostaining, the expression of insoluble Aβ42 (by ELISA assay) in brains from 5xFAD mice was significantly increased compared with brains from WT mice, whereas Aβ42 expression was undetectable in brains from WT mice and in the ileum and colon tissues from all 5xFAD mice (Figure 8).
| DISCUSSION
Patients with neuropathologically confirmed AD also exhibit peripheral nervous system symptoms such as gut dysbiosis and constipation. [2-4] In AD patients, Aβ accumulation in the gut wall and stool has been reported, [14-18] which strongly supports a contribution of amyloidosis to the GI dysmotility of AD patients. Some mouse models of AD have also revealed amyloidosis in the GI wall along with GI dysmotility, alterations of microbiome composition, disturbances in gut permeability, changes in GI structure, and systemic inflammation. 15,20-23,27 However, Aβ accumulation associated with enteric neuronal dysfunction, specifically myenteric neuronal dysfunction in GI dysmotility, has not been studied mechanistically, even though Aβ accumulation was directly detected by positive immunostaining in the myenteric ganglia in the AβPP/PS1 (APPswe, PSEN1dE9) 15 and TgCRND8 23 mouse models of AD. These two studies directly showed that amyloidosis occurs in the myenteric ganglia. Our goal was to identify a novel mechanism for myenteric neuronal dysfunction in the GI dysmotility of AD. We used an APP-overexpression mouse model that produces robust levels of Aβ and Aβ42, leading to synaptic dysfunction and neuronal cell death in the brain and causing cognitive deficits. The 5xFAD mouse model is one of the most popular models used in AD studies. These mice show rapidly accumulating and robust intraneuronal Aβ42 levels as early as 1.5 months of age. 25 Aβ42 favors the formation of insoluble fibrils in AD and has a strong genetic link to early-onset familial AD. However, reports of GI dysmotility in 5xFAD mice have been very limited. In addition, amyloidosis-associated GI dysfunction in 5xFAD mice cannot be concluded without supporting evidence of amyloidosis in the ENS. In previous studies of 5xFAD mice, 22,27 the authors did not provide any direct evidence that amyloidosis occurs in the GI tract. More importantly, those authors analyzed the expression of the most common AD-linked genes in different regions of the GI tract, but overall they did not find significant changes in these gene expressions in 5xFAD mice compared with WT mice, even at 40 weeks of age. 27 In their studies, even the expression of the APP and PSEN1/2 genes across the entire GI tract was comparable between 5xFAD and WT mice, 27 arguing against effective overexpression of the APP/PSEN transgenes in the gut of 5xFAD mice. These data strongly support our observation that 5xFAD mice do not develop amyloidosis in the colonic ENS, since 5xFAD mice carry three APP and two PSEN1 mutations that should drive overexpression.

FIGURE 6 Detection of NO release from colonic myenteric ganglia in WT and 5xFAD male and female mice using continuous amperometry. NO release was directly measured using transmural electrical stimulation and continuous amperometry. A frequency-response curve was generated by plotting the peak current response, normalized to the electrode's electrochemically active area, against the number of electrical stimuli delivered. (A) Peak NO current response curves from male and female mice. (B) and (C) Representative current-time recordings from male and female mice. Increasing the number of stimuli increased the current until a peak response was observed at 3 pulses; the response then plateaued and decreased slightly with increasing numbers of stimuli. The black bar indicates the onset of stimulation and the corresponding stimulation artifact. Data are presented as mean ± SE.
It has also been reported that there are changes in enteric neuronal structure/function and altered cellular viability of enteric neurons in 5xFAD mice. 22,27 However, those authors determined neuronal viability, structure/function, and other biomarkers in cultured enteric neurons, not in primary enteric neurons, and they used recombinant human Aβ to induce neuronal amyloidosis. Therefore, the results of those studies are not directly comparable with ours. Although 5xFAD mice have also been used to study GI dysbiosis, 28 there was likewise no direct evidence supporting amyloidosis in the myenteric ganglia in those mice.
In our studies, by the age of 6 months, 5xFAD mice had developed some cognitive impairments in behavioral tests (data not included in this submission). Of clinical relevance, GI symptoms appear much earlier than symptoms of cognitive impairment in AD patients. [14-18] We therefore expected the development of enteric amyloidosis to precede that in the brain. Gene expression studies 27 also do not suggest that extending the study timeline would increase the chance of detecting Aβ expression in the GI tract, given the lack of changes in AD-linked gene expression in the GI tract of 5xFAD mice even at 40 weeks of age.
Taken together, we believe that amyloidosis-associated myenteric neuronal dysfunction in GI dysmotility has not been demonstrated in the 5xFAD AD model.

FIGURE 7 Detection of Aβ expression in parasagittal cortex sections and colonic myenteric ganglia in WT and 5xFAD male and female mice. Rows one and two, representative images from brain sections; rows three to six, representative images from colonic circular muscle myenteric plexus (CMP); rows three and four, representative images from whole tissue preparations (whole); rows five and six, colonic cross sections (section). Circle markers indicate the colonic myenteric ganglia area between the circular and longitudinal smooth muscle layers. NeuN (green), a neuronal nuclear protein marker antibody, indicates neurons in brain slices; HuC/D (green), a neuronal nuclear protein marker antibody, indicates ENS neurons in the GI tract; Aβ (red), anti-Aβ antibody from ThermoFisher, indicates the expression of Aβ in brain and myenteric ganglia. Confocal images were acquired at 10x and 20x. n = 6 in each group of mice.

FIGURE 8 Aβ42 expression in tissues from WT and 5xFAD male and female mice. An ELISA assay was used to determine the expression of Aβ42 in brain, ileum, and colon tissues from these mice. Expression of Aβ42 in the ileum and colon from all 5xFAD mice was undetectable (n = 4-6 in each group).
| Aβ accumulation and GI dysmotility were absent in 5xFAD mice
When we attempted to recapitulate amyloidosis in the colonic myenteric ganglia, we did not find visible Aβ immunostaining in the colonic myenteric ganglia of 5xFAD mice at 26 weeks of age (Figure 7). Aβ immunoclusters were visualized in brain slices from all 5xFAD mice, indicating that the experimental timeline was sufficient for these mice to develop amyloidosis. To confirm our observations, we used two anti-Aβ antibodies with different epitopes from different companies. The results were very similar with both antibodies: 5xFAD mice did not show any visible Aβ accumulation in the myenteric ganglia. In addition, Aβ42, an insoluble Aβ isoform, was undetectable in the ileum and colon but was detected in brain tissues from 5xFAD mice using an ELISA assay (Figure 8). Thus, our study does not support the presence of amyloidosis in the GI wall of 5xFAD mice, as has been reported in other studies. 20,22 Likewise, we did not observe significant GI dysmotility or colonic myenteric neuronal dysfunction in 5xFAD mice (Figures 1-5). GI motility was broadly evaluated through GI transit in vivo (fecal pellet output), GI neuromuscular contractility ex vivo, and the contribution of colonic myenteric neuromuscular transmission. None of these studies support the presence of myenteric dysfunction-associated GI dysmotility in 5xFAD mice. Although 5xFAD male mice showed an increase in the total number of CMMCs with shorter intervals between CMMCs (Figure 4A,B), propagation speed was unchanged, and these mice did not show any significant GI dysmotility in vivo (Figure 1): we did not detect any signs of decline in body weight or food intake, constipation, or diarrhea in these mice. In addition, we did not detect nitrergic neurodegeneration in the colonic myenteric ganglia (Figure 6, Figure S1).
| Animal models of amyloidosis-associated myenteric neuronal dysfunction in GI dysmotility
APP, a precursor to Aβ, is expressed in the ENS. 14-17 Therefore, we anticipated an accelerated drive toward intraneuronal Aβ42 generation throughout the ENS. However, amyloidosis was absent in the myenteric ganglia of 5xFAD mice. Amyloidosis in myenteric ganglia has been detected directly in other AD animal models, such as the AβPP/PS1 (APPswe, PSEN1dE9) 15 and TgCRND8 23 models. We cannot completely explain why amyloidosis is absent in the ENS of 5xFAD mice, although the model clearly supports studies of amyloidosis phenotypes in the CNS. Recently, we used similar anti-Aβ antibodies and identified amyloidosis in colonic myenteric ganglia from AβPP/PS1 (APPswe, PSEN1dE9) mice, which indicates that amyloidosis in the ENS depends on the animal model of AD, that amyloidosis in the ENS is technically detectable, and that the antibodies used in the current study are capable of detecting amyloidosis in the GI tract. 5xFAD mice overexpress five human mutated genes and use the Thy1 transgene cassette as a promoter, 25 whereas the AβPP/PS1 and TgCRND8 models overexpress only two or three human mutated genes and use the prion transgene cassette as a promoter. 31-33 It is well known that the prion promoter is more efficient for gene transfer and gives the highest level of transgene expression compared with other promoters. 34 In addition, neurons in the ENS and CNS are derived from different embryonic precursors (i.e., neural crest vs neural tube), 35 and these neurons do not share similar molecular signaling pathways during neuronal development. 36 Therefore, the absence of an amyloidosis phenotype in the ENS is not entirely surprising. These data also indicate that the 5xFAD mouse model may not be suitable for studies of amyloidosis-associated myenteric neuronal dysfunction and GI dysmotility in AD. Other animal models, such as the AβPP/PS1 mouse, may be more appropriate for this purpose. We must be aware that all AD animal models were developed to mimic the amyloidosis and tau pathology of the CNS; currently, there is no reliable AD animal model that specifically shows amyloidosis and tau pathology in the ENS.
Finally, we emphasize that we determined amyloidosis phenotypes only in the colonic myenteric ganglia of these 5xFAD mice and did not examine amyloidosis in other GI regions. Therefore, we cannot exclude amyloidosis-associated alterations of the microbiome, mucosal permeability, gene expression, and inflammatory markers in the GI tract of these mice, as such alterations have been reported by other investigators. 22,27,28
| CONCLUSIONS
In conclusion, although the 5xFAD mouse model is reliable for studies of the role of amyloidosis in CNS pathology, 5xFAD mice may be inadequate for studying amyloidosis in the ENS and GI dysmotility in AD. This model lacks amyloidosis in the myenteric ganglia and associated GI dysmotility. Further work using animal models with promoters that target enteric-specific neurons is needed to elucidate the gut-brain connection in AD.
ACKNOWLEDGMENT
This study was fully supported by NIH R01 DK121272-01A1 and R01 DK121272-3S1 to HX, GS, and JG.
CONFLICT OF INTEREST
The authors have no competing interests.
Immune suppression in vivo with antigen-modified syngeneic cells. I. T-cell-mediated suppression to the terpolymer poly-(Glu, Lys, Phe)n
The palmitoyl derivative of the linear polypeptide poly-(L-Glu-L-Lys-L-Phe)n (GLφ) can be coupled to spleen cells directly. The intravenous administration of 2 × 10⁵ to 3 × 10⁷ GLφ-coupled syngeneic spleen cells induces GLφ-specific suppressor T cells in C57BL/6 nonresponder mice. The suppression is antigen specific and can be detected by the inhibition of the primary GLφ plaque-forming cell response to challenge with GLφ-fowl gamma globulin. The number of inducer cells required for suppression carries less than 0.1 μg of antigen. Spleen cells from tolerized mice can transfer suppression to normal syngeneic recipients. The suppression is cyclophosphamide sensitive, and the suppressor cells bear the Thy 1.2 marker. This method of inducing antigen-specific suppressor cells may be generally applicable to other antigen systems.
Intravenous injection of hapten-conjugated syngeneic cells results in specific immunological tolerance affecting both hapten-specific contact sensitivity and plaque-forming cell responses (1-4). Although the underlying basis for this nonresponsiveness is not fully understood, it appears that both the T- and B-cell compartments may be affected (3-5). In addition, several studies have reported the generation of specific suppressor T cells under these conditions (4-6).
The antigen that was used to induce specific nonresponsiveness (suppression) in these studies was the synthetic linear polypeptide of L-glutamic acid, L-lysine, and L-phenylalanine (GLφ).1 This antigen was chosen for several reasons. (a) The immune response to GLφ is under histocompatibility-linked immune response gene control (7). (b) Although previous attempts to identify GLφ-specific suppressor T cells in nonresponder mice have been unsuccessful (8, 9), we questioned whether induction with antigen-modified syngeneic cells would be a more efficient means of inducing suppressor T cells. (c) In addition, we sought to provide evidence that nonresponder mice could generate functional T cells with specificity for GLφ.
To date, all systems that analyzed nonresponsiveness induced with antigen-modified syngeneic cells employed small, chemically reactive molecules that were covalently conjugated to cell surface proteins. In the present study, we have used a different method of coupling antigen to spleen cells. This new coupling method uses the ability of palmitoyl-derivatized polypeptides to adhere to cell membranes. Thus, polypeptides can be coupled onto spleen cells without excessive chemical modification of the peptide and without the formation of covalent bonds between the polypeptide antigens and any cell surface molecules. We now demonstrate that the intravenous administration of syngeneic spleen cells modified with a palmitoyl-derivatized polypeptide results in antigen-specific nonresponsiveness and the induction of specific suppressor T cells.

In the first experiment, mice were injected with 10 or 1 μg of p-GLφ in PBS i.v. Three days later, they were immunized with 20 μg GLφ-FγG in M/P (day 0). On day 8, their spleens were assayed for GLφ-specific and FγG-specific PFC responses. When 10 μg p-GLφ was injected, both the IgM and IgG anti-GLφ responses were significantly reduced (Table I); however, there was no reduction of the FγG response (vide infra). In contrast, no significant tolerance was noted when 1 μg of p-GLφ was used as pretreatment. In separate experiments, 10 ng to 1 μg doses of soluble p-GLφ also failed to affect the GLφ-specific PFC response, whereas 10-μg doses were again tolerogenic (data not shown). It was likely that hydrophobic molecules such as p-GLφ reacted with both serum proteins and cells. However, because cell-bound antigen may be more effective in producing nonresponsiveness, the next experiment was carried out to elucidate the importance of cell surface-associated GLφ molecules (Table I). Mice were injected with 10⁷ GLφ-coupled syngeneic spleen cells (p-GLφ-spl cells) or with 10⁷ control spleen cells. Three days later they were immunized with 20 μg GLφ-FγG in M/P (day 0), and 8 days later their spleens were assayed for GLφ PFC responses. In mice that received p-GLφ-spl cells, both the IgM and IgG anti-GLφ responses were decreased by 70-80% compared with mice that received control cells. The efficiency of cell surface-associated GLφ in the induction of tolerance was evident from the trace labeling studies. The data indicated that 0.1-μg quantities of GLφ were carried by the 10⁷ p-GLφ-spl cells able to induce tolerance, whereas, as demonstrated above, 0.01-μg quantities of p-GLφ (when administered i.v. in soluble form) were not able to decrease the GLφ PFC response to GLφ-FγG (Table I).
To demonstrate that the decrease in the GLφ PFC response on day 8 after immunization was not the result of a shift in the peak of the antibody response, the PFC responses of the tolerized mice were analyzed at various days after immunization. As shown in Fig. 1A, day 8 after immunization was the peak of the PFC response for both control and tolerant mice. Moreover, the immune response of the tolerant mice remained significantly diminished throughout the period of day 6 to day 12. This tolerance is GLφ specific, because the anti-FγG PFC response was not affected (Fig. 1B). Furthermore, the PFC response to trinitrophenyl-keyhole limpet hemocyanin or horse erythrocytes was not decreased in mice given an i.v. injection of GLφ-coupled spleen cells, confirming the specificity of the p-GLφ-spl-induced nonresponsiveness (data not shown).
Fine Specificity of GLφ Nonresponsiveness. We have previously demonstrated that the antibody responses to GLφ and GL are highly cross-reactive (7) and that these antibodies share idiotypic determinants (11). Therefore, we tested the fine specificity of GLφ nonresponsiveness with the cross-reacting polymer GL. Nonresponsiveness was induced in C57BL/6 mice with GLφ-palmitoyl-coupled syngeneic cells. Three days after induction, mice were challenged with either GLφ-FγG or GL-FγG. As shown in Table II, the GLφ- and GL-specific PFC responses were significantly reduced, while the FγG PFC responses were not affected. Thus, the effector mechanism(s) that mediate this tolerance cannot distinguish between the closely related cross-reactive polypeptides GL and GLφ.

TABLE II Fine Specificity of GLφ Tolerance. * C57BL/6 mice were given 3 × 10⁷ GLφ-coupled spleen cells i.v. 3 days before challenge with 20 μg GLφ-FγG or GL-FγG. Animals were sacrificed 8 days after challenge for PFC assay. ‡ Arithmetic mean of PFC response ± SE; three to four mice are included in each group. § Indicates P < 0.02.
Dose Requirements for Induction of Nonresponsiveness. To establish the optimal dose of p-GLφ-spl cells used for tolerance induction, C57BL/6 mice were injected with 2 × 10⁴ to 3 × 10⁷ p-GLφ-spl cells, while control groups received equivalent numbers of normal spleen cells. Three days later, the mice were immunized with 20 μg GLφ-FγG in M/P (day 0), and their spleens were assayed on day 8. The degree of tolerance in the experimental mice was expressed as percent reduction of the control PFC response. As shown in Fig. 2, there was a dose dependence for tolerance induction by p-GLφ-spl cells. At least 2 × 10⁵ cells were required for optimal reduction of both IgM and IgG responses. Trace labeling studies revealed that ~2 ng GLφ was carried on 2 × 10⁵ p-GLφ-spl cells.

Kinetics of Tolerance Induction. The time required for the development of nonresponsiveness and the duration of the tolerant state were next investigated. C57BL/6 mice were treated with 3 × 10⁷ p-GLφ-spl cells i.v., while control groups were given 3 × 10⁷ normal spleen cells. As shown in Fig. 3, groups of mice were immunized with 20 μg GLφ-FγG in M/P at various times after receiving control or GLφ-coupled spleen cells, and their spleens were assayed 8 days later. Under optimal conditions, GLφ-specific IgM and IgG responses were decreased by 80-90% of control responses. The induction of nonresponsiveness required a latent period (1-2 days before challenge), and the state of tolerance lasted for at least 63 days after injection.
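The percent-reduction metric used throughout these dose-response experiments is a simple ratio; the short sketch below (Python, with hypothetical PFC counts) makes the computation explicit.

```python
def percent_suppression(pfc_tolerized, pfc_control):
    """Degree of tolerance as percent reduction of the control PFC response."""
    return 100.0 * (1.0 - pfc_tolerized / pfc_control)

# Hypothetical example: 3,000 PFC/spleen in a tolerized group vs 10,000 in controls.
print(percent_suppression(3_000, 10_000))  # 70.0 (% suppression)
```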
p-GLφ-spl Cells Induce GLφ-Specific Suppressor Cells. Since these antigen-modified cells induced such a strong state of nonresponsiveness lasting at least 2 mo, an active suppressor mechanism was considered to be involved. Two transfer experiments were performed, with similar results in each. Donor animals were sacrificed 5 days after tolerance induction with p-GLφ-spl cells, and their spleen cells were transferred into normal syngeneic recipients. Control mice received an equal number of normal spleen cells. All mice were immediately challenged with 20 μg GLφ-FγG in M/P, and 8 days later their spleens were assayed. 2 × 10⁷ viable spleen cells from p-GLφ-spl-treated donors led to significant (60-70%, P < 0.01) suppression in the recipient animals (Table III). Treatment of such suppressor spleen cells with anti-Thy 1.2 serum plus complement (C), but not with normal mouse serum (NMS) plus C, before transfer removed the suppressive activity (Table III). Thus, the suppression of GLφ responses is a T-cell-dependent process.
Induction Is Cyclophosphamide Sensitive. Since suppressor T cells have been shown to be cyclophosphamide sensitive in other systems (12, 13), we analyzed the cyclophosphamide sensitivity of the GLφ-specific suppression. C57BL/6 mice were given cyclophosphamide or saline i.p. 2 days before suppression induction with p-GLφ-spl cells. Control mice were given cyclophosphamide or saline 2 days before an injection of 3 × 10⁷ normal spleen cells. These mice were then challenged with 20 μg GLφ-FγG in M/P, and their spleens were assayed on day 8. As shown in Table IV, suppression could not be demonstrated in the cyclophosphamide-pretreated group given p-GLφ-spl cells. Thus, the genesis of nonresponsiveness in this system is sensitive to small doses of cyclophosphamide.

Table III footnote: 5 days after suppressor cell induction, 2 × 10⁷ spleen cells from tolerant or control groups were transferred into normal recipients. Tolerant cells were also treated with either NMS plus C or anti-Thy 1.2 serum plus C before transfer into normal recipients. All mice were then challenged with 20 μg GLφ-FγG in M/P. Spleen cells were assayed 8 days later for PFC. ‡ Arithmetic mean PFC response ± SE. § Indicates P < 0.01.

Table IV footnote: Mice were treated with 5 mg/kg cytoxan in 0.5 ml saline i.p.; 2 days later, suppression was induced with 3 × 10⁷ p-GLφ spleen cells. The control cytoxan group was given 3 × 10⁷ normal spleen cells. 3 days after induction, mice were immunized with 20 μg GLφ-FγG in M/P (day 0). Their spleens were assayed 8 days later. ‡ Arithmetic mean PFC/spleen ± SE. § Indicates P < 0.01.
Discussion
The present experiments demonstrate the striking efficiency of cell surface-associated GLφ in inducing a long-lasting state of nonresponsiveness. The intravenous administration of 2 × 10⁶ to 3 × 10⁷ p-GLφ-spl cells carrying 1-100 ng of GLφ inhibits the primary GLφ-specific IgM and IgG PFC responses. This state of nonresponsiveness lasts at least 63 days. Although the response to the closely related, serologically cross-reactive polymer GL is also inhibited, the nonresponsiveness is antigen specific, because the PFC response to the FγG carrier is not affected. Suppressor cells appear to be responsible for initiating and/or maintaining this state of tolerance, because spleen cells from tolerant mice can transfer tolerance to normal recipients. Moreover, the induction of nonresponsiveness is cyclophosphamide sensitive, and the cells that transfer suppression are Thy 1.2 positive. The phenomenon we have described contrasts with the state of "low zone" tolerance described by Dresser and Mitchison (14) and Rajewsky and Brenig (15). The induction of low zone tolerance required repeated injection of subimmunogenic concentrations of antigen over a prolonged period of time. In contrast, GLφ-specific tolerance was rapidly induced (<3 days) after a single intravenous administration of tolerogen. Furthermore, in the present studies, nanogram quantities of soluble p-GLφ failed to induce suppression unless the p-GLφ was associated with cells.
The adoptive transfer experiments clearly demonstrated that Thy 1.2-positive cells from tolerant mice were required to specifically suppress the GLφ antibody response in normal recipients. Antigen carryover cannot account for the ability to adoptively transfer suppression, because p-GLφ-spl cells administered on the day of immunization with GLφ-FγG could not decrease the day 8 GLφ PFC response. In addition, cyclophosphamide treatment, which has been shown to interfere with the process of suppressor cell induction (12, 13), abrogates the induction of GLφ tolerance. Thus, from these various experiments it appears that suppressor T cells are required for the generation of GLφ-specific nonresponsiveness induced with p-GLφ-spl cells. It is well established that haptenated syngeneic cells can be used to specifically suppress a hapten-specific PFC response (3, 4) or contact sensitivity reaction (1, 2). The most commonly used haptens for these studies have been trinitrophenyl and dinitrophenyl. The induction of hapten-specific T-cell tolerance by the i.v. administration of nitrophenylated syngeneic cells is at least in part mediated by Thy 1-positive suppressor cells (4-6). Furthermore, the induction of trinitrophenyl-specific suppressor T cells was demonstrated to be cyclophosphamide sensitive (16). The results presented in this report closely parallel those noted in the hapten systems, in that small numbers of antigen-coupled cells are sufficient to induce a potent, long-lasting T-cell-dependent suppression. However, unlike the chemical conjugation methods that covalently couple hapten to cell surface proteins, the palmitoyl coupling method does not covalently associate GLφ with any cell surface molecules. We believe that the palmitoyl-derivatized peptides are anchored in the lipid portion of the membrane and may be presented to the immune system in the same form as a foreign minor histocompatibility determinant. The palmitoyl coupling method should prove generally applicable to other lysine-containing peptides and proteins. Indeed, initial experiments have indicated that proteins can be coupled to syngeneic cells to induce specific suppression.
The route of administration of antigen-modified syngeneic cells may be crucial in the induction of suppression. In other systems, the i.v. administration of modified syngeneic cells appears to be the most efficient route for the induction of suppressor cells (1-3, 17). Experiments are in progress to define (a) the importance of the route of antigen administration, (b) the cell type(s) that are most efficient in the induction of suppression, and (c) the nature of the cell surface molecules that provide the induction signals.
C57BL/6 mice are genetic nonresponders to GLφ by virtue of the fact that they lack an H-linked Ir gene that is required for GLφ responsiveness (7). The genetic defect in GLφ nonresponder mice appears to function at the level of the induction of GLφ-specific helper T cells. Thus, C57BL/6 nonresponder mice produce anti-GLφ antibody after immunization with GLφ conjugated to an immunogenic carrier, such as FγG (10), indicating that nonresponder mice do not have a defect in their B-cell repertoire. Hence, the GLφ polypeptide may function as a "hapten" in C57BL/6 nonresponder mice, and as a result the many similarities between the suppression induced with antigen-modified cells in the hapten and GLφ systems may not be coincidental.
The mechanism(s) of GLφ-specific suppression are under investigation. Several possible mechanisms can be considered at this time. First, the GLφ suppressor cells may act directly upon GLφ or GL helper T cells. Against this possibility are data from various systems indicating that C57BL/6 nonresponder mice do not produce helper cells to these polypeptides (18, 19). A second possibility, which has been suggested previously (3) and which may apply here, is that the GLφ suppressor cells act directly on the GLφ- or GL-specific PFC precursor cells. This mechanism could also explain the lack of fine specificity with respect to GLφ and GL, since Kipps et al. (11) have demonstrated that the anti-GLφ and anti-GL responses of C57BL mice share idiotypic determinants. Finally, an unprecedented mechanism may be invoked whereby the GLφ-specific suppressor cells may inactivate a subpopulation of

This report represents the first indication that GLφ nonresponder mice possess functional T cells with specificity for GLφ, and it suggests that GLφ nonresponder animals can generate a T-cell receptor with specificity for this polymer. Heretofore, the data had suggested that nonresponder mice lacked both helper and suppressor T cells (8, 18, 19). Transferable T-cell-mediated suppression was not previously demonstrated in GLφ nonresponder mice (9).
Electrocardiographic abnormalities in COVID-19 patients visiting the emergency department: a multicenter retrospective study
Background Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection can be associated with myocardial injury. The identification of at-risk patients and the mechanisms underlying cardiac involvement in COVID-19 remain unclear. During hospitalization for COVID-19, a high troponin level has been found to be an independent variable associated with in-hospital mortality and a greater risk of complications. Electrocardiographic (ECG) abnormalities could be a useful tool to identify patients at risk of a poor prognosis. The aim of our study was to assess whether specific ECG patterns were related to in-hospital mortality in COVID-19 patients presenting to the ED in a European country. Methods From February 1st to May 31st, 2020, we conducted a multicenter study in three hospitals in France. We included adult patients (≥ 18 years old) who visited the ED during the study period, had an ECG performed at ED admission, and were diagnosed with COVID-19. Demographics, comorbidities, drug exposures, presenting signs and symptoms, and outcome data were extracted from electronic medical records using a standardized data collection form. The relationship between ECG abnormalities and in-hospital mortality was assessed using univariate and multivariable logistic regression analyses. Results An ECG was performed on 275 patients who presented to the ED. Most ECGs were in normal sinus rhythm (87%), and 26 (10%) patients had atrial fibrillation/flutter on the ECG at ED admission. Repolarization abnormalities were the most common findings in the population (40%), with negative T waves representing 21% of all abnormalities. We found that abnormal axis (adjusted odds ratio: 3.9 [95% CI, 1.1-11.5], p = 0.02) and left bundle branch block (adjusted odds ratio: 7.1 [95% CI, 1.9-25.1], p = 0.002) were significantly associated with in-hospital mortality. Conclusions The ECG performed at ED admission may be useful to predict death in COVID-19 patients. Our data suggest that the presence of abnormal axis and left bundle branch block on the ECG indicated a higher risk of in-hospital mortality in COVID-19 patients who presented to the ED. We also confirmed that ST segment elevation was rare in COVID-19 patients. Supplementary Information The online version contains supplementary material available at 10.1186/s12873-021-00539-8.
Keywords: Electrocardiogram, In-hospital mortality, Patterns, COVID-19

Background
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) can be associated with myocardial injury, which has been described in various case reports ranging from acute myocarditis to pseudo acute myocardial infarction [1-4]. Emergency departments (EDs) worldwide have been at the epicenter of the COVID-19 pandemic [5]. Early identification of cardiac involvement in COVID-19 patients presenting to the ED is crucial. The electrocardiogram (ECG), widely performed in the ED and at negligible cost, could be a useful tool. Bertini et al. analysed COVID-19 patients who died or were treated with invasive mechanical ventilation and found that the ECG recorded at hospital admission was abnormal in 93% of patients, with signs of acute right ventricular pressure overload (RVPO) in 30% of patients [6]. However, they did not compare these ECG findings with those from patients with mild to moderate forms of COVID-19. McCullough et al. performed a retrospective cohort study of patients with COVID-19 who had an ECG at or near hospital admission in a large New York City teaching hospital. Using a multivariable logistic regression model that included age, ECG, and clinical characteristics, they found that the presence of one or more atrial premature contractions, a right bundle branch block or intraventricular block, ischemic T-wave inversion, and nonspecific repolarization abnormalities increased the odds of death [7]. However, these findings from a population with a high incidence of cardiovascular conditions may be of limited external validity. Thus, the aim of our study was to assess whether specific ECG patterns were related to in-hospital mortality in COVID-19 patients presenting to the ED in a European country.
Study design and participants
From February 1st to May 31st, 2020, we conducted a multicenter study in three hospitals in France: Nantes University Hospital, La Roche sur Yon Hospital, and Saint Nazaire Hospital. We included adult patients (≥ 18 years old) who presented to the ED during the study period, with an ECG performed at ED admission, and diagnosed with COVID-19. COVID-19 diagnosis was confirmed by a positive reverse transcription-PCR (RT-PCR) targeting different genes of SARS-CoV-2 on a nasopharyngeal swab [8]. All SARS-CoV-2 PCR swabs were performed in the ED.
Data collection
Patients' demographic details (age, gender), historical diagnoses and comorbidities (diabetes, coronary artery disease, history of cardiac heart failure, arrhythmia, tobacco use, chronic obstructive pulmonary disease, chronic kidney disease (eGFR < 60 mL/min/1.73 m²), stroke, hypertension), current medication list (chronic oral anticoagulation, chronic antiplatelet therapy, anti-arrhythmia agents), signs and symptoms at presentation (blood pressure, heart rate, respiratory rate, oxygen flow rate, oxygen saturation, and chest pain), and outcome data were extracted from electronic medical records using a standardized data collection form. We used International Statistical Classification of Diseases and Related Health Problems, Tenth Revision, codes to define comorbidities in the population. Troponin T was assessed using Elecsys (Roche©). Two physicians (HDC and EM) checked the extracted data. Prior ECGs were rarely available, which precluded comparison with the ECGs performed at ED admission.
Outcomes
The primary outcome was in-hospital mortality. We ascertained death based on review of discharge summaries and death notes in the electronic medical records.
ECG interpretation
Available ECGs were interpreted by two independent emergency physicians (Nantes University Hospital, Saint Nazaire Hospital) or by one emergency physician and one cardiologist (La Roche sur Yon Hospital) using a standardized data collection form [7]. When interpretation was discordant between the two physicians, another interpretation was performed by an independent cardiologist (DT). No formal testing of between- or within-reader variability of interpretation was performed for this study. Data extracted from each ECG included rhythm, categorized as normal sinus rhythm or atrial fibrillation/flutter; atrioventricular block; axis deviation; intraventricular conduction block (IVB; QRS duration > 110 ms); right bundle branch block (RBBB); left bundle branch block (LBBB); left or right ventricular hypertrophy; ST segment or T-wave changes (localized ST elevation or depression, localized T-wave inversion, or other nonspecific repolarization abnormalities); and the presence of a U wave.
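As a sketch of how this standardized coding might be operationalized, the snippet below (Python) encodes the one explicit numeric criterion in the text, the QRS-duration threshold of 110 ms for IVB; the function name and morphology flags are our own illustrative assumptions.

```python
def code_intraventricular_conduction(qrs_ms, rbbb=False, lbbb=False):
    """Classify intraventricular conduction from the reader's measurements.

    Only the QRS > 110 ms threshold comes from the study's definition of IVB;
    the RBBB/LBBB morphology flags are assumed to come from the ECG readers.
    """
    if qrs_ms <= 110:
        return "normal"
    if rbbb:
        return "RBBB"
    if lbbb:
        return "LBBB"
    return "nonspecific IVB"

print(code_intraventricular_conduction(128, lbbb=True))  # 'LBBB'
```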
Statistical analysis
Categorical variables are shown as frequency rates and percentages, and continuous variables as mean (standard deviation, SD). The relationship between ECG abnormalities and in-hospital mortality was assessed using univariate and multivariable logistic regression analyses, as described in Additional file 1. Variables with a p-value under 0.10 were included in the multivariable model, and a backward regression was performed. Odds ratios (OR) are expressed with 95% confidence intervals. A two-sided α of less than 0.05 was considered statistically significant. Statistical analyses were done using the R software (version 3.6.0).
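The screening-plus-backward-selection procedure can be sketched as follows, here in Python with statsmodels rather than the R used by the authors; the simulated data frame, variable names, and effect sizes are illustrative assumptions, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated cohort with binary ECG findings (prevalences loosely mirror Table 2).
rng = np.random.default_rng(1)
n = 275
df = pd.DataFrame({
    "abnormal_axis": rng.binomial(1, 0.06, n),
    "lbbb": rng.binomial(1, 0.04, n),
    "ivb": rng.binomial(1, 0.16, n),
    "sinus_rhythm": rng.binomial(1, 0.87, n),
})
df["death"] = rng.binomial(1, 0.10 + 0.35 * df["abnormal_axis"] + 0.45 * df["lbbb"])

# Univariate screen: keep predictors with p < 0.10.
predictors = ["abnormal_axis", "lbbb", "ivb", "sinus_rhythm"]
candidates = [p for p in predictors
              if sm.Logit(df["death"], sm.add_constant(df[[p]])).fit(disp=0).pvalues[p] < 0.10]

# Backward elimination: drop the least significant term until all remain at p < 0.05.
fit = None
while candidates:
    fit = sm.Logit(df["death"], sm.add_constant(df[candidates])).fit(disp=0)
    worst = fit.pvalues.drop("const").idxmax()
    if fit.pvalues[worst] < 0.05:
        break
    candidates.remove(worst)

if candidates:
    print(np.exp(fit.params))      # adjusted odds ratios
    print(np.exp(fit.conf_int()))  # 95% confidence intervals
```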
Ethics
The study was approved by the Ethics Committee Groupe Nantais d'Ethique dans le Domaine de la Santé, which waived informed consent. Due to the retrospective nature of the study and the use of de-identified data, informed consent was waived (GNEDS 28-05-2020). In France, the study is excluded from the legal requirements applicable to research involving humans within the provisions of the French Public Health Code. The sponsor of the study is CHU de Nantes (Nantes University Hospital), Delegation for Clinical Research and Innovation.
Electrocardiographic findings
All 275 ECGs were interpreted by two emergency physicians or by an emergency physician and a cardiologist. A discordant interpretation between the two physicians was found for 41 (14.9%) ECGs. ECG findings are shown in Table 2. Baseline electrocardiographic characteristics included a mean heart rate of 85 ± 16 bpm, a mean PR interval of 160 ± 40 ms, and a mean QRS interval of 98 ± 29 ms. Most ECGs were in normal sinus rhythm (87%), and 26 (10%) patients had atrial fibrillation/flutter on the ECG at ED admission. An abnormal axis was rare (n = 16, 6%), with 5% of patients having left axis deviation and 1% right axis deviation. Abnormal intraventricular conduction was found in 16% of patients, with RBBB in 5% and LBBB in 4%. Repolarization abnormalities were the most common findings in the population (40%), with negative T waves representing 21% of all abnormalities. Importantly, ST segment elevation was rare (6%). When comparing patients with repolarization abnormalities to patients without, troponin levels were not significantly different (39 vs 31 ng/L, p = 0.45).
Relationship between electrocardiographic findings and outcomes
Univariate and multivariable logistic regression analyses were then performed (Table 3). Variables with a p-value under 0.10 in the univariate analysis (sinus rhythm, abnormal axis, IVB, LBBB) were included in the multivariable model, and a backward regression was performed; abnormal axis and LBBB remained independently associated with in-hospital mortality.
Discussion
Our multicenter study of ECGs performed at ED admission in 275 COVID-19 patients found that repolarization abnormalities were frequent, whereas ST segment elevation was rare, as previously reported [7]. In our cohort, the patients were older than in an Asian cohort [9], and they had lower prevalence rates of cardiovascular comorbidities than the North American cohorts [8, 10]. However, the baseline characteristics of the COVID-19 patients reported here were consistent with those observed in French EDs during the first wave of the COVID-19 pandemic [11]. Compared with a cohort of patients with community-acquired pneumonia (CAP), our population was of similar age but had fewer cardiovascular risk factors [12]. Nonetheless, this comparison should be made with caution, given the impact of cardiovascular comorbidities in both COVID-19 and CAP. In our multivariable logistic regression model, the presence of abnormal axis and LBBB was associated with in-hospital mortality. Finding early signs of cardiac impairment is extremely valuable in prioritizing ED patients, and our results suggest that the ECG, widely performed in the ED and at negligible cost, is a useful tool in the ED assessment of COVID-19 patients. McCullough et al. previously reported that ischemic T-wave inversion was associated with an increased risk of death in patients with COVID-19 who had an ECG at or near hospital admission [7]. Moreover, Lombardi et al. reported that elevated troponin was an independent variable associated with in-hospital mortality and a greater risk of complications during hospitalization for COVID-19 [13]. Myocardial involvement in COVID-19 is supported by the pathological finding of interstitial inflammatory mononuclear cells in heart tissue at autopsy [14]. However, the specific role of SARS-CoV-2 in the underlying cardiac pathogenicity is unclear. Importantly, similar patterns have been found in CAP, where cardiovascular disease (CVD) events have been reported to be frequent [15-17] and ECG changes are often seen, with frequent ST segment or T-wave abnormalities [18]. Hypoxia, hypotension, or decreased cardiac output could be non-specific explanatory factors. Animal models have previously suggested that bacterial infections may lead to apoptosis of cardiomyocytes through the direct and indirect toxicity of S. pneumoniae [19]. Violi et al. suggested that Nox2-related oxidative stress could explain myocardial damage [20]. Complex interactions involving direct cardiomyocyte invasion and indirect inflammation-mediated damage could explain cardiac involvement in COVID-19 [21]. Myocardial injuries have been described in various case reports, from acute myocarditis to pseudo acute myocardial infarction [1-4, 22], and several studies highlight an association between cardiac injury and mortality [23-26]. Other studies suggested that abnormal heart rhythms are common [7, 13, 27]. However, the relationship between these findings and COVID-19 infection should be interpreted with caution: in an unselected population of healthy adults, inverted T waves in the anterior and lateral leads have been shown to be associated with long-term risk of mortality from CVD events [28]. A prospective cohort of patients with COVID-19 infection and ECG abnormalities should assess the long-term cardiovascular consequences of COVID-19. Our study has some limitations.
First, because of its retrospective design, data were missing for both biological variables and comorbidities, especially troponin levels, which were available in only 171 patients. As symptom onset was also unknown, we could not perform any survival curve analysis. Since ECGs were not performed in every patient, it is possible that the more severe patients had ECG analyses, thereby introducing a selection bias that could explain the high mortality rate in our patients. Second, when the ECG was abnormal, it was not possible to determine whether those abnormalities were related to the COVID-19 infection or to a previously unknown cardiopathy. Third, because the ECGs were interpreted by two emergency physicians or an emergency physician and a cardiologist, it is possible that some subtle ECG findings were missed. However, previous studies suggested that most ED misinterpretations were unlikely to have clinical significance [29, 30]. Moreover, centralized ECG reading by cardiologists is not feasible in most hospitals. Fourth, echocardiography and cardiovascular magnetic resonance were not performed in all included patients for the evaluation of heart involvement.
Conclusions
In this multicenter study, we report that the ECG performed at ED admission may be useful to predict severity and death in COVID-19 patients. Our data suggest that the presence of abnormal axis and LBBB on the ECG at ED admission indicated a higher risk of death. We also confirmed that ST segment elevation at ED presentation was rare in COVID-19 patients. A prospective study should be performed to evaluate ECG monitoring in COVID-19 prognosis, and whether ECG abnormalities detected during COVID-19 infection are a risk factor for subsequent CVD events.
Demonstration of teleportation-based error correction in the IBM quantum computer
Quantum error correcting codes (QECC) are the key ingredients both for fault-tolerant quantum computation and quantum communication. Teleportation-based error correction (TEC) helps in detecting and correcting operational and erasure errors by performing X and Z measurements during teleportation. Here we demonstrate the TEC protocol for the detection and correction of a single bit-flip error by proposing a new quantum circuit. A single phase-flip error can also be detected and corrected using the above protocol. For the first time, we illustrate detection and correction of erasure error in the superconducting qubit-based IBM's 14-qubit quantum computer.
Introduction
Successful transmission of quantum information requires the information to be minimally affected by noise in the communication channel. Quantum error correction (QEC) [1-4] achieves this by encoding the information into a large number of physical qubits and by performing operations on them to detect and correct errors. QEC is an essential ingredient in achieving fault-tolerant quantum computation [5-8]. Many protocols have been proposed for error correction, such as the Shor code 9,10 and the Steane code 11,12. Errors can be broadly divided into two types: (a) operational errors and (b) erasure errors 13. Operational errors are the action of Pauli operators on qubits. Examples of operational errors are (i) bit-flip errors 14, (ii) phase-flip errors 15, and (iii) combinations of both bit-flip and phase-flip errors. Operational errors can be corrected by quantum error correction codes (QECC) 16 such as the bit-flip code, the phase-flip code, and the Shor code. Erasure errors are errors where photons or qubits (whose exact location information is known) are erased. A quantum erasure channel replaces a qubit (qudit) with an 'erasure state' that is orthogonal to all the basis states of the qubit (qudit) with a certain probability, thereby erasing the qubit (qudit) and letting the receiver know that it has been erased. Erasure errors occur physically in various situations, such as leakage to other states, atom losses 17, and photon losses 18.
By reliably measuring the logical operators of a code, we can actively detect and correct errors.
Teleportation-based error correction (TEC) 19 provides a method to do so. Bell measurements [20-22] implemented during teleportation act as syndrome measurements 23 here. Erasure errors can be corrected effectively by implementing TEC.
Protecting quantum information from erasure errors remains an open challenge in long-distance quantum communication as well as in practical quantum computation. Recently, Muralidharan et al. 24 investigated the use of highly effective error-correcting codes for multilevel systems to protect encoded quantum information from erasure errors and implemented them to repetitively correct these errors. For successful long-distance quantum communication through optical fibres 25, photon loss errors pose a major threat. They proposed three generations of quantum repeaters 18,26 based on different approaches to correct both photon loss and operational errors. One of them employs QEC to correct both loss and operational errors and does not require two-way classical communication between repeater stations, which provides a significant advantage over other protocols by allowing ultrafast communication across transcontinental distances [27-30].
We explicate teleportation-based error correction for bit-flip errors and erasure errors in IBM's 14-qubit quantum computer. We measure the fidelity of the states obtained from TEC of erasure errors using quantum state tomography [31-33]. A number of experiments related to quantum information science have been performed by researchers on the IBM quantum computer since its inception, including quantum simulation [36-40], quantum artificial intelligence 41, quantum machine learning 42, quantum teleportation 33,43,44, and quantum state discrimination 34, among others.
Implementation of TEC for operational errors
Operational errors are errors caused by the action of Pauli operators on the encoded information 57,58. The actions of the Pauli-X (σx) 59,60 and Pauli-Z (σz) 61 operators are called bit-flip and phase-flip errors, respectively; the action of the Pauli-Y operator is a combination of the two. Let us denote the σx, σy and σz operators as X, Y and Z, respectively. Since Y can be represented as Y = iXZ, sequential correction of an X error and a Z error will automatically correct a Y error as well. Correction of X and Z errors is done by identifying the type of error and the qubit on which the error has occurred 62,63 and applying the corresponding correction operation (e.g., if an X error has occurred, its effect can be nullified by applying an X operator to the same qubit, since X² = I). The correction of these errors is a major challenge in quantum computation as well as quantum communication. QECC helps in the detection and correction of such errors: by reliably measuring the logical operators of the code, we can actively detect and correct the errors 64, and teleportation-based error correction provides a method to do so.
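As a concrete illustration of this X-correction logic (a minimal sketch of the three-qubit bit-flip code, not the full TEC circuit of Fig. 2), the following Qiskit snippet encodes a qubit, injects an X error, extracts the parity syndrome onto two ancillas, and corrects coherently with Toffoli gates. The amplitudes mirror the α = 0.92, β = 0.38 used later in the paper, normalised here since the printed values are rounded.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

alpha, beta = 0.92, 0.38
theta = 2 * np.arctan2(beta, alpha)   # Ry angle giving ~alpha|0> + beta|1> (normalised)

qc = QuantumCircuit(5)                # qubits 0-2: data, qubits 3-4: syndrome ancillas
qc.ry(theta, 0)                       # prepare the message state on the first data qubit
qc.cx(0, 1)                           # encode: alpha|000> + beta|111>
qc.cx(0, 2)

qc.x(1)                               # inject a bit-flip (X) error on data qubit 1

qc.cx(0, 3); qc.cx(1, 3)              # ancilla 3 <- parity of q0, q1
qc.cx(1, 4); qc.cx(2, 4)              # ancilla 4 <- parity of q1, q2

# Coherent correction: syndrome 11 -> flip q1; 10 -> flip q0; 01 -> flip q2.
qc.ccx(3, 4, 1)
qc.x(4); qc.ccx(3, 4, 0); qc.x(4)
qc.x(3); qc.ccx(3, 4, 2); qc.x(3)

# The data qubits are restored to alpha|000> + beta|111>; the ancillas record the error.
print(Statevector.from_instruction(qc).probabilities_dict())
```

Running the statevector shows all probability mass on data states 000 and 111 with weights α² and β², confirming that the injected error has been removed.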
The quantum circuit representation of TEC is illustrated in Fig. 1. Here, X_L and Z_L are the logical Bell measurements 65 employed during teleportation; they are used to detect the errors that affect the encoded information. The X-basis (Z-basis) measurements determine whether a phase-flip (bit-flip) error has occurred. During teleportation, logical CNOT gates 66 are performed between the incoming message and the first logical qubit of the Bell state. The error correction is applied to the second logical qubit of the Bell state, since the information is transferred to that logical qubit during teleportation. Classically controlled operations, shown as the X and Z boxes in Fig. 1, are implemented during teleportation.
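To make the role of the classically controlled X and Z boxes concrete, here is a minimal, unencoded single-qubit teleportation sketch in Qiskit; for simplicity the feed-forward corrections are written in their deferred-measurement form (controlled-X and controlled-Z), which is equivalent to measuring first and correcting classically. This is an illustrative reduction, not the eleven-qubit logical circuit of Fig. 2.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, partial_trace

qc = QuantumCircuit(3)                # q0: message, q1-q2: shared Bell pair
qc.ry(2 * np.arctan2(0.38, 0.92), 0)  # message state ~ 0.92|0> + 0.38|1>
qc.h(1)
qc.cx(1, 2)                           # Bell pair (|00> + |11>)/sqrt(2) on q1, q2
qc.cx(0, 1)                           # Bell-basis change on the message and q1
qc.h(0)
qc.cx(1, 2)                           # the "X box": X on q2 controlled by q1
qc.cz(0, 2)                           # the "Z box": Z on q2 controlled by q0

# Tracing out q0 and q1 leaves q2 exactly in the message state.
rho_out = partial_trace(Statevector.from_instruction(qc), [0, 1])
print(np.round(rho_out.data, 3))      # density matrix of the (normalised) message state
```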
In the proposed TEC circuit, a logical qubit is encoded in three physical qubits. Our QECC can correct one bit-flip error on an arbitrary qubit. It requires a total of eleven qubits. The quantum circuit for TEC of operational errors is illustrated in Fig. 2. The detailed step-by-step procedure is given as follows.
1. We encode a logical qubit, α|0⟩_L + β|1⟩_L, as α|000⟩ + β|111⟩, where α = 0.92 and β = 0.38, as depicted in block A of Fig. 2.
2. The logical Bell state is encoded as (|0⟩_L|0⟩_L + |1⟩_L|1⟩_L)/√2 = (|000000⟩ + |111111⟩)/√2; the quantum circuit for this encoding is illustrated in block B of Fig. 2.

Implementation of TEC for erasure errors

In the proposed TEC circuit for erasure error, we encode the message based on redundancy and parity encoding 74,75. Our QECC can correct one erasure error on an arbitrary qubit. The total number of qubits required is sixteen. The quantum circuit for TEC of erasure error is given in Fig. 3. The step-by-step procedure for erasure error correction is provided below.
1. We encode a logical qubit, α|0⟩_L + β|1⟩_L, where α = 0.92 and β = 0.38, in the four-qubit redundancy and parity code; the circuit for the encoding is demonstrated in block A of Fig. 3.
2. The logical Bell state is encoded in the same code, as illustrated in block B of Fig. 3.
Quantum nondemolition (QND) measurements are performed using CNOT gates conditioned on the incoming message and targeted on ancilla qubits to obtain the location information of the erased qubits: when a qubit is erased, the conditional NOT is not applied between the corresponding information qubit and the ancilla qubit 28, which identifies the erased position. This circuit is demonstrated in block C of Fig. 3.
6. In our circuit, we assume that an erasure channel acts between blocks A and C, illustrated by a separate block in Fig. 3. The erasure of a qubit is simulated by removing the gates that act on the qubit to be erased.
7. The X and Z logical operators of our code are X_L = IZIZ, ZIZI and Z_L = IIXX, XXII, respectively. We recover the encoded message by reliably measuring the logical operators of the code. For example, if the first or third qubit is lost, we measure the logical operator X_L = IZIZ; if the second or fourth qubit is lost, we measure X_L = ZIZI.
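To make the operator look-up in the final step explicit, a tiny Python helper (ours, not from the paper) can encode the rule stated above: the chosen representative of X_L must act trivially on the lost qubit.

```python
# Which representative of X_L to measure, given the known location (1-4) of the
# erased qubit; operator strings follow the convention qubit 1 ... qubit 4.
def x_logical_to_measure(lost_qubit: int) -> str:
    if lost_qubit in (1, 3):
        return "IZIZ"   # acts only on qubits 2 and 4
    if lost_qubit in (2, 4):
        return "ZIZI"   # acts only on qubits 1 and 3
    raise ValueError("qubit index must be between 1 and 4")

print(x_logical_to_measure(1))  # -> IZIZ
```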
Discussion
We have successfully implemented teleportation-based error correction (TEC) for operational errors and erasure errors on IBM's 14-qubit quantum computer. We have also quantified the fidelity of the output states generated for TEC of erasure errors using quantum state tomography.
The TEC protocol for operational errors can be used to correct a single bit-flip error acting on an arbitrary qubit of the encoded state, and the TEC protocol for erasure errors can be used to correct an erasure error acting on an arbitrary qubit of the encoded state. Parity and redundancy encoding is used to encode the information in the case of erasure errors. Erasure errors affecting the transmission of information over long distances can be corrected using the TEC protocol that we proposed. TEC-based protocols in general can be implemented in any system for the detection and correction of operational and erasure errors affecting the system. In future, this work can be extended to quantify how gate errors and memory decoherence affect the error correction for erasure errors.
Ranking Journals Using Altmetrics
The rank of a journal based on simple citation information is a popular measure. The simplicity and availability of rankings such as Impact Factor, Eigenfactor and SciMago Journal Rank, based on trusted commercial sources, ensures their widespread use for many important tasks despite the well-known limitations of such rankings. In this paper we look at an alternative approach based on information about papers from social and mainstream media sources. Our data comes from altmetric.com, who identify mentions of individual academic papers in sources such as Twitter, Facebook, blogs and news outlets. We consider several different methods to produce a ranking of journals from such data. We show that most (but not all) schemes produce results which are roughly similar, suggesting a basic consistency between social media based approaches and traditional citation based methods. Most ranking schemes applied to one data set produce relatively little variation, and we suggest this provides a measure of the uncertainty in any journal rating. The differences we find between data sources also show they are capturing different aspects of journal impact. We conclude that a small number of such ratings will provide the best information on journal impact.
Traditional citation-based metrics are difficult to calculate since they are based on bibliometric journal databases, such as Thomson Reuters' Web of Science. Conversely, altmetrics are conglomerates of disparate sources of references to research output derived from non-traditional sources, primarily modern electronic sources characterised by fast response times (see Bornmann, 2014 for a recent overview). The lack of any systematic peer review is another characteristic of most altmetric data. The open and electronic nature of much altmetric data offers the prospect of alternative paper and journal metrics which may be more accessible to stakeholders. The rapid response of such data to innovations suggests such metrics might offer improvements over metrics based on slower traditional sources. This paper considers a number of approaches to the aggregation of altmetric data in order to create a robust journal-level metric that complements the existing citation-based metrics already in use across the academic community. The aim is not to create a contender for a single metric to quantify journal output but instead to create a useful measure that gives "the user" a sense of the non-citation attention that a journal attracts, in the same way that Journal Impact Factor, Eigenfactor and other related metrics give this sense for citation attention.

Figure 1. The raw data illustrated here contains fifteen "mentions" (solid lines) by five "authors" (hexagons A1 to A5) of seven papers (squares P1 to P7). We also know the journal (circles) which published a paper (dashed lines).
Data Sources
In this paper we use the 2013 IF (Impact Factor) and EF (Eigenfactor) as examples of traditional sources of journal ratings. Our altmetric data comes from 20 months of data from altmetric.com, a commercial company. For each mention of a paper we had the journal in which it was published, the source (Twitter, Facebook, etc.) and the account (here termed an 'author'), as shown in Figure 1. In our case, a 'paper' has to be an article coming from a known journal. A single 'author' for us is a single account (e.g. one Twitter account) or a single source (a news outlet such as a newspaper). In some cases several different authors may be responsible for one site, or one author could provide information to many different sites or accounts (a Twitter account, a Facebook account, a blog, etc.), but in our data such an author appears as many distinct authors.
Methods
The simplest type of journal altmetric is one based on basic counts, where each mention of a paper in a journal adds one to that journal's count. We collected counts for social media 'sbc', non-social media 'nsbc' (e.g. downloads), and combined scores 'bc' (for blind count, i.e. with no weighting for different sources). We also obtained the current journal rating produced by altmetric.com (denoted 'ca'), which is a weighted count rating in which different sources are given different weights (blogs and news sources get the highest weighting).
Network Definitions
A criticism of simple count based methods, such as Impact Factor or our altmetric counts discussed above, is that some citations or some altmetric authors are more important than others. Eigenfactor is an illustration of a response to these criticisms in the realm of traditional data (Bergstrom 2007), as it uses a network based view to arrive at a PageRank style measure. We will also turn to a network-based view in order to look at a wide range of measures which probe the relationships between journals on a much larger scale. There are many possible network representations of our data. In this paper we will focus only on networks in which the nodes represent journals. The central idea in our construction of the relationship between two journals is that we only want to consider activity from authors who mention both journals, because only these authors are making an implicit comparison between journals. The activity of each author is used to define their own "field of interest" in a self-consistent manner, and so the activity of authors is used to make comparisons between journals in the same field as defined by each author's interests. This ensures that at a fundamental level we avoid the much discussed problem of making comparisons between papers or journals from different fields. An author only interested in medical issues will only contribute to the evaluation of Nature, Science and so forth in terms of their interest in these multidisciplinary journals relative to Cell or other specialised journals. A useful analogy here is that each journal is a team, and an author who mentions articles published in two journals represents one game between these journals, our pairwise comparison. The score in each game is the number of mentions, so in comparing two journals j and l, the score for journal j from the game represented by author a is recorded as the entry J_ja in a rectangular matrix. In Figure 1, the game between J1 and J2 represented by author A2 has the result 2-1, a 'win' for journal J1 over journal J2, suggesting that we should rate journal J1 more highly than journal J2 given the activity of this one author. We shall consider three different ways of quantifying the journal relationships, the network edges. Our first approach gives us an adjacency matrix S, where the entry S_jl gives the weight of the edge from journal j to journal l:

S_jl = (1/N_jl) Σ'_a J_ja .

Here j and l represent different journals and a is one author; J_ja is equal to the number of papers mentioned by author a which were published in journal j; Σ'_a runs over authors who mention papers in both journals j and l; and N_jl is the number of such authors. The expression for S_jl counts the number of times papers published in journal j are mentioned by authors who also mention papers in journal l, with the total normalised by the number of such authors. Note that this defines a sparse, weighted and directed network. In our conventions, if journal j is better than journal l we will have S_jl > S_lj. Our second definition gives us an adjacency matrix P, where

P_jl = (1/N_jl) Σ'_a Θ(J_ja − J_la) ,

and Θ(x) = 1 if x > 0, otherwise this function gives 0. This definition counts how many authors mention more papers in journal j than they do papers in journal l, normalising again by the number of authors who are able to make this pairwise comparison. Again P_jl > P_lj if journal j is better than journal l. Our third definition also counts how many authors mention more papers in journal j than in journal l, but when this comparison is balanced it gives an equal weighting to both sides:

Q_jl = (1/N_jl) Σ'_a [ Θ(J_ja − J_la) + (1/2) δ(J_ja, J_la) ] ,

where δ(x, y) = 1 if x = y and 0 otherwise.
This definition has the useful property that Q_jl + Q_lj = 1 (not generally true for the matrix P).
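The three constructions are easy to state in code. The following NumPy sketch builds S, P and Q from a toy mentions matrix J (rows: journals, columns: authors); it follows the prose definitions above, restricting each pairwise comparison to authors who mention both journals, which is one plausible reading of the normalisation.

```python
import numpy as np

# Toy data: J[j, a] = number of papers in journal j mentioned by author a.
J = np.array([[2, 0, 1],
              [1, 3, 1],
              [0, 1, 1]])
n_journals = J.shape[0]

S = np.zeros((n_journals, n_journals))
P = np.zeros_like(S)
Q = np.zeros_like(S)
for j in range(n_journals):
    for l in range(n_journals):
        if j == l:
            continue
        both = (J[j] > 0) & (J[l] > 0)          # authors able to compare j and l
        if not both.any():
            continue
        n_both = both.sum()
        S[j, l] = J[j, both].sum() / n_both     # mention counts, normalised
        wins = (J[j, both] > J[l, both]).sum()
        ties = (J[j, both] == J[l, both]).sum()
        P[j, l] = wins / n_both                 # strict 'wins' only
        Q[j, l] = (wins + 0.5 * ties) / n_both  # ties split equally: Q_jl + Q_lj = 1

print(S, P, Q, sep="\n\n")
```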
Network Measures
Once we have our network with journals as nodes, we need to find ways to use this structure to define which nodes are the most important. Measures which quantify the importance of a node are known as centrality measures in social network analysis. Unfortunately, many standard measures do not take into account the weights or directions of edges, both of which carry crucial information in our case. We used two well-known network centrality measures to illustrate our approach: PageRank and HITS (e.g. see Langville & Meyer 2012). Both may be cast as eigenvector problems, and fast algorithms for large networks are readily available. We apply these two methods to all three networks, giving six different ratings; e.g. 'qpr' indicates a PageRank rating derived from a Q matrix, while 'ph' indicates a HITS rating derived using a P matrix. We also tried a different type of measure known as the Points Spread Rating (denoted 'psr') (pp. 117-120, Langville & Meyer 2012), where the rating r_j for journal j is

r_j = (1/n_J) Σ_l (S_jl − S_lj)

(similarly for the P and Q matrices), where n_J is the number of journals. This expression ensures that the differences (r_j − r_l) in the ratings of any two journals j and l are as close as possible to the actual differences in the average number of mentions of papers.
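As a sketch of how these ratings can be computed in practice: PageRank and HITS via networkx on the weighted directed matrix, and the Points Spread Rating straight from its defining formula. The small S below is an illustrative stand-in for a matrix built as in the previous sketch, not data from the study.

```python
import networkx as nx
import numpy as np

S = np.array([[0.0, 1.5, 0.5],
              [1.0, 0.0, 1.0],
              [0.5, 0.5, 0.0]])      # illustrative journal-journal matrix

G = nx.from_numpy_array(S, create_using=nx.DiGraph)  # weighted, directed network
pagerank = nx.pagerank(G, weight="weight")           # PageRank-style rating
hubs, authorities = nx.hits(G)                       # HITS-style rating

n = S.shape[0]
psr = (S.sum(axis=1) - S.sum(axis=0)) / n            # r_j = sum_l (S_jl - S_lj) / n

print(pagerank, authorities, psr, sep="\n")
```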
Comparing Ratings
Once we have obtained different ratings, the final task is to make a comparison. The simplest approach is to make a qualitative comparison of the top ranked journals in each case. For a more quantitative approach we used standard methods of multivariate statistics. First we found a correlation matrix whose entries express the similarity of two rating methods: the Pearson correlation matrix based on the numerical values of the ratings obtained, Spearman's matrix based on the ranking of journals, and finally Kendall's tau. These were analysed using principal component analysis or hierarchical clustering methods.
Findings
In terms of the altmetric data we found typical fat-tailed distributions, both for the number of mentions of a paper from different sources and for the number of mentions put out by a single author. Some sources, such as Twitter, are significantly larger than others. When comparing different journal rating schemes, some results were found only with the Spearman and Kendall tau correlation measures (which are based on the ranks of journals); the Pearson measure (based on actual rating values) gave slightly different results in some cases. However, in most cases there was good agreement. Some typical results are shown in Figure 2, and the numbers for ranking schemes in the following text refer to the labels in Figure 2. The variation between different rating schemes for the same altmetric data source is relatively small, roughly on the same scale as the difference we find between IF and EF. The four different methods shown for ratings based on Facebook mentions (6, 12, 16, 19) are a typical example. Clearly our Points Spread Rating scheme (psr; 21, 22, 23) and our simple counts of non-social media mentions (nsbc; 6) produce outliers. Some sources, such as Facebook and News, were also noticeably different from IF and EF, but the difference was much smaller than that found with the psr rating. One source which gave ratings well correlated with IF and EF was blogs (8, 11, 15, 18). Likewise, most of our simple count based ratings were just as close to IF (3) or EF (5) as these two rating schemes were to each other. This includes our unweighted count of all mentions (bc, 1), the number of times papers are mentioned (pc, 7), counts of just social media mentions (sbc, 14), and in particular the more sophisticated weighted journal ranking produced by altmetric.com (ca, 2). Most of our work focused on statistics for the whole collection. A look at the top journals (see Table 1) confirmed that at an individual level our new altmetric network ratings were giving sensible results, but with variations which indicate the uncertainty in such rankings.
Figure 2. A comparison of some of the different ranking schemes using a Spearman correlation matrix. On the left, a dendrogram; on the right, a scatter plot using the first two principal components of PCA. For clarity, only a limited subset of our ratings was used in these plots.
Discussion
Given the differences between ranking-based comparisons (Spearman and Kendall tau) and results based on Pearson correlation matrices, we suggest that ratings are dominated by the measurement of the few journals which have most of the mentions (fat tails). This is one reason we favour Spearman correlation matrices in Figure 2, and we would suggest this makes sense in most journal ranking contexts. Our Points Spread Rating scheme (psr; 21, 22, 23) seems to reflect very different patterns in the data from those found using other approaches. Given that the other approaches include Impact Factor, widely accepted as a measure of journal attention, we think it is hard to see a role for PSR in ranking journals. Likewise, the simple blind count of non-social media mentions (nsbc; 6) does not appear to be useful. The remaining altmetric sources and rating methods do show enough similarity to suggest that they are all acceptable measures of journal importance. At the same time there are some interesting differences, indicating that our altmetric based schemes are capturing different features of the impact of journals. At the very least this diversity indicates the level of uncertainty in rating schemes. Two possible reasons for the close correlation of blogs and IF are as follows. Perhaps papers in high IF journals are of intrinsic interest to blog writers. Alternatively, blog authors may read a limited number of journals, but these tend to be those with high IF. Probably both factors are important, each reinforcing the other to produce the strong correlation we find. Another interesting feature is that most of our simple count based ratings, which are not normalised by the number of articles per journal, are also well correlated with IF (3), which does use normalised counts. This can be explained if there is a correlation between the number of papers in a journal and its impact, something we can see in the count of the number of papers (pc, 7). We will be looking at normalised altmetric counts in the future, but it appears normalisation may not be essential. In particular, we note that the altmetric.com journal rating (ca, 2) is well correlated and so provides a good handle on the impact of journals. The fact that we tried many different rating methods and that (with the exception of psr based measures) they showed variations on scales no bigger than those found between IF and EF suggests that no one method is optimal in any sense. However, we can use such a suite of metrics to get a handle on the uncertainty associated with any measure. This would be of great utility for users, and a contrast to the three-decimal-point 'accuracy' associated with IF results.
Conclusions
We have shown how to use altmetric data to provide a reasonable journal ranking. Most types of altmetric data appear to give useful information in the sense that the correlation with IF is acceptable. At the same time, altmetric data can be sufficiently different that it might reflect different types of impact. Our results suggest that different rating methods can provide a measure of the uncertainty of any journal ranking. Confirming these patterns over longer periods and producing a better understanding of the social reasons for the patterns we have found are future directions for our work. It would also be interesting to compare our results with journal attention measures derived from journal usage patterns (see, for example, Bollen et al. 2009), an aspect not included in our data.
Different Lineage of Porcine Deltacoronavirus in Thailand, Vietnam and Lao PDR in 2015
Summary. Porcine deltacoronavirus (PDCoV) was detected by RT-PCR in 12 of 97 (12.4%) intestinal samples collected during 2015 from piglets with diarrhoea in Thailand, Vietnam and Lao PDR. Spike, membrane and nucleocapsid genes were characterized, and phylogenetic analyses demonstrated that PDCoV isolates from Thailand and Lao PDR form a novel cluster, separated from US and China isolates, but relatively more closely related to China PDCoV than to US isolates. Vietnam PDCoVs, however, grouped together with US PDCoV. The analyses of amino acid changes suggested that they are from different lineages.
Introduction
Porcine deltacoronavirus (PDCoV) is a novel pathogen in the family Coronaviridae, genus Deltacoronavirus, causing enteric disease characterized by watery diarrhoea similar to porcine epidemic diarrhoea (PED) and transmissible gastroenteritis (TGE) (Jung et al., 2015). PDCoV was first discovered in Hong Kong in 2012, during a study to identify novel coronaviruses (Woo et al., 2012). In February 2014, PDCoV was first detected and reported in Ohio, United States, in association with PED cases; a retrospective investigation demonstrated the presence of PDCoV in the USA as early as 2013 (Sinha et al., 2015). Since then, PDCoV has been detected in most pig-producing states of the USA (Marthaler et al., 2014; Wang et al., 2014; Homwong et al., 2016). Recently, PDCoV was identified for the first time in South Korea and China (Lee and Lee, 2014; Song et al., 2015), and the identification of PDCoV in China was dated back to 2004 (Dong et al., 2015). The increasing identification of PDCoV raises concerns regarding the epidemiology and pathogenicity of this virus. We herein report the identification and molecular characterization of PDCoV from piglets with clinical diarrhoea on swine farms in South-East Asian countries (SEAC), including Thailand, Vietnam, Lao People's Democratic Republic (Lao PDR) and the Philippines.
Materials and Methods
Samples and the detection method

Ninety-seven intestinal samples were collected during 2015 from clinically ill piglets from commercial pig farms with diarrhoea outbreaks in Thailand, Vietnam, Lao PDR and the Philippines. Of the 97 samples, 68, 10, 6 and 13 were from Thailand, Vietnam, Lao PDR and the Philippines, respectively (Table 1). The sampling locations are shown in Fig. 1.
Intestinal samples from Thailand were collected from 24 farms in the western (Ratchaburi and Nakhon Pathom), eastern (Chonburi and Chachoengsao), middle (Saraburi and Lopburi) and north-eastern (Buriram and Nakhon Ratchasima) regions, representing the four major swine-producing areas of Thailand. Samples from Vietnam were from seven pig farms in Dong Nai, Baria, Long An and Binh Duong provinces in the southern region. Samples from Lao PDR were from two pig farms in Khammouane, a province in the northern region sharing a border with Nakhon Phanom, a province in the north-eastern region of Thailand. Samples from the Philippines were from two farms on Luzon Island.
Nucleotide sequencing
Total RNA was extracted from intestinal samples using the NucleoSpin® RNA Virus kit (Macherey-Nagel Inc., Bethlehem, PA, USA) in accordance with the manufacturer's instructions. cDNA was synthesized from the extracted RNA using random hexamers with the commercial M-MuLV Reverse Transcriptase kit (New England BioLabs Inc., Ipswich, MA, USA). To screen for the presence of PDCoV, PCR amplification was performed on cDNA using specific primers for the membrane (M) and nucleocapsid (N) genes of PDCoV as previously described (Wang et al., 2014). The detection of other porcine coronaviruses, including PEDV and TGEV, was performed following previously described protocols using specific primers for the spike (S) gene of PEDV (Park et al., 2007) and the N gene of TGEV (Kim et al., 2000).
Positive PDCoV samples were selected and further characterized for complete S, M and N genes using specific primers as described in Table A1. The specific PCR bands were purified by Nucleospin Gel and PCR Clean-up kit (Macherey-Nagel Inc.). The purified PCR products were sequenced. Sequencing was performed by First BASE Laboratories Inc. (Selangor, Malaysia) using an ABI Prism 3730XL DNA sequencer (Applied Biosystems Inc., Carlsbad, CA, USA).
Genetic and phylogenetic analyses
Phylogenetic analyses of the S, M and N genes of the PDCoV isolates were constructed separately, together with the sequences of 23 other PDCoV isolates available in GenBank (Table A2). Bayesian maximum clade credibility trees were analysed using the Bayesian Markov chain Monte Carlo (MCMC) method in BEAST 1.8.3 (Drummond and Rambaut, 2007) with the Yang 96 model (Yang, 1996) provided in BEAST. The tree prior was set as coalescent: constant size (Kingman, 1982). The MCMC chains were run for at least 300 million generations and sampled every 10 000 states. Over 30 000 generated trees were annotated using TreeAnnotator 1.8.3 with 10% burn-in, maximum clade credibility tree and median node heights. Tree images were generated using FigTree 1.4.2 (Rambaut, 2014) with decreasing node order.
Results and Discussions
Porcine epidemic diarrhoea (PED) has been endemic in SEAC since 2007, with continued sporadic outbreaks of lower clinical severity compared to the initial pandemic outbreak (Temeeyasen et al., 2014; Vui et al., 2014). Since the emergence of PED, several pig farms in SEAC have experienced sporadic outbreaks of diarrhoea in piglets more than once a year. The causative agent was considered to be a variant of PEDV; the role of PDCoV in these outbreaks, although suspected, was not investigated at that time. PDCoV was suspected when recurrences of clinical enteric disease similar to PED occurred every two months in some herds, which is too frequent compared to the six-month period of protection reported earlier (Goede et al., 2015). PDCoV has since been investigated in addition to the detection of PEDV.
In the study, 97 intestinal samples were submitted to the laboratory in 2015 for PEDV diagnosis and therefore were tested for three viral pathogens including PEDV, TGEV and PDCoV. Of 97 intestinal samples tested, 12 samples (12.4%) were positive for PDCoV, 82 samples (84.5%) were positive for PEDV, and none were positive for TGEV (Table 1). Samples positive for PDCoV were also positive for PEDV. Three, two and one farms in Thailand, Vietnam and Lao PDR, respectively, were positive with both PED and PDCoV. Of 12 PDCoV-positive samples, five, two and five samples were from farms in Thailand, Vietnam and Lao PDR, respectively. Only PEDVs were present in samples from Philippines. Interestingly, PDCoV was detected in all four swine-producing areas in Thailand. The locations and numbers of farms in each country were presented in Table 1.
Six samples (three from Thailand, two from Vietnam and one from Lao PDR) were selected for complete sequencing of the S, M and N genes. Sequences have been deposited in GenBank under accession nos. KU870479-KU870484. The genetic analyses demonstrated that the S, M and N genes of three Thai PDCoV (P20_15_NT1_1215, P23_15_TT_1215 and P24_15_DT1_1215), one Lao PDCoV (P1_16_BTL_0116) and two Vietnam PDCoV (P29_15_VN_1_1215 and P30_15_VN_1215) isolates are 3477-3480, 651 and 1026 nucleotides (nt) in length, encoding 1159-1160, 127 and 342 amino acids, respectively.
To demonstrate the genetic relationship between the Thai, Lao and Vietnam isolates and the previously reported China and US PDCoV isolates, phylogenetic analyses of the S, M and N genes were constructed separately; for all three genes, the results demonstrated that PDCoVs from Thailand and Lao PDR form their own cluster, separated from China and US PDCoV (Fig. 2). Based on the S and M genes, Vietnam PDCoV isolates group together with US PDCoV, separated from Thai and Lao PDCoV, and are more closely related to the US isolates than to China PDCoV. The results of the phylogenetic analyses of the S, M and N genes suggested that PDCoVs from Thailand and Lao PDR are from a different lineage compared to Vietnam PDCoV (Fig. 2).
The percentages of nucleotide and amino acid similarity between the Thai, Lao and Vietnam isolates and the previously reported China and US PDCoV isolates are displayed in Table 2. Based on the S, M and N genes, the three Thai PDCoV isolates were more highly homologous to Lao PDCoV than to Vietnam PDCoV, with nucleotide and amino acid similarities of 99.8% and 100%, respectively. Thai and Lao PDCoV isolates were relatively more closely related to China PDCoV than to US isolates. Based on the S gene, Thai and Lao isolates shared nucleotide and amino acid similarities of 95.5-96.8% and 98.5-99.1%, respectively, with China PDCoV, and of 96.0-96.4% and 98.2-99.1%, respectively, with US PDCoV. Similar to the S gene results, the M and N genes of Thai and Lao PDCoV shared nucleotide and amino acid similarities of 98.0-98.7% and 99.5%, and 97.8-98.7% and 98.5-99.1%, respectively, with China PDCoV, and of 97.8-98.3% and 99.5%, and 97.6-98.1% and 98.2-99.1%, respectively, with US PDCoV isolates. In contrast, Vietnam PDCoV isolates were more homologous to US PDCoV isolates than to China isolates. The amino acid substitutions of each gene between PDCoV isolates from each country are shown in Fig. A1a-c. Based on the S, M and N genes, Thai and Lao PDCoV isolates had 23-26, 1 and 4-5 amino acid substitutions, respectively, compared to China PDCoV, and 25-28, 1 and 4-5 substitutions, respectively, compared to US PDCoV. In contrast, Vietnam PDCoV had only 2-4 and 1-2 amino acid substitutions compared to both China and US PDCoV isolates in the S and N genes, respectively, while no amino acid substitution was observed in the M gene.
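The percent similarities reported in Table 2 are pairwise identity computations. The following minimal Python sketch shows the calculation for two pre-aligned sequences; the alignment step itself, and the actual PDCoV sequences, are not reproduced here, and the toy sequences are illustrative only.

```python
# Percent identity between two aligned sequences of equal length,
# ignoring positions where either sequence carries a gap character.
def percent_identity(seq_a: str, seq_b: str) -> float:
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

print(percent_identity("ATGGCT", "ATGACT"))   # -> 83.33...
```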
In conclusion, this study reported the identification of PDCoV in SEAC, including Thailand, Lao PDR, Vietnam and the Philippines. The PDCoVs isolated from Thailand and Lao PDR form their own cluster, separated from China and US PDCoV, but are relatively more closely related to the isolates from China than to US PDCoV. In contrast, the PDCoVs isolated from Vietnam were more closely related to the isolates from the USA. The results of the study suggested that the viruses from these three SEAC might originate from different ancestors. The identification of PDCoV in SEAC suggests that the virus may have been in this region for some time, but had not been detected due to the greater focus on PEDV variants. Although PDCoV was not detected in the Philippines, this does not mean the virus is absent there, given the limited number of samples from this area. The origin and source of introduction into Thailand, Lao PDR and Vietnam remain open questions. The viruses could have been in this region for some time and continuously evolved until separating into different lineages, or they could have been introduced from different ancestors or sources. Further retrospective investigations are urgently needed to elucidate their source and evolution. In addition, further analysis and molecular epidemiology based on complete genome sequences, and pathogenicity studies of PEDV and PDCoV co-infection, are urgently needed.
Adenocarcinoma of the oesophagus: is it gastric cancer?
Michael Quante, Timothy C Wang, Adam J Bass

The incidence of gastro-oesophageal junction (GEJ) cancer, comprising both oesophageal (EAC) and junctional gastric adenocarcinomas, has increased dramatically in Western countries, correlating with a decrease in non-cardia gastric cancer (GC). A better understanding of the origin and pathogenesis of these cancers may allow for improved cancer prevention, detection and treatment. GEJ adenocarcinomas include tumours classified in the past as either oesophageal or gastric in origin. Adenocarcinoma located just above the GEJ (ie, EAC) was for many years viewed as a distinct entity from GC. This view followed the strong association of EAC with Barrett's Oesophagus (BE), a metaplastic condition of the lower oesophagus which was viewed as a transdifferentiation of normal squamous epithelium to an intestinalised mucosa in the setting of gastric acid reflux. This assumption of a squamous origin of EAC led to (1) an extensive programme of surveillance of BE patients, (2) the inclusion of oesophageal squamous (ESCC) and adenocarcinoma (EAC) together in some clinical trials and (3) a clear distinction of EAC from GC. We propose here to rethink this approach based on novel insights on the origins and pathogenesis of GEJ cancer.
New data supporting a gastric origin of EAC/BE have emerged in recent years, both from deep analysis of human samples and from experimental results in human-derived cells and mouse models. The hypothesis that BE originates in the gastric cardia was proposed in 2012, based on findings in lineage tracing studies in a BE (L2-IL-1b) mouse model, which recapitulates the histologic progression from oesophagitis to dysplasia. 1 Lineage tracing allows for the genetic definition and tracking of stem cells and their progeny and can help determine the cellular origin of neoplasms. Subsequently, The Cancer Genome Atlas Research Network (TCGA) in 2017 demonstrated, in comprehensive molecular genomic profiling of both oesophageal and gastric cancers, the distinct features of the two histological subtypes of oesophageal cancer, EAC and ESCC, with ESCC showing much greater similarity to head and neck SCC. Furthermore, joint analysis of EAC and GC could not identify features clearly demarcating EAC from the chromosomal instability (CIN) class of GC, suggesting a shared origin. 2 By contrast, variants of GC that are more common in the non-cardia regions of the stomach, including tumours with microsatellite instability, Epstein-Barr virus infection or the diffuse histologic type, are less common in GCs localised to the cardia and in EAC. Genetic results from TCGA are consistent with recent epigenetic studies of BE relative to normal gastric and oesophageal tissues, which also demonstrated evidence for a gastric origin of BE. 3 Furthermore, a recent study utilising comprehensive single-cell transcriptomic profiling, in silico lineage tracing and mutation analyses of human tissues spanning the proximal stomach to the squamous oesophagus, from healthy and diseased donors, showed that BE originates from gastric cardia progenitors through distinct transcriptional programmes. 4 This latter study also experimentally determined the capacity of organoid cultures of human gastric tissue to differentiate into BE. Indeed, this emerging view of BE/EAC as originating from gastric tissue is consistent with the key pathologic findings that BE always begins at the very distal oesophagus, contiguous with the gastric cardia, and that BE comprises a mosaic of gastric and intestinal cell types which is largely indistinguishable from intestinal metaplasia in the stomach. 5 This new thinking regarding the relationship of EAC echoes the original descriptions of metaplasia of the distal oesophagus by Norman Barrett, who assumed at the time that BE resulted from proximal migration of stomach epithelium. 6 So, more than 70 years following the description of BE, it seems timely to recognise BE's gastric origin, to view EAC and GEJ cancer as an extension of GC, or 'gastro-EAC' (GEAC), reflecting their common aetiology (inflammation) and common genomic features, and to explore the clinical consequences, both in terms of cancer screening and cancer therapy. The initial recognition of BE metaplasia of the distal oesophagus was followed in later decades by marked increases in EAC rates in the Western world. However, whereas metaplasia at the GEJ appears to be promoted by inflammation-induced injury from refluxed gastric acid and bile due to diet or obesity, in the distal stomach this is mostly induced by H. pylori, an established precipitant of gastric intestinal metaplasia. Both stimuli lead to stem cell expansion and thus to metaplasia or a risk of progression to cancer.
To what extent the predilection for non-CIN forms of GC in regions distal to the GEJ is due to distinct effects of Helicobacter pylori and acid reflux, or to distinctions in the intrinsic stem cell biology or microenvironment along the anatomic gradient of the stomach, remains unclear. Nevertheless, H. pylori infection has been declining while GERD has been increasing, leading to the changed distribution and more proximal location of GEAC tumours. Prior to the recent studies, when the dominant thinking was that BE emerged from differentiated squamous epithelial cells, and while rates of EAC (and junctional or cardia GC) were steadily rising, the primary focus was instead on understanding BE metaplasia, with prevention efforts limited to the detection and risk stratification of BE.
In light of this new view of GEACs (figure 1), including EAC, proximal and non-cardia GC, as a singular group of diseases wherein the most common form is a group of CIN GEAs that cross classic anatomic boundaries, screening and early diagnosis should also be reconsidered. There is a disconnect between a growing clinical problem, which includes cancers in the proximal stomach, cancers emerging directly from the GEJ and cancers emerging in the setting of BE in the distal oesophagus, and a surveillance system that is focused solely on detecting and evaluating BE. In patients with BE, the risk of progression to cancer is low, estimated at 0.1%-0.3% per year. 7 Nevertheless, BE patients are frequently enrolled in endoscopic surveillance programmes aiming to detect oesophageal dysplasia or early stage EAC, and while such programmes may reduce cancer mortality for those rare tumours emerging in the setting of BE, the vast majority of patients who will develop EAC or GEJ cancer are never first diagnosed with BE. 8 Rather, given the likely origins of GEAC from undifferentiated gastric cells, we need to look more closely at the response of cardia progenitors to chronic inflammatory conditions (such as GERD and H. pylori gastritis), and at how they might progress along the path to either histopathological types of differentiated metaplasia or to dysplasia and cancer.
Accumulating evidence suggests that BE and EAC pathogenesis involves the aberrant differentiation of stem or progenitor cells at the GEJ. The proximal expansion of cardia progenitors most likely occurs as a last resort, due to deficiencies in squamous healing and the greater resistance of columnar epithelial progenitors to acid/bile injury. The high mutation rate and clonal complexity of BE are evidence of the ongoing evolutionary process that begins long before the development of a detectable malignancy or even metaplasia. 9 GERD induces chronic inflammation that fuels genomic evolution, selecting for clones harbouring cancer-associated mutations in a distinct sequence 10 11 or genomic instability, 12 leading to oncogene activation, 13 which increases the chance of cancer development with or without metaplastic development. The stem cell niche represents a clonal mosaic, where genetically distinct clones compete, leading to a dynamic equilibrium of subclone expansion and retraction. In this setting, visible metaplasia may be simply a biomarker of epigenetic reprogramming of epithelial cells at the GEJ, as opposed to a necessary precursor lesion during a process of reprogramming of cardia progenitor cells toward a distinct precancerous state. Molecular alterations that promote these changes, such as CIN, an established hallmark of cancer, may in combination with other risk factors help guide future surveillance and detection strategies.
Finally, the understanding that EAC and GC originate from similar gastric stem or progenitor cell populations has important implications for medical treatment. The distinct genetic and epigenetic profiles of GEAC (EAC and GC) in comparison to ESCC strongly argue against any combining of EAC and ESCC patients in clinical trials, as has occurred commonly in the past and in some ongoing phase III drug approval studies. EAC and ESCC are distinct in their lineage, epigenetics and key molecular drivers, thus necessitating separate clinical trials. The FDA has already allowed the grouping of GEJ EAC and GC as a common entity in recent immunotherapy approvals, but has still approved combined EAC and ESCC trials. Moreover, although there are distinct molecular subtypes within EAC and GC, 14 15 as there are within colorectal adenocarcinoma, these cancers should in the future be viewed as the single entity GEAC, with non-surgical therapeutic approaches guided less by location and more by their distinct molecular profiles and associated histopathological phenotypes (intestinal vs diffuse type).
Moving forward, this new view has the potential to accelerate our understanding of this disease and to enhance our tools for prevention, screening and therapy.
Contributors: All authors contributed equally to this comment.
Figure 1
Illustration of a novel view on the origin and definition of gastro-oesophageal adenocarcinoma (GEAC), grouping gastro-oesophageal junction (GEJ) oesophageal adenocarcinoma (EAC) and non-junctional gastric cancer (GC) as one common entity, owing to their common origin from gastric progenitor cells (purple), which expand under distinct causes of inflammation (reflux, bile, Helicobacter pylori), giving rise to metaplasia (with or without goblet cells) or dysplasia at the junctional area or in the non-junctional stomach.
Environmental Impacts Associated with Different Stages Spanning from Harvesting to Industrialization of Pineapple through Life Cycle Assessment
In this research, environmental impacts associated with the harvest and processing of pineapple (fresh-packed, in syrup, and dehydrated) were determined using the life cycle assessment (LCA) tool and the specialized software SimaPro® (version 8.4), according to the ISO14040:2006 and ISO14044:2006 standards. The information used to develop the inventory included field interviews and industrial visits within the study area. The functional unit was defined as one kilogram of fruit. The selected impact categories were carbon footprint, water footprint, and energy footprint; the results obtained for the agronomic stage were 0.47 kg CO2 eq (equivalent), 78 L of water, and 9.09 MJ, respectively. The growth stage of the pineapple plant was found to be the one generating the greatest environmental impacts in all three categories. For packaged fruit, 0.58 kg CO2 eq, 82 L of water, and 11.03 MJ were quantified; for pineapples in syrup, 1.12 kg CO2 eq, 103 L of water, and 19.28 MJ; and for dehydrated fruit, 5.12 kg CO2 eq, 782 L of water, and 97.04 MJ. It is concluded that in all cases the most significant environmental impacts occurred during the pineapple cultivation stage.
Introduction
The constant increase of populations with higher incomes leads to a demand for more food; in the last 30 years, food production has increased by more than 100% [1]. This has had repercussions on intensive production crops, increasing the consumption of fertilizers, machinery, and supplies, as well as the transportation needed for their distribution to large urban centers. All of these agricultural supply chains require considerable amounts of energy, mainly from fossil fuels. Both consumption indicators are direct vectors for the generation of environmental impacts. Nowadays, there is growing concern about climate change due to the increase in greenhouse gas (GHG) emissions; the agricultural sector contributes approximately 13% of worldwide GHG emissions [2]. In addition, intensive agriculture increases the consumption of irrigation water for continuous food production, which causes water stress, especially in areas with less rainfall [3,4]. The use of water grew at almost twice the rate of population increase in the last century, and 70% of the total water consumed worldwide is used by agriculture [5].
The parametric indicators of carbon, energy, and water footprint are important quantities for verifying the potential direct and indirect environmental effects of agricultural products [6][7][8][9][10][11].
Definition of Objectives and Scope
This phase includes the initial studies needed to understand the product and everything related to its processing, in order to develop the other three phases of the LCA accurately, as well as to define the intended use of the results. The geographic scope, delimitation of the study time, and the quality level required for the data were defined [23,24]. For this reason, the functional unit and the system boundary were described. The functional unit was taken as a way to quantify the flows of matter and energy. The main aim was to provide a reference for comparison between the flows involved in the entire process, so that the environmental impacts of the products were reported according to the functionality of the product, and so that products from the same category could be compared amongst themselves [25,26]. The system boundary was established to consider the components or processes that constitute the product system. Likewise, the inputs and outputs to be considered were identified, for example the manufacture of inputs, the transformation processes to make the final product, energy supply, and the identification of co-products, effluents, emissions, and generated waste, among others [27].
Life Cycle Inventory
Data collection and calculation procedures that identify the inputs and outputs of the system were performed during this phase, taking into account the flows released into the environment. It was essential to have detailed information on each stage of the production process, since both the material and energy flows were quantified and the outflows to the environment were associated with each of the processes involved. All of this was incorporated into the LCI database.
Life Cycle Impact Assessment
In the LCIA, the environmental impacts associated with the product system were quantified. Science-based methods, organised through impact categories, were used to assess the environmental component of a sustainable product. Table 1 describes the impact categories used in this study: carbon footprint, water footprint, and energy footprint. To determine the environmental impacts of greenhouse gas emissions, the ReCiPe methodology (v. 1.01) [28] was chosen, based on the ISO14067:2018 standard [29], while the water accounting and vulnerability evaluation (WAVE) method [30] was used to determine the water footprint throughout the production and supply chain, according to the provisions of the ISO14046:2014 standard [31].

Table 1. List of impact categories evaluated.

Category | Description | Unit
Carbon footprint (CF) | Accounts for the GHG emissions emitted throughout the supply chain of a product or service, taking into account the production of raw materials, transport, and transformation, up to the final disposal of the waste generated. | kg CO2 eq
Water footprint (WF) | Quantifies the volume of water consumed in all stages of production to manufacture a product, including the source and use of "freshwater", "green water", and "grey water", represented according to the water resource available to the human population. | L of water
Energy footprint (EF) | Quantifies the energy content of all the different (renewable and non-renewable) energy resources used in the product system. | MJ

The cumulative energy demand (CED) methodology was used to calculate the energy footprint, which allowed us to identify the different types of renewable and non-renewable energy sources used in the product system [38]. For the quantification of energy flows, energy conversion, efficiency, and plant factors were used to account for energy losses from production to final energy consumption, mainly concerning the generation of electrical power. The data were collected from local and international references [39][40][41][42], as explained and shown below in Table 2. The energy footprint is not strictly a category of environmental impact; however, it is a consistent indicator of the environmental performance of products and processes, which quantifies the energy content of all the different (renewable and non-renewable) energy resources of a product system, and it is an important driver of several environmental impacts such as global warming or fossil fuel depletion [8,9,43-45].
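As a minimal sketch of this conversion-factor bookkeeping, the function below turns a final electricity consumption into primary energy in MJ under an assumed overall generation-and-delivery efficiency; the 0.45 efficiency is an illustrative placeholder, not a value from the study.

```python
# Convert final electricity use (kWh) into primary energy demand (MJ),
# accounting for losses between generation and final consumption.
def primary_energy_mj(kwh_consumed: float, grid_efficiency: float = 0.45) -> float:
    MJ_PER_KWH = 3.6                 # exact unit conversion: 1 kWh = 3.6 MJ
    return kwh_consumed * MJ_PER_KWH / grid_efficiency

print(primary_energy_mj(10.0))       # 10 kWh at the plug -> 80 MJ of primary energy
```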
Interpretation of Life Cycle
The LCA interpretation phase was aimed at analysing the results and relating them to the goal and scope definition. In accordance with ISO14040:2006 and ISO14044:2006, conclusions were reached, the limitations of the results were presented, and recommendations were provided based on the findings of the preceding LCI and LCIA phases. Moreover, a sensitivity analysis was performed to complete the interpretation of the life cycle. Finally, the environmental impacts of the processes were compared, individually or globally, with other LCA studies.
Objectives and Scope
As mentioned above, the objective of the current LCA was to quantify the carbon, water, and energy footprints of the supply chain and the operations involved in the cultivation of pineapple and its industrialization as (i) packed fresh fruit, (ii) dehydrated pulp, and (iii) pineapples in syrup. The functional unit was defined as follows: (i) in the agricultural stage, 1 kg of harvested pineapple; (ii) in the three industrialization scenarios, 1 kg of packed pineapple, dehydrated pineapple, or pineapples in syrup. This study was carried out from "the cradle to the gate", since it considered the production (agricultural stage) and fruit processing (industrial stage), which included the extraction, transformation, use, and consumption of raw materials along the supply chain up to obtaining the final products.
For this LCA, the material and energy flows were tracked and analyzed, including intermediate flows; outflows as products, co-products, and residual flows to treatment; and flows released to the environment, such as gas emissions and liquid or solid effluents. Some capital goods were considered, such as the machinery used for land preparation and the transportation of supplies [49], while others, such as the construction of facilities, were excluded, since their contribution to the LCA result is not very significant [50,51]. The following procedure was used in the quantification of the inventory of capital goods: (i) the main materials that make up the capital good were identified, as well as their quantities (for example steel, plastic, and non-ferrous metals); (ii) the useful life of the product was established, generally 20 years; (iii) the functional unit was quantified from the amount of product produced (for example, packaged pineapple) during the product's useful life; and (iv) the amount of material calculated in (i) was divided by the functional unit quantified in (iii). In this way, each material amount from the capital good was quantified per kg of packed fruit. These data were compiled and included in the LCI using this procedure.
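The four-step capital-goods allocation can be written as a small function; the machine mass, lifetime, and throughput below are illustrative placeholders, not inventory data from this study.

```python
def material_per_functional_unit(material_kg: float,
                                 annual_output_kg: float,
                                 lifetime_years: float = 20.0) -> float:
    """Steps (i)-(iv): spread a capital good's material over its lifetime output."""
    lifetime_output_kg = annual_output_kg * lifetime_years   # step (iii)
    return material_kg / lifetime_output_kg                  # step (iv)

# e.g. 5000 kg of steel in a packing line producing 2,000,000 kg of fruit per year:
steel = material_per_functional_unit(5000.0, 2.0e6)
print(f"{steel:.2e} kg steel per kg packed fruit")           # -> 1.25e-04
```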
System Boundary Description
The study area was located in southeastern Mexico, at 18°00′ north latitude and 95°24′ west longitude, with an average altitude of 95 m above sea level. The climate is warm-humid with rains in summer and a mean temperature of 25 °C; annual rainfall is in the range of 1100 to 1600 mm [52]. This information was relevant for the water balances. Figure 2 shows the diagram of the agricultural and industrial production processes considered for this study. The agricultural stage was divided into four sub-stages: (i) land preparation, (ii) planting, (iii) development of the pineapple plant, and (iv) harvest. The industrial stage was made up of three different processing routes: (v) packing, (vi) fruit in syrup, and (vii) dehydrator. General diagram of the agricultural and industrial production process. Source: own elaboration.
Agricultural Stage
For land preparation, the use of two tractors (80 and 120 horsepower, HP) and the implements necessary for each specific task were considered, including rotary mowers, harrows, rounders, plows, and a bed ridger. Diesel consumption was accounted for during the eight tasks of site preparation, each performed once per harvest cycle. For irrigation, the groundwater extracted from 100 m below the surface was quantified, accounting for the use of a 150 HP pump supplying water to 166 ha over 10 months of scheduled irrigation, with four days of irrigation per week. The use of various agrochemicals was considered for plant growth: formulas of nitrogen, phosphorus, and potassium (NPK), Bromacil, Ametrex, Fosetil-Al, and glyphosate. The quantities of packaging bags and plastic bottles used in each activity were considered according to the supplies consumed. At harvest, pineapple was the main product, but seed was also obtained for the next growing cycle. Figure 3 shows the processes of the agricultural stage that were quantified for the current LCA.
Industrial Stage
After the fruit was harvested, it was sent for processing, according to the three pineapple industrialization processes considered for this LCA, which were commercially located in the study area. Fruit that did not meet quality standards was marketed locally without receiving any processing. Figure 4 shows all sub-stages of fresh packing, fruit in syrup, and dehydrated fruit. Each of them is described below.
Packed fruit. The distance considered from the cultivation area to the packing plant was 2 km; transportation took place in trucks with a 15 t capacity. The fruit was received and its quality was verified for export; pineapples that did not meet the standard were returned to the producer. Next, the fruit was immediately washed in tanks with chlorinated water; the pineapples were then lifted by a metallic conveyor belt, where they were arranged by operators while the washing water drained. Each pineapple received a wax bath and was air-dried at ambient temperature. Meanwhile, cardboard boxes were assembled in which the fruits were packed. The individual boxes with fruit were arranged on pallets (portable platforms on which pineapples can be moved, stacked, and stored) that were sent to refrigeration for later transportation and delivery to the customer-distributor. Throughout the process, the electrical energy consumed by the different electrical devices, the liters of wax consumed, the kilograms of cardboard boxes, and the supplies needed to assemble the pallets were accounted for.
Fruit in syrup. For this process, the fruit was transported in trucks with a 15 t capacity over the same 2 km distance. On reception, the crowns were removed while selecting the fruit that met quality specifications. Subsequently, the fruit was washed, peeled, cored, and cut; tin cans were then filled, and the sterilization and labeling processes followed. The energy supplied through fuel oil for the pre-sterilization, sterilization, and syrup production processes was quantified, and energy losses were also considered. The manufacture and transportation of the tin cans were also included.
Dehydrated fruit. In this stage, the fruit was transported in trucks with a 3 t capacity from the production area. Once the pineapple was unloaded at the processing site, the crown was removed and the fruit was washed, peeled, cored, and cut into circular slices. Afterward, it was transferred to dehydration chambers for a 14 h process, immediately followed by selection and packaging. The electrical power consumed, mainly by the conveyor belt motors and by the turbine used to drive hot air into the dehydration chamber to reach the final moisture content of the dried fruit, was counted together with other consumption from the factory facilities. Likewise, the fuel used for the transportation of supplies was considered.
Life Cycle Inventory
The information was collected through field interviews with farmers, merchants, entrepreneurs, and specialists in the cultivation and processing of pineapple [46,47]. This field work was carried out in two stages: (i) a reconnaissance of the pineapple production area, the geographical distribution of the agricultural plots, and the processing sites; and (ii) questionnaires applied to farmers, operational managers of the industrial processes, and the main actors in the supply chain. The data collected covered machinery operating times, diesel consumed, type and quantity of agrochemicals, power drawn by electrical equipment, and irrigation water flows. Subsequently, mass and energy balances were carried out, in which inputs, intermediate process flows, co-products, and outputs of the system were quantified, such as air emissions, wastewater effluents, and solid waste. The secondary process data were complemented with information from the literature [39][40][41][42][48]. Secondary data mainly covered the transport of supplies, some inputs such as plastics or waxes, and electrical power from a specific assembly for national conditions; all of them were included in the Ecoinvent 3.5 database [53]. Some of these assemblies were adapted and updated according to the local information collected. For example, for the secondary data of the main inputs consumed in the product system, local information was used, mainly in relation to the energy consumed in their manufacture, as shown in Table 2.
In the case of the electrical grid supply, Ecoinvent 3.5 provided an assembly for local generation of electrical power. The foreground data on primary energy were updated, and the energy efficiencies and distribution-network losses compiled from national technical reports are shown in Table 3. For non-renewable fuels, Equation (1) was used, taking the primary fuel supplied as input. For renewable energy, Equation (2) was applied, taking as input the primary energy reported in the balance sheets of the national energy sector:

EPG = PFS × NCV × η × FC (1)

EPG = TPE × PES × FC (2)

where EPG is the electric power generation (GWh); PFS is the primary fuel supplied (kg, L, or m³); NCV is the net calorific value conversion factor (units according to the fuel supplied); η is the net thermal efficiency of the overall electric power generation cycle (fraction); PES is the primary energy share (fraction); TPE is the total primary energy for electric power generation in the national grid (MJ); and FC is the conversion factor, 1 GWh/3.6 × 10⁶ MJ. On the other hand, the residual energy flows from equipment such as the boiler, the drying chamber, and the syrup preparation process were quantified, as shown in Figure 4. These flows were considered energy lost from heat-transferring devices to the surroundings of the system and were treated as waste energy emitted to the environment.
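The following minimal sketch implements Equations (1) and (2) as reconstructed above; the fuel quantity, calorific value, efficiency, and primary-energy figures are illustrative assumptions, not the national data used in the study.

```python
# Minimal sketch of Equations (1) and (2) for the grid-mix update. The
# numeric inputs below are hypothetical placeholders.

MJ_PER_GWH = 3.6e6  # FC = 1 GWh / 3.6e6 MJ

def epg_nonrenewable(pfs, ncv_mj_per_unit, eta):
    """Eq. (1): electricity (GWh) from a primary fossil fuel input."""
    return pfs * ncv_mj_per_unit * eta / MJ_PER_GWH

def epg_renewable(tpe_mj, pes):
    """Eq. (2): electricity (GWh) from a share of total primary energy."""
    return tpe_mj * pes / MJ_PER_GWH

# Example: natural gas burning 1.0e7 m^3 (NCV ~38 MJ/m^3, eta ~0.5),
# plus hydro taking a 1.5% share of 2.0e9 MJ of grid primary energy.
gas_gwh = epg_nonrenewable(pfs=1.0e7, ncv_mj_per_unit=38.0, eta=0.50)
hydro_gwh = epg_renewable(tpe_mj=2.0e9, pes=0.015)
print(f"gas: {gas_gwh:.1f} GWh, hydro: {hydro_gwh:.2f} GWh")
```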
Economic allocation was applied according to the products and co-products generated by the product system. In the pineapple production system, outlets of co-products were identified; for example, fruit rejected by the quality controls during industrialization. These fruits, although of lower quality, were marketed at a lower price, and economic allocation was applied. To this end, data on the prices and quantities of rejected fruit were collected from the producers to quantify the economic allocation of the co-product outflow. The allocation procedure and its result are shown in Table 4; the quantity column shows the amount of fresh fruit required to produce 1 kg of processed fruit, while the prices column shows the local sales prices in 2019. Table 5 shows the foreground data collected from the product system. All quantities are referenced to the functional unit of 1 kg of processed fruit and classified by sub-stage of the product system. Nursery maintenance is shown first, with water, diesel, transportation, and agrochemicals as the most important supplied resources. During land preparation, the main inputs were the use of heavy machinery, with the resulting fuel consumption, and the plastic used for mulching. Water, pesticides, and transportation were the basic inputs for planting, while for the development and growth of the pineapple plant, the use of specific machinery, water, agrochemicals, and plastic, as well as their transportation from the production sites, were quantified. For the packing of fresh fruit, the number of cardboard boxes required and the water and electrical energy consumed were considered important. For the fruit in syrup, can containers, fuel oil, electricity, and water were quantified. For the dehydration process, the liquefied petroleum gas (LPG) required and the packaging material for the final product were considered.
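A short sketch of the economic allocation step may clarify the procedure behind Table 4; the quantities and prices below are invented for illustration.

```python
# Sketch of the economic allocation between processed fruit and rejected
# fruit sold locally. The masses and prices are illustrative assumptions;
# the study's actual figures are in Table 4.

def allocation_factors(flows):
    """flows: list of (name, quantity_kg, price_per_kg). Returns the share
    of the upstream burden assigned to each outflow by economic value."""
    values = {name: qty * price for name, qty, price in flows}
    total = sum(values.values())
    return {name: v / total for name, v in values.items()}

flows = [
    ("packed fruit", 1.05, 12.0),   # kg fresh fruit per kg product, $/kg
    ("rejected fruit", 0.10, 3.5),  # lower-quality fruit sold locally
]
print(allocation_factors(flows))
# e.g. {'packed fruit': 0.973, 'rejected fruit': 0.027}
```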
Results and Discussion

Table 6 shows the environmental impacts associated with the agricultural production of the pineapple fruit, as well as with the three different processing routes of the harvested pineapple, per impact category.
As shown in Table 6, additional processing of the harvested pineapple increases the magnitude of the associated environmental impacts in the three categories analyzed. The smallest rise occurred in the packing process, where fruit handling did not demand extensive processing. For pineapples in syrup, the carbon and energy footprints grew as a consequence of the energy supply needed for the additional processing operations. The greatest impacts, however, were generated in the industrial dehydration stage. The most relevant reasons were, first, the energy supplied for drying the fruit and, second, the quantity of fresh fruit consumed: producing 1 kg of dehydrated pineapple required 9.94 kg of fresh pineapple, whereas the packing and fruit-in-syrup stages required 1.05 kg and 1.29 kg of fresh pineapple, respectively. The specific results for the different processing stages are discussed below.
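The following sketch illustrates how the fresh-fruit requirement propagates the agricultural burden into each product, before industrial-stage impacts are added; the agricultural carbon footprint per kilogram used here is an assumed placeholder.

```python
# How the fresh-fruit requirement scales the agricultural burden embedded
# in each product. The agricultural carbon footprint value is assumed.

AG_CARBON_PER_KG_FRESH = 0.45  # kg CO2 eq. per kg fresh pineapple (assumed)

fresh_fruit_per_kg_product = {
    "packed fruit": 1.05,       # kg fresh / kg product (from the text)
    "fruit in syrup": 1.29,
    "dehydrated fruit": 9.94,
}

for product, multiplier in fresh_fruit_per_kg_product.items():
    embedded = multiplier * AG_CARBON_PER_KG_FRESH
    print(f"{product}: {embedded:.2f} kg CO2 eq. embedded from agriculture")
```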
Agricultural Stage

Figure 5 shows the results of the carbon, water, and energy footprints of the agricultural stage. For the carbon footprint, fertilization contributed 53.5% of the environmental impacts, which were related to the manufacture, transportation, and supply of these products. Irrigation contributed 22.1%, due to the use of the motor pump for water extraction; the manufacture of the plastic and steel pipes of the distribution network was also included. Land preparation contributed 12.9% of the impacts, associated with the manufacture and use of plastic for covering the soil. Pest control, the seedbed site, and the growth and maturation of the fruit each contributed a share of around 4% of the total environmental impact.

For the water footprint, the water supply from irrigation and rainwater (blue and green water) contributed 96.7%. Irrigation was related to the extraction and use of water during the dry months, while only the proportion of rainwater that was useful for the plant was quantified [54]. A much smaller contribution came from the use of agrochemicals, which together contributed just over 3.3%. The energy footprint showed a distribution very similar to the carbon footprint, due to its close relationship with the supply and combustion of fossil fuels. The application of fertilizers made the greatest contribution, with an impact of 50%, mainly due to the manufacture of the agrochemicals, while electricity consumption for irrigation and land preparation each contributed about 18%. Nazri [55] reported a share of 45.7% for fertilizer consumption in the fruit cultivation stage, a value close to that found in the current LCA.

Figure 6 compares the carbon and water footprints for the production of 1 kg of fresh pineapple reported in the literature with those of the current LCA. Usubharatana and Phungrassami [18] quantified a low carbon footprint (0.172 kg CO2 eq.), possibly because bioorganic fertilizers were used in their process, while Ingwersen [21] and Moss [56] reported values of 0.364 and 0.520 kg CO2 eq., respectively; both studies analyzed particular conditions of cultivated land scale, transportation, and pineapple varieties different from those of the current study. In the case of the water footprint, the current LCA quantified 78.6 L, similar to the 69.8 L reported by Nataša [57], where local conditions meant no additional irrigation was required. Ingwersen [21] reported a higher value, 174.5 L, with intensive irrigation water estimated through farmer surveys. Gheewala et al. [58] reported a very high water footprint, greater than 600 L, since the total rainwater falling on the study area was counted as water supply for the crop; in addition, that study was performed in a tropical zone with abundant rainfall throughout the year.
Pineapple Fruit Processing
For the three pineapple products analyzed, the contributions per impact category and stage are shown in Figure 7. The greatest environmental impact in all three categories was attributed to the agricultural stage; this was most pronounced for fresh-packed pineapple and dehydrated fruit. The industrialization of fruit in syrup included more processing stages (as shown in Figure 4), so the relative contribution of pineapple cultivation to the environmental impact decreased slightly. Figure 7 details the impacts associated with each processing scenario proposed in the current LCA.
Pineapple Packing Process
Figure 8 shows the shared values of the carbon, water, and energy footprints per process, grouped by the agricultural and industrial stages, for the packed pineapple scenario. Regarding the carbon footprint, a share of 82.9% corresponded to fruit cultivation. Within this share, the application of fertilizers had the highest participation, at 43.1%, since its manufacture was included; irrigation and land preparation followed at 17.8% and 10.4%, respectively. The industrial stage had a participation of 17.1%, in which box assembly and fruit packing were the processes with the greatest impact, at 13.2% of the total carbon footprint. For the water footprint, 95.9% of the impacts came from the irrigation process, mainly during the growth and maturation of the plant. A share of 3.2% corresponded to the water supplied for the application of agrochemicals throughout cultivation. Less than 1% of the remaining impact was generated during the packing process, from cleaning the fruit.

Figure 8. Carbon, water, and energy footprint of pineapples packed by main unit processes.

Regarding the energy footprint, the distribution of impacts was similar to that of the carbon footprint, due to its relationship with the consumption of fossil fuels. The greatest impact came from the agricultural stage, with a share of 84.2%, while the industrial stage contributed a share of 15.9%, mainly related to the manufacture and transportation of cardboard boxes. Figure 9 shows the energy flows that made up the energy footprint of pineapple cultivation and packing, by type of primary energy supplied. The largest energy flows were consumed by fertilizer manufacturing, irrigation, and soil preparation, which together accounted for more than 70% of total consumption. In the industrial stage, the highest consumption corresponded to the box assembly and packing operations, with a share of 11.3%. The same was observed for the other two pineapple processes, as shown below. The main source of energy supply was fossil primary energy, with a share greater than 90%.
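The kind of source-wise accounting behind Figures 9, 11, and 13 can be sketched as a simple aggregation of per-process energy flows; the flow values below are invented, and only the bookkeeping structure reflects the study.

```python
# Aggregating per-process energy flows by primary source, as in the
# flow-diagram accounting behind Figure 9. The MJ values are assumed.

from collections import defaultdict

# (process, primary_source) -> energy supplied, MJ per kg packed fruit
flows = {
    ("fertilizer manufacture", "fossil"): 2.10,
    ("irrigation", "fossil"): 0.85,
    ("soil preparation", "fossil"): 0.60,
    ("box assembly and packing", "fossil"): 0.48,
    ("irrigation", "hydro"): 0.05,
    ("packing plant electricity", "nuclear"): 0.04,
}

by_source = defaultdict(float)
for (_, source), mj in flows.items():
    by_source[source] += mj

total = sum(by_source.values())
for source, mj in sorted(by_source.items(), key=lambda kv: -kv[1]):
    print(f"{source}: {mj:.2f} MJ ({100 * mj / total:.1f}%)")
```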
Pineapple Process in Syrup
Figure 10 shows the results of the carbon, water, and energy footprints by grouped processes of the agricultural and industrial stages for the pineapples in syrup scenario. In the agricultural stage, the application of fertilizers made the greatest contribution to the carbon footprint, with a share of 22%. In the industrial stage, the set of processes that began with reception and ended with the fruit inside filled cans, including washing, peeling, coring, and chopping, generated the greatest impact in this category, with a share of 43%. This value coincides with that reported by Usubharatana and Phungrassami [18], with a share of 41.6%; they likewise found that the greatest impacts were caused by the manufacture of the metal containers, as in the present study.

Figure 10. Carbon, water, and energy footprint of pineapples in syrup by main unit processes.
Regarding the water footprint, as in the previous packing process, the largest share corresponded to irrigation, with 91.5%, while the industrial stage contributed much less, only 5.8% as a whole. The distribution of the energy footprint was similar to the carbon footprint, due to the strong local dependence on energy supply from fossil fuels; the energy flows are shown in detail below. Figure 11 shows the energy flows that contributed to the energy footprint for the production of pineapples in syrup. In this case, each type of energy generation source is connected by a flow line to the processes of the product system that it supplied. The largest flows of energy came from non-renewable sources, represented by fossil fuels; these corresponded to more than four-fifths of all the energy consumed by the product system, with a share of 81%. The two processes with the greatest demand for fossil energy (>50%) were the application of fertilizers, which included their manufacture, in the agricultural stage, and the industrial processes of preparing the fruit up to filling the cans. Another important source of energy was biomass, with a share of 8.6%, which came from the sugar added for syrup production. This was because, locally, when manufacturing sugar from sugarcane, the residual bagasse generated in milling is used in cogeneration processes to supply energy to the production equipment [59][60][61].

Figure 10 also shows the contribution of nuclear (non-renewable) energy, with a share of 1.7%, which corresponded to the primary energy used for its transformation into secondary energy; this energy therefore corresponded to electricity consumption by the stages of the product system. The flow of hydro energy provided a share of 1.5%, which also corresponded to the consumption of electric power. The supply of electrical power went largely, among other processes, into the manufacture of the metal containers filled with pineapples in syrup. It is important to highlight that local electricity generation is highly dependent on processes that use fossil fuels, such as natural gas and fuel oil, which impacts both the energy and carbon footprints. When processing pineapples in syrup, the energy losses from the syrup preparation stages and the boiler were quantified; these had a share of 15% with respect to the total energy supplied.

Dehydrated Pineapple Process

Figure 12 shows the results of the carbon, water, and energy footprints by grouped processes of the agricultural and industrial stages for the dehydrated pineapple scenario. As in the two previous industrialization processes, the carbon footprint showed that the greatest share was generated by the manufacture and application of fertilizers (43%), followed by irrigation and land preparation, with shares of 18% and 10%, respectively; these three processes belong to the agricultural stage. In the industrial stage, the largest share was produced in the initial stages, which included the reception and preparation of the fruit up to slicing, with a share of 9%, while the dehydration process contributed 7%. In the first case, the impact was caused by the electrical consumption of the equipment; in the second, by the use of energy from fossil fuels for the drying chamber. For the water footprint category, the impact was again directly related to the supply of water for irrigation during fruit cultivation, with a share of 95%.
For the energy footprint, as expected, most came from the agricultural stage, dominated by the manufacture and application of fertilizers, with a share of 85%. The remaining proportion corresponded to the industrial stage, as demonstrated by the tracking of energy flows in Figure 13, which follows the energy from its primary source of supply up to the processes where it was consumed. Non-renewable energy from fossil fuels supplied a large proportion of the energy required, with a global share of about 92%. As explained above, local energy supply for most productive sectors derives from the oil and natural gas industries, including electricity generation, which has repercussions on the impacts quantified by the energy and carbon footprints. Renewable energies had little relevance, with a share of only 4%; there are few incentives for the use of biomass, with some exceptions, such as the bioenergetic use of bagasse in the production of cane sugar mentioned previously [62][63][64]. In the stage that includes the dehydration oven chamber, energy losses of 24% were quantified with respect to the total energy supplied; this should be taken into consideration to improve the efficiency of the dryer design or to implement an energy recovery system.

Sensitivity Analysis

Figure 14 shows the sensitivity of the carbon footprint to the transportation of inputs in the three evaluated industrialization scenarios. Transport was selected because it has a considerable environmental impact, mainly from the cultivation stage in agro-industrial processes [65,66]. The analysis was carried out by varying the distribution centers of the inputs considered in the base case, which were those reported by local producers during the field interviews. Sales centers for the main inputs were located both closer to and further away from the current ones, and the environmental impact of the product system was quantified for each case. Sales centers closest to the study area decreased the carbon footprint by 15%, while the most remote sites increased it by around 33%. Despite having distribution centers close to the production area, the economic factor may be decisive in the producers' choice of supplier, at the expense of other factors such as product quality or environmental aspects. This coincides with Goordazi, Fahimnia, and Sarkis [67], who reported that inefficiencies in the management of suppliers and shipping routes generate higher CO2 emissions. The change in GHG emissions was larger for the pineapple packing and dehydration processes, which depend more heavily on the quantity of pineapples transported from the cultivation areas; in addition, some packaging materials were purchased from places far from the study site. For the pineapples in syrup process, some of the required inputs were acquired close to the place of processing, e.g., the sugar for the syrup was transported from a nearby sugar cane plant.

Figure 14. Transport sensitivity analysis for the carbon footprint category.
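A minimal sketch of the sensitivity test follows; the base footprints and transport shares are assumed placeholders, so the resulting percentages are only illustrative of the -15%/+33% style of result reported above.

```python
# Sketch of the transport sensitivity test: rescale the transport distances
# of the base case and recompute the carbon footprint. All inputs assumed.

base_footprint = {            # kg CO2 eq. per kg product (assumed)
    "packed fruit": 0.55,
    "fruit in syrup": 1.10,
    "dehydrated fruit": 4.60,
}
transport_share = {           # fraction of footprint due to input transport
    "packed fruit": 0.20,
    "fruit in syrup": 0.12,
    "dehydrated fruit": 0.22,
}

def carbon_footprint(product, distance_factor):
    """distance_factor: 1.0 = base case, 0.5 = suppliers twice as close."""
    base = base_footprint[product]
    share = transport_share[product]
    return base * (1 - share) + base * share * distance_factor

for product in base_footprint:
    near = carbon_footprint(product, 0.5)
    far = carbon_footprint(product, 2.5)
    base = base_footprint[product]
    print(f"{product}: {100*(near/base - 1):+.1f}% / {100*(far/base - 1):+.1f}%")
```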
Conclusions
This research quantified the environmental impacts associated with the cultivation and industrial processing of pineapple using the carbon, water, and energy footprints. The LCI database was built from information provided by local producers and data from suppliers of raw materials and inputs in the region, which made this evaluation of the pineapple industry possible. The overall results showed that the fruit cultivation stage generated the greatest environmental impact in all three quantified footprints, consistent with the literature consulted. The manufacture and application of the different agrochemical products supplied during the agricultural stage contributed the greatest share of the carbon and energy footprints, while irrigation generated the largest proportion of the water footprint, since intensive cultivation requires a year-round supply of water. Furthermore, drip irrigation, which would lead to more efficient water consumption, is not widely implemented among the agricultural producers.
The industrialization stages were shown not to contribute greatly to the evaluated environmental impacts, despite involving different equipment and operations. Within these stages, the greatest impacts originated from processes requiring considerable supplies of thermal and electrical energy. In the study area, a sample of what happens at the regional level, the whole supply chain depends heavily on fossil fuels; this includes the production of electrical power, which comes for the most part from non-renewable energy sources, as reported in Figures 10 and 12. There is still a need to diversify primary energy sources, which could reduce the total environmental impacts of the product system. On the other hand, proper management in the selection of suppliers along the input supply chain would reduce the environmental impacts generated by transport.
The verbal width of acylindrically hyperbolic groups is infinite
We show that the verbal width is infinite for acylindrically hyperbolic groups, which include hyperbolic groups, mapping class groups and Out(F_n).
Let e_i be the sum of the exponents of x_i in w. For example, if w = x_2 x_1 x_2^{-2}, then e_1 = 1, e_2 = 1 - 2 = -1, and e_3 = · · · = e_k = 0. Let d(w) ≥ 1 be the g.c.d. of the e_i; if they are all 0, define d(w) = 0. Note that d(w) = 0 implies w(G) ⊂ [G, G], for when d(w) = 0, w(G) lies in the kernel of any homomorphism from G to an abelian group.
The width of w(G) is the supremum of vl_w(g) over all g ∈ w(G). Note that if d = d(w) ≥ 1, then g^d ∈ w[G] for every g ∈ G: replace each x_i in w with g^{a_i}, where the integers a_i are chosen so that Σ_i a_i e_i = d. In particular, if d = 1 then w[G] = w(G) = G and the width is 1. The reader can consult the book [10] for more information on the subject. If w is the commutator [x_1, x_2], the verbal length is called the commutator length.
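As a concrete illustration, the exponent sums e_i and the invariant d(w) can be computed mechanically; the encoding of words as lists of (index, exponent) pairs below is an assumption made for this sketch.

```python
# A small sketch computing the exponent sums e_i and d(w) for a word w in
# x_1, ..., x_k. Words are encoded as lists of (variable index, exponent)
# pairs -- an encoding chosen here for illustration.

from math import gcd
from functools import reduce

def exponent_sums(word, k):
    e = [0] * (k + 1)  # e[i] = exponent sum of x_i; index 0 unused
    for i, exp in word:
        e[i] += exp
    return e[1:]

def d(word, k):
    e = exponent_sums(word, k)
    return reduce(gcd, (abs(ei) for ei in e), 0)  # gcd of all zeros is 0

# w = x_2 x_1 x_2^{-2}: e_1 = 1, e_2 = -1, so d(w) = 1.
w1 = [(2, 1), (1, 1), (2, -2)]
print(exponent_sums(w1, k=2), d(w1, k=2))
# The commutator [x_1, x_2] has all e_i = 0, so d = 0.
w2 = [(1, 1), (2, 1), (1, -1), (2, -1)]
print(d(w2, k=2))
```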
A group G is acylindrically hyperbolic if it has a non-elementary acylindrical action on a δ-hyperbolic space [8]. Recall that an isometric action of G on a metric space X is acylindrical if for all D > 0 there exist L, N > 0 such that if d(x, y) > L then the set {g ∈ G|d(x, gx) < D and d(y, gy) < D} has < N elements. The first nontrivial example is due to Bowditch [3] who showed that the action of the mapping class group on the curve complex is acylindrical. There are now many examples with the key point being that many seemingly weaker geometric criteria imply that the group is acylindrically hyperbolic. See [8] and [2].
Here is our main result.
Theorem 1.1. Suppose that G is acylindrically hyperbolic and that d(w) = 1. Then the width of w(G) is infinite.
This result generalizes the work of Rhemtulla [9] and Myasnikov-Nikolaev [7] who proved the theorem for free groups and hyperbolic groups, respectively.
Similarly to the stable commutator length, one can define the stable verbal length of g ∈ w(G), svl_w(g), as follows:

svl_w(g) = lim_{n→∞} vl_w(g^n) / n.
If svl_w(g) > 0 for some g, then w(G) has infinite verbal width. However, if d(w) ≥ 1 this method cannot be used, due to the following lemma. We may suppress w and write vl, svl instead of vl_w, svl_w.

Lemma 1.2. If d(w) ≥ 1, then svl_w(g) = 0 for every g ∈ w(G).
Proof. As observed above, g^d ∈ w[G] for all g ∈ G, so vl(g^{nd}) = 1. Since vl(gh) ≤ vl(g) + vl(h), this implies that vl(g^n) is bounded, and svl(g) = 0.

On the other hand, when d(w) = 0 we have the following, which implies Theorem 1.1 in this case.

Theorem 1.3. If G is acylindrically hyperbolic and d(w) = 0, then svl_w(g) > 0 for some element g ∈ w(G).
If G is a free group then this is Corollary 2.16 of [4].
An outline of the proof of Theorem 1.1 for free groups
To illustrate the main idea, we sketch the proof of Theorem 1.1 in the case that G = F is the free group with basis {a, b} and w = x_1 x_2 x_1 x_2^{-1}, so that d = d(w) = 2. Consider g_i = ab^{2i}ab^{2i+1} ∈ G for i = 1, 2, · · · . This sequence has the property that distinct occurrences of any g_i in any reduced word have trivial overlap. Denote by H_i : G → Z the Brooks counting quasi-morphism with respect to g_i. For any y, z ∈ G we have |H_i(yz) - H_i(y) - H_i(z)| ≤ 3 by the usual tripod argument, since at most 3 copies of g_i along a tripod can have the tripod point in their interior. The key observation now is that in fact H_i(yz) - H_i(y) - H_i(z) = 0 for all but at most 3 values of i, by the non-overlapping property of the g_i's (the exceptional values of i depend on y and z). Now suppose that g ∈ G has vl_w(g) = 1, so g = y_1 y_2 y_1 y_2^{-1} for some y_1, y_2 ∈ G. Applying the key observation three times and using the anti-symmetry of H_i, we see that H_i(g) = 2H_i(y_1) is even for all but 3 × 3 = 9 values of i. Thus to detect g with vl_w(g) > 1 it suffices to ensure that H_i(g) is odd for 10 values of i. Similarly, to detect that vl_w(g) is large it suffices to ensure that H_i(g) is odd for sufficiently many i. An element such as g = g_1 g_2 · · · g_K, which contains exactly one copy of each g_i, has H_i(g) = 1 for 1 ≤ i ≤ K. For a general acylindrically hyperbolic group G we perform the above construction on a suitable Schottky subgroup F ⊂ G and then use the method of Hull and Osin [6] to extend the quasi-morphisms from F to G. We will review their construction in Section 4 and show that the key observation above continues to hold for the extended quasi-morphisms.
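The counting argument can be checked on small examples; the following sketch uses a string encoding of reduced words (an assumption of this illustration) and plain substring counting, which suffices for the positive words involved here.

```python
# Toy check of the parity argument for F = <a, b>. Reduced words are strings
# over 'a', 'b' -- an encoding chosen for illustration. For the positive
# words below, counting substring occurrences suffices; a full Brooks
# quasi-morphism would also count copies of g_i^{-1} after cyclic reduction.

def g(i):
    return "a" + "b" * (2 * i) + "a" + "b" * (2 * i + 1)

def count_occurrences(pattern, word):
    count, start = 0, 0
    while True:
        pos = word.find(pattern, start)
        if pos == -1:
            return count
        count += 1
        start = pos + 1  # distinct occurrences never overlap anyway

K = 6
product = "".join(g(i) for i in range(1, K + 1))  # g = g_1 g_2 ... g_K
print([count_occurrences(g(i), product) for i in range(1, K + 2)])
# -> [1, 1, 1, 1, 1, 1, 0]: H_i(g) = 1 (odd) for i <= K, forcing vl_w(g) large.
```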
2 Extending Brooks quasi-morphisms to acylindrically hyperbolic groups

We first recall the definition of a quasi-morphism. Let G be a group. A function H : G → R is a quasi-morphism if there is a constant Δ ≥ 0, called the defect, such that |H(xy) - H(x) - H(y)| ≤ Δ for all x, y ∈ G; it is anti-symmetric if H(x^{-1}) = -H(x) for all x ∈ G. One way to construct a quasi-morphism that is not a homomorphism is to start with a homomorphism and then add a bounded function. Of course, this is not an interesting example. The Brooks construction is a way of building anti-symmetric quasi-morphisms that are not a bounded distance from a homomorphism.
Let F = ⟨a, b⟩ be the free group on two generators and let w be a reduced word in F. For x ∈ F, let N_w(x) be the number of copies of w in x when x is written as a cyclically reduced word, and let H_w(x) = N_w(x) - N_{w^{-1}}(x). Brooks proved the following:

Theorem 2.1. The function H_w is an anti-symmetric quasi-morphism, and if w is not a power of a or b then it is not a bounded distance from a homomorphism. If w is cyclically reduced, then H_w(w^n) ≥ n.
Note that if N is a finite group then the Brooks quasi-morphisms can be extended to F × N by choosing them to be constant on the second factor.
In [5], Dahmani-Guirardel-Osin show that an acylindrically hyperbolic group G contains a copy of a hyperbolically embedded F × N, where N is the maximal finite normal subgroup of G. In [6], Hull-Osin show that any anti-symmetric quasi-morphism on a hyperbolically embedded subgroup extends to a quasi-morphism of the entire group. Combining these two results we have the following theorem.

Theorem 2.2. Let G be an acylindrically hyperbolic group. Then G contains a hyperbolically embedded subgroup F × N, with F free of rank 2 and N finite, such that every anti-symmetric quasi-morphism on F × N extends to a quasi-morphism on G.
There is a weaker version of this theorem (that would be good enough for our applications here) that follows from [2]. The approach in [2] is more direct as it does not go through the theory of hyperbolically embedded subgroups. We also note that both approaches use the projection complex from [1] in an essential way.
As a demonstration of our methods we first give a proof of Theorem 1.3.
Proof. Given a quasi-morphism H with defect Δ and an element g = g_1 · · · g_n, repeatedly applying the quasi-morphism bound gives

|H(g) - H(g_1) - · · · - H(g_n)| ≤ (n - 1)Δ.

If g = w(g_1, . . . , g_k) and H is anti-symmetric, this becomes

|H(g) - Σ_{i=1}^k e_i H(g_i)| ≤ (|w| - 1)Δ,

and when d(w) = 0 (so all the e_i = 0) this becomes |H(g)| ≤ (|w| - 1)Δ for g ∈ w[G]. More generally, for g ∈ w(G) we have |H(g)| ≤ (vl(g)|w| - 1)Δ, and therefore if |H(g)| > 0 we have svl(g) > 0, since H(g^n) ≥ nH(g) for all n > 0.
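For reference, the chain of estimates just used can be displayed as follows (H a quasi-morphism of defect Δ, anti-symmetric, and d(w) = 0); this is only a rendering of the argument above.

```latex
\begin{align*}
  \bigl|H(g_1\cdots g_n) - \textstyle\sum_{j=1}^{n} H(g_j)\bigr| &\le (n-1)\Delta
     && \text{(iterate the defect bound)}\\
  |H(w(g_1,\dots,g_k))| &\le (|w|-1)\Delta
     && \text{(anti-symmetry; all } e_i = 0\text{)}\\
  |H(g)| &\le \bigl(\operatorname{vl}(g)\,|w|-1\bigr)\Delta
     && \text{(}g\text{ a product of } \operatorname{vl}(g)\text{ values of } w\text{)}
\end{align*}
% Hence |H(g^n)| <= (vl(g^n)|w| - 1)\Delta, so linear growth of H(g^n)
% forces vl(g^n) to grow linearly, i.e. svl(g) > 0.
```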
We will use the Brooks construction (and the Hull-Osin extension) to find a g ∈ w(G) with H_g(g^n) ≥ n. To do this we need to find a cyclically reduced word in w(F) ⊂ w(G) ∩ F. Pick a non-trivial element h ∈ w(F). If it is cyclically reduced, let g = h and we are done. If not, then h = a · · · a^{-1} for a basis element a (or its inverse). Let h′ be obtained from h by swapping a's and b's (with b another basis element). Then hh′ is cyclically reduced and still in w(F), so g = hh′ is the desired element.
If w lies in the commutator subgroup of the free group on x_1, . . . , x_k, then for any g ∈ w(G) we have cl(g) ≤ cl(w)vl(g), so scl(g) ≤ cl(w)svl(g). In particular, if scl(g) > 0 then svl(g) > 0, and Theorem 1.3 would follow if we knew that every verbal subgroup of an acylindrically hyperbolic group had an element g with scl(g) > 0. However, proving this does not seem any easier than the more general proof above.
One can also ask if scl(g) = 0 implies that svl_w(g) = 0 for all w. Here the answer is negative. For example, take w = [[x, y], [z, u]], let g = [[a, b], [c, d]] in the group G = ⟨a, b, c, d, t | tgt^{-1} = g^{-1}⟩, and let H = ⟨a, b, c, d⟩. Then g is conjugate to g^{-1}, which forces scl(g) = 0. On the other hand, we claim that svl_w(g) > 0. Indeed, if svl_w(g) = 0, then g^n can be written as a product of a sublinear number of double commutators, which would imply that scl_H(g) = 0. We now argue that scl_H(g) > 0. In fact we will show that there is an index 2 subgroup N < G with H < N and so that N surjects onto a free group with g mapping to a non-trivial element. Since non-trivial elements in free groups have positive scl, the claim follows; more precisely, it shows scl_N(g) > 0, which immediately implies scl_H(g) > 0.
We let N be the kernel of the map G → Z/2 that sends t to 1 and a, b, c, d to 0. The corresponding double cover Y of the presentation 2-complex X of G consists of the disjoint union of two roses R_i with petals labeled a_i, b_i, c_i, d_i, i = 1, 2, together with edges t_i connecting the vertex of R_i to the vertex of R_{3-i}. The map to X is the obvious one, sending a_i to a, etc. The relation 2-cell in X lifts to two 2-cells in Y, with attaching maps the two lifts of the relator.

3 Some facts about the Hull-Osin extension

Unfortunately, rather than just the statement of Theorem 2.2, we need some elements of the proof in [6]. We review them now. In this section F can be any hyperbolically embedded subgroup of G.
It is convenient to replace the quasi-morphism with a function on G × G, called a bicombing in [6]. If H is a quasi-morphism, we define r(x, y) = H(x^{-1}y). Note that r(zx, zy) = r(x, y) and that |r(x, y) + r(y, z) + r(z, x)| is bounded by the defect of H. On the other hand, if we are given a map r(x, y) satisfying the properties of the previous sentence, then the map x ↦ r(1, x) is a quasi-morphism, so r determines H just as H determines r. In particular, to construct H̃, in [6] they first construct r̃ : G × G → R. To construct r̃, to each x, y ∈ G and each coset aF is associated a finite collection of pairs E(x, y; aF) = {(x_i, y_i)} where x_i, y_i ∈ F. For the convenience of the reader we briefly review the construction of these sets and then state the key properties that we will need.
Let Γ be a Cayley graph of G formed from a generating set that contains every element of F. Given x, y ∈ G, let γ be a geodesic in Γ from x to y. Each F-coset has diameter one in Γ, so γ will intersect a given coset aF in at most two points x′ and y′. We say that γ essentially penetrates aF if any path in Γ from x′ to y′ that doesn't contain any F-edges has length ≥ C, where C is a constant that only depends on G and F. We let S(x, y) be the set of cosets aF such that some geodesic from x to y essentially penetrates aF. A central fact from [6] is that if one geodesic essentially penetrates aF, then every geodesic from x to y must intersect aF. For each coset aF ∈ S(x, y) we let E(x, y; aF) be the set of pairs (x′, y′) ∈ F × F such that ax′ and ay′ are the entry and exit points of a geodesic from x to y in Γ. For each coset the particular representative a is not important, except that the choice needs to be fixed once and for all. If aF ∉ S(x, y), then E(x, y; aF) is empty. We now define

r̃(x, y) = Σ_{aF ∈ S(x,y)} (1 / |E(x, y; aF)|) Σ_{(x′,y′) ∈ E(x,y;aF)} r(x′, y′).

For this to be well defined we need the sum to be finite. The inside sum is finite by Lemma 3.8 of [6], and the outside sum is finite since S(x, y) is finite by Corollary 3.4. Note that while in [6] it is only stated that the size of E(x, y; aF) is finite, it is in fact uniformly bounded, which will be important in the proof of Lemma 4.1 later.
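The averaging that defines r̃ can be illustrated numerically; in the following sketch the sets E(x, y; aF) are given as plain data and the base bicombing is a toy example, so only the averaging scheme reflects the construction of [6].

```python
# A toy rendering of the averaging that defines r~ from r. The E-sets and
# the base bicombing below are made-up stand-ins for illustration.

def r_tilde(r, E_sets):
    """E_sets: dict mapping a coset label to the list of pairs E(x, y; aF).
    Returns the sum over cosets of the average of r over the pairs."""
    total = 0.0
    for pairs in E_sets.values():
        if pairs:  # empty cosets (aF not in S(x, y)) contribute nothing
            total += sum(r(p, q) for p, q in pairs) / len(pairs)
    return total

# Toy base bicombing on Z (standing in for F): r(p, q) = q - p, which is
# left-invariant, anti-symmetric, and has defect 0.
r = lambda p, q: q - p
E = {"a1F": [(0, 3), (1, 3)], "a2F": [(2, 2)], "a3F": []}
print(r_tilde(r, E))  # (3 + 2)/2 + 0 + 0 = 2.5
```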
The following lemma is a combination of Lemma 3.9 and (the proof of) Lemma 4.7 in [6]. We fix the word metric with respect to a finite generating set on F and denote the distance between x, y by |x − y|.
Given a pair (x, y) ∈ G × G, let B(x, y) be the collection of cosets that don't satisfy (1), and for a triple set B(x, y, z) = B(x, y) ∪ B(y, z) ∪ B(z, x). We are interested in the sum r̃(x, y) + r̃(y, z) + r̃(z, x). It will be convenient to define new sets E(x, y, z; aF) as the product of the sets of pairs E(x, y; aF), E(y, z; aF) and E(z, x; aF). Note that one or more of the sets may be empty, in which case the product would be empty. (In fact, for at most one coset at least one of the sets will be empty.) To get around this, if E(u, v; aF) is empty we make it non-empty by adding the "empty pair" (∅, ∅), and we define r(∅, ∅) = 0. With this modification E(x, y, z; aF) is always a set of triples of pairs in F ∪ {∅}. Next we define

ρ(x, y, z; aF) = (1 / |E(x, y, z; aF)|) Σ_{E(x,y,z;aF)} [ r(x_-, y_+) + r(y_-, z_+) + r(z_-, x_+) ]

and observe that

r̃(x, y) + r̃(y, z) + r̃(z, x) = Σ_{aF} ρ(x, y, z; aF).
To show that r̃ determines a quasi-morphism, Hull-Osin show that for nearly all cosets the expression ρ(x, y, z; aF) is zero, and for the finitely many where it is not, it is uniformly bounded.

Corollary 3.2. If aF ∉ B(x, y, z), then ρ(x, y, z; aF) = 0.

Proof. If aF ∉ B(x, y, z), then either E(x, y, z; aF) is the set of triples of empty pairs and ρ(x, y, z; aF) = 0, or all the terms in the sum cancel and again ρ(x, y, z; aF) = 0.
4 Proof of Theorem 1.1

If H : G → Z is a homomorphism, then for any g ∈ w(G) we have that H(g) is divisible by d(w). We will construct a family of quasi-morphisms for which this is true for nearly all the quasi-morphisms in the family, where the number of exceptions is bounded in terms of vl(g).
Let F × N be hyperbolically embedded in G, where F is a free group of rank at least two and N is a finite group. For simplicity we suppose in what follows that the rank of F is two. We now fix a sequence of words that we will use to build Brooks quasi-morphisms on F, each then extended to F × N, trivially on N. Let g′_i = ab^{2i}, g″_i = ab^{2i+1} and g_i = g′_i g″_i, and let H_i = H_{g_i} be the Brooks quasi-morphism and r_i the corresponding bicombing.
We fix the word metric with respect to a finite generating set on F × N and denote the distance between x, y by |x − y|.
Lemma 4.1. Given a triple of pairs (x−, x+), (y−, y+), (z−, z+) in F ∪ {∅} with |x− − x+|, |y− − y+|, |z− − z+| ≤ L, the sum r_i(x−, y+) + r_i(y−, z+) + r_i(z−, x+) is zero for all but at most 3L + 3 of the i. Therefore there is a uniform bound on the number of ρ_i where ρ_i(x, y, z; aF) ≠ 0.
Proof. We only discuss the case that N is trivial. The general case is similar.
If r_i(x−, x+) ≠ 0 there is a translate of the word g_i in the segment between x− and x+ in the Cayley graph (with the standard generators). Since two g_i can't intersect in a segment, a (very bad) upper bound for the number of r_i with r_i(x−, x+) ≠ 0 is L. The triple x−, x+ and y+ form a tripod in the Cayley graph; let m be the central vertex. Then r_i(x−, y+) + r_i(y+, x+) = 0 unless there is a translate of g_i in the segment from y+ to x− that intersects the segment from m to x−, or a translate in the segment from y+ to x+ that intersects the segment from m to x+. Again using the fact that two g_i's can't overlap in a segment, an upper bound for the number of r_i's with r_i(x−, y+) + r_i(y+, x+) ≠ 0 is L. Similarly there are at most L of the r_i such that r_i(x+, y+) + r_i(y+, x+) ≠ 0; equivalently, r_i(y+, x+) = r_i(y−, x+) for all but L of the r_i, and therefore r_i(x−, y+) + r_i(y−, x+) = 0 for all but 2L of the r_i. Now we examine the tripod formed by x−, y− and z−. As with the original Brooks argument, the sum is zero unless a translate of the word g_i intersects the central vertex of the tripod. At most three such words can intersect the central vertex, so the sum is non-zero for at most 3 of the r_i. As above, for at most L of the r_i we have r_i(x−, y−) ≠ r_i(x−, y+), etc. Therefore r_i(x−, y+) + r_i(y−, z+) + r_i(z−, x+) = 0 for all but at most 3L + 3 of the r_i.
Since F is hyperbolically embedded in G, if aF ∉ B(x, y, z) then ρ_i(x, y, z; aF) = 0 for all i by Corollary 3.2. For cosets aF ∈ B(x, y, z) of type (A) there will be at most (3L + 3)|E(x, y, z; aF)| of the ρ_i with ρ_i(x, y, z; aF) ≠ 0, for cosets of type (B) at most 2L|E(x, y, z; aF)| and for cosets of type (C) at most L|E(x, y, z; aF)|. By Lemma 3.8 of [6], |E(u, v; aF)| is uniformly bounded and therefore so is |E(x, y, z; aF)|. It follows that there is a uniform bound on the number of ρ_i with ρ_i(x, y, z; aF) ≠ 0.
Since F × N is hyperbolically embedded in G, let H̃_i : G → ℝ be the Hull-Osin extensions of the H_i and r̃_i the corresponding bicombings.
Proposition 4.2. There is a constant M such that for any x, y ∈ G, H̃_i(xy) − H̃_i(x) − H̃_i(y) = 0 holds except for at most M of the H̃_i. It follows that for any a_1, ⋯, a_k ∈ G, H̃_i(a_1 ⋯ a_k) − H̃_i(a_1) − ⋯ − H̃_i(a_k) = 0 holds except for at most M(k − 1) of the H̃_i.
Proof. First observe that
$$\tilde H_i(xy) - \tilde H_i(x) - \tilde H_i(y) = -\bigl(\tilde r_i(1, x) + \tilde r_i(x, xy) + \tilde r_i(xy, 1)\bigr),$$
so we can instead show that r̃_i(x, y) + r̃_i(y, z) + r̃_i(z, x) = 0 for all but M of the r̃_i. But this follows from Lemma 4.1, as for all but at most 6 cosets ρ_i(x, y, z; aF) = 0 for all i, and for each of these bad cosets there is a uniform bound on the number of ρ_i with ρ_i(x, y, z; aF) ≠ 0.
Proof. We will find an element g ∈ w(F). Recall that we are assuming that d = d(w) > 1 and therefore for any f ∈ F we have f^d ∈ w[F]. Let h′_i = (g′_i)^d and h″_i = (g″_i)^d. Then the product h_i = h′_i h″_i contains a single copy of g_i. Let g = h_1 h_2 ⋯ h_K. Note that g is already reduced since there are only positive powers of a and b in the h′_i and h″_i. Furthermore, by our construction of the g_i there will be exactly one copy of g_i in g and no copies of g_i^{-1}. Therefore H_i(g) = 1 for 1 ≤ i ≤ K.
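The combinatorial claim that g contains exactly one copy of g_i and no copy of g_i^{-1} is easy to machine-check for small parameters, reusing the conventions of the earlier sketch (again our own illustration; the values of d and K below are arbitrary):

```python
def inverse(w): return w[::-1].swapcase()          # A = a^{-1}, B = b^{-1}
def count(g, w): return sum(g[i:i+len(w)] == w for i in range(len(g)-len(w)+1))

def gp(i):  return "a" + "b" * (2 * i)             # g'_i  = a b^{2i}
def gpp(i): return "a" + "b" * (2 * i + 1)         # g''_i = a b^{2i+1}

d, K = 3, 5                                        # any d > 1 and K >= 1 work
g = "".join(gp(i) * d + gpp(i) * d for i in range(1, K + 1))   # g = h_1 ... h_K
for i in range(1, K + 1):
    w = gp(i) + gpp(i)                             # the word g_i
    assert count(g, w) == 1 and count(g, inverse(w)) == 0      # so H_i(g) = 1
```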
We now give a proof of Theorem 1.1 when d(w) > 1.
Proof. We first show that for any g ∈ w(G), H̃_i(g) is divisible by d(w) > 1 for all but a bounded number of the H̃_i, where the bound only depends on vl(g). To see this we first observe that if g = w(g_1, . . . , g_k) ∈ w[G] then by Proposition 4.2
$$\tilde H_i(w(g_1,\dots,g_k)) = \sum_{j=1}^{k} e_j \tilde H_i(g_j)$$
for all but M(|w| − 1) of the H̃_i. In particular, for any g ∈ w[G] there are at most M(|w| − 1) of the i such that H̃_i(g) isn't divisible by d(w). Similarly, if g ∈ w(G) is a product of vl(g) elements g_j ∈ w[G] then
$$\tilde H_i(g) = \sum_{j=1}^{vl(g)} \tilde H_i(g_j)$$
for all but M(vl(g) − 1) of the H̃_i. If all of the H̃_i(g_j) are divisible by d(w) then so is H̃_i(g), so we have that H̃_i(g) is divisible by d(w) for all but at most M(vl(g) − 1) + vl(g)·M(|w| − 1) of the H̃_i. In particular, a bound on vl(g) gives a bound on the number of H̃_i where H̃_i(g) is not divisible by d(w) > 1.
On the other hand, by Lemma 4.3, for any K > 0 we can find a word h_K such that H̃_i(h_K) = H_i(h_K) = 1 for 1 ≤ i ≤ K. Therefore vl(h_K) → ∞ as K → ∞. From the above proof we see that vl(h_K) ≥ K/(M|w|). We know of no examples where this bound is sharp.
Diffuse hepatic epithelioid hemangioendothelioma with multiple splenic metastasis and delayed multifocal bone metastasis after liver transplantation on FDG PET/CT images
Abstract Rationale: Malignant hepatic epithelioid hemangioendothelioma (HEH) is a rare vascular tumor of endothelial origin that can present with multiple metastases to the spleen. This report describes a diffuse HEH with splenic metastasis on 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) images and delayed multifocal bone metastasis after liver transplantation (LTx). Patient concerns: A 30-year-old male was admitted to our hospital with a complaint of abdominal distension, fatigue, and anorexia for 2 months. Diagnoses: Mild to moderate FDG uptake in the whole liver and multifocal FDG uptake in the spleen were observed on the 18F-FDG PET/CT scan. Ultrasound-guided liver biopsy was performed, and a diagnosis of HEH was confirmed. Interventions: The patient underwent LTx and splenectomy. Outcomes: The patient developed low back pain of unknown etiology 3 months after surgery. A follow-up 18F-FDG PET/CT scan demonstrated multifocal bone destruction. Unfortunately, the patient died 12 months after surgery. Lessons: It is noteworthy that despite liver transplantation for the treatment of HEH, there may be a risk of recurrence. For patients with extrahepatic lesions, adjuvant chemotherapy may be a useful alternative treatment method for the prevention of recurrence.
Introduction
Malignant hepatic epithelioid hemangioendothelioma (HEH) is a rare vascular tumor of endothelial origin. [1] It can present in a diffuse, multifocal, or solitary nodular form [1][2][3][4]; the diffuse form is considered to be the advanced stage due to coalescence of the lesions; this is associated with vascular infiltration of the hepatic and/or portal veins, [4] and may be prone to metastasis. To the best of our knowledge, there are only a few reports on the diagnosis and treatment evaluation of HEH using fluorodeoxyglucose positron emission tomography (FDG PET) or PET/computed tomography (CT) findings; moreover, such reports have mainly focused on the multifocal or solitary nodular form. [5][6][7][8][9][10] The degree of FDG uptake may be helpful in suggesting a diagnosis of HEH as opposed to other hepatic malignancies. Moreover, 18F-FDG PET/CT has a significant advantage in its ability to detect potential metastasis in HEH patients, especially in the diffuse form.
Case description
A 30-year-old male was admitted to our hospital with a complaint of abdominal distension, fatigue, and anorexia for 2 months. He was not an alcoholic. Physical examination revealed jaundice, hepatomegaly, and a hard liver with moderate ascites. Viral markers for hepatitis A, B, and C were negative. The levels of carbohydrate antigen 19-9 (CA 19-9), carbohydrate antigen 125 (CA 125), ferritin, and alpha-fetoprotein (AFP) were 50.33 (normal range < 35.00 KU/L), 362.53 (normal range < 35.00 KU/L), 423.21 (normal range, 0∼322.00 ng/mL), and 1.56 (normal range < 20 ng/mL), respectively. Liver function tests showed the total bilirubin (TB), direct bilirubin (DB), and total bile acid (TBA) levels to be 441.2 (normal range, 5.1∼17.1 mmol/L), 262.1 (normal range, 0∼6.0 mmol/L), and 150.3 (normal range, 0∼10.0 mmol/L), respectively. Coagulation function tests revealed a normal prothrombin time and partial thromboplastin time (PT/PTT). Contrast-enhanced CT scan of the abdomen showed multiple soft-tissue density nodules with irregular mild to moderate enhancement (Fig. 1). Moreover, peripheral multifocal calcifications were noted. Liver cirrhosis, esophageal and gastric varices, and moderate ascites were also seen, and multiple low-density nodules were observed in the spleen. 18F-FDG PET/CT was performed due to suspected cholangiocarcinoma or hepatocellular carcinoma with multiple metastases to the spleen (Fig. 2). Axial fusion images demonstrated pervasively mild to moderate FDG uptake in the whole liver, and multifocal FDG uptake in the spleen. Ultrasound-guided liver biopsy was performed, and a diagnosis of HEH was confirmed (Fig. 3). The patient then underwent liver transplantation and splenectomy. However, the patient developed low back pain of unknown etiology 3 months after the liver transplantation (LTx); bone metastasis was suspected as a result of multifocal lumbosacral destruction observed on lumbar magnetic resonance imaging. A follow-up whole-body 18F-FDG PET/CT scan was performed for further evaluation (Fig. 4). The transplanted liver showed normal morphology without abnormal FDG uptake. However, multifocal osteolytic lesions with moderate FDG uptake were observed in the lumbar and sacral vertebrae and the right iliac bone, with an SUVmax of 6.5. Unfortunately, the patient died 12 months after surgery.
Discussion
As HEH is a relatively rare malignancy, its imaging findings are not widely reported. There are a few reports describing the FDG PET or PET/CT scan findings for the diagnosis and treatment evaluation of HEH; the reports that exist mainly focus on the multifocal or solitary nodular form. [5][6][7][8][9][10] This report details the imaging findings of the diffuse form of HEH. Low-density pattern on CT images are the most common abnormal feature, with calcifications accounting for most of the additional CT scan findings. [2] The initial PET images showed diffuse lesions with mild to moderate FDG uptake in the entire liver accompanied by multiple low-density splenic nodules. These imaging findings may be confused with the findings of malignant carcinomas such as cholangiocarcinoma and hepatocellular carcinoma. However, most peripheral cholangiocarcinomas in the nodular form showed intense FDG uptake, [11] whereas most malignant HEH showed relatively mild to moderate FDG uptake. [10] Diffuse lesions with relatively low degree of FDG uptake and patchy calcifications on CT images should be indicative of a differential diagnosis of HEH. However, it should be noted that imaging studies cannot provide a definite diagnosis for the variable patterns of the tumor. The histology, in conjunction with the immunohistology, provides a valuable diagnostic tool.
There is no generally accepted strategy for the treatment of HEH because of its heterogeneous status and variable clinical outcome. Mehrabi et al [2] reviewed the published literature on HEH from 1984 to 2005 with a focus on clinical outcome after different therapeutic modalities, and found that LTx accounted for the largest proportion (44.8%) of the various therapeutic methods. The proportions of non-treatment, chemotherapy or radiotherapy, and partial liver resection were 24.8%, 21%, and 9.4%, respectively. However, some authors have advocated LTx even in the presence of extrahepatic involvement, because the 5-year survival rate in HEH patients with extrahepatic manifestations who undergo LTx ranges from 50% to 70%. [12][13][14][15][16] Liver resection can be performed in cases of the single nodular type, [17] but some patients with disease confined to the liver have developed rapid recurrence and metastasis after LTx. [18][19] The effects of various chemotherapeutic drugs on HEH have been reported, providing promising treatment options. Some studies reported that sorafenib has advantages over other anti-angiogenic agents because of its dual anti-tumor activity. [20] In a case report, Kobayashi et al [21] used sorafenib monotherapy in a patient with multiple unresectable lesions who declined surgical resection, including liver transplantation. About 6 months later, CT findings indicated that these tumors were shrinking slightly; even 60 months later, a partial response was still observed with sorafenib monotherapy. Metastatic HEH has also been successfully treated with thalidomide, which blocks the proliferation of malignant vascular endothelial cells. [22] Recently, some authors reported a malignant HEH successfully treated with pegylated liposomal doxorubicin. [23] Metronomic cyclophosphamide has also been proposed as a new palliative option for metastatic and inoperable patients. [24] However, there is no consensus on preferred or specific medications owing to the rare and variable clinical course of the disease. In the present case, the patient had multiple metastases to the spleen and underwent LTx and splenectomy; however, he developed multifocal bone metastases 5 months after surgery. The mechanism involved in the rapid progression of HEH after LTx is uncertain, and may be related to undefined subtypes with varying potentials for progression or recurrence. The reasons for the unpredictable prognosis of patients with HEH after LTx need to be further investigated. For these patients with extrahepatic lesions, adjuvant chemotherapy may be a useful alternative treatment method for the prevention of recurrence.
Conclusion
This case is a reminder that HEH should be considered among the differential diagnoses in cases of diffuse liver disease with multifocal calcifications. The relatively low degree of FDG uptake may be helpful in suggesting a diagnosis of HEH as opposed to other hepatic malignancies. Moreover, 18F-FDG PET/CT is advantageous in detecting potential metastasis in HEH patients, especially in the diffuse form of the disease.
Engineering the Outcoupling Pathways in Plasmonic Tunnel Junctions via Photonic Mode Dispersion for Low-Loss Waveguiding
Outcoupling of plasmonic modes excited by inelastic electron tunneling (IET) across plasmonic tunnel junctions (TJs) has attracted significant attention due to low operating voltages and fast excitation rates. Achieving selectivity among various outcoupling channels, however, remains a challenging task. Employing nanoscale antennas to enhance the local density of optical states (LDOS) associated with specific outcoupling channels partially addressed the problem, along with the integration of conducting 2D materials into TJs, improving the outcoupling to guided modes with particular momentum. The disadvantage of such methods is that they often involve complex fabrication steps and lack fine-tuning options. Here, we propose an alternative approach by modifying the dielectric medium surrounding TJs. By employing a simple multilayer substrate with a specific permittivity combination for the TJs under study, we show that it is possible to optimize mode selectivity in outcoupling to a plasmonic or a photonic-like mode characterized by distinct cutoff behaviors and propagation length. Theoretical and experimental results obtained with a SiO2–SiN–glass multilayer substrate demonstrate high relative coupling efficiencies of (62.77 ± 1.74)% and (29.07 ± 0.72)% for plasmonic and photonic-like modes, respectively. The figure-of-merit, which quantifies the tradeoff between mode outcoupling and propagation lengths (tens of μm) for both modes, can reach values as high as 180 and 140. The demonstrated approach allows LDOS engineering and customized TJ device performance, which are seamlessly integrated with standard thin film fabrication protocols. Our experimental device is well-suited for integration with silicon nitride photonics platforms.
INTRODUCTION
It is well known that inelastic electron tunneling (IET), 1 where a quantum of energy (ℏω) is lost during the transit, is associated with various radiative and nonradiative decay or outcoupling pathways. The quantum efficiency of IET energy transfer to radiation in the visible (Vis) or near-infrared (NIR) range is significant only if at least one of the electrodes of a tunnel junction (TJ) is plasmonically active. In principle, such plasmonic TJs allow for ultrafast excitation (on quantum mechanical tunneling time scales on the order of femtoseconds) and low operating voltages (< 3 V) and are, therefore, interesting to explore as an alternative for applications that require electrical sources of photons and plasmons. 2−10 However, TJs so far have a large number of outcoupling pathways leading to photon generation, 8,9 surface plasmon polariton (SPP) excitation, 10−12 and nonradiative modes such as phonons 13 and vibronic excitations. 14,15 For applications, however, it is crucial to control the outcoupling pathways, tailored to the respective target requirements.
When a bias voltage (V_b) is applied across a TJ, tunneling processes are spatially localized to a subatomic region. 16 Hence, the resulting IET can be seen as a dipole source 17 that decays its power into all available outcoupling channels (modes), 8−15 because of the large in-plane momentum (k∥) it can supply. 18−20 Hence, it is important to carefully design the electromagnetic environment around the dipole to attain precise control over the outcoupling pathways. Previous attempts to achieve this control involve plasmonic antennas, typically with dimensions smaller than 100 nm, that have been integrated as an active 6−9 or passive 21,22 TJ component. This approach allows for selective enhancement of the LDOS associated with specific energies through localized plasmon resonances, thereby improving light emission efficiencies. Nanoscale surface roughness 23 can also effectively generate outcoupling pathways but complicates such approaches. Furthermore, directional antennas 4,24,25 can enhance the emission directivity, transforming the omnidirectional dipole emission into a unidirectional one.
For large-area metal−insulator−metal (MIM) configurations, where only broadband junction modes (MIM-SPPs) are directly excited by the IET, controlling the outcoupling pathways can be readily achieved by simply changing the geometry of the TJ 26,27 or the thickness of the top or bottom electrode. 28 This approach exploits outcoupling via mode scattering at TJ edges to the integrated slab waveguides, 29 which support single-interface SPP modes where, for example, the coupling efficiency to the extended waveguide can be improved by reducing the thickness of the metal electrodes, 28 but at the cost of reduced LDOS and IET efficiencies. An intuitive way to improve the outcoupling from highly confined MIM-SPP modes is to replace one metal with graphene (G), which results in direct excitation of single-interface SPP modes, 22,30 whose momentum is 10 times lower than that of the MIM-SPP mode and which is therefore more suitable for mode dispersion engineering. In our prior report on metal−insulator−graphene (MIG) TJ devices, 30 we achieved a relative coupling efficiency (η) of SPPs accounting for up to 80% of the overall optical outcoupling (Γ_opt, including SPPs and photons). Regrettably, the unilaterally open configuration of MIG-TJs undermines the high LDOS associated with the MIM-SPP mode, leading to a reduced overall enhancement in optical outcoupling (Γ_opt/Γ_0, where Γ_0 denotes the free space decay rate), relative to MIM-TJs.
For all of these reasons, alternative viable methods to engineer mode dispersion and tailor the outcoupling pathways without compromising the MIM or MIG multilayer characteristic of the TJs are required. It is also important to note that achieving a simultaneous combination of high outcoupling to guided modes and low mode propagation losses is crucial for optimizing TJs in long-distance waveguiding systems. High propagation losses restrict all-metal plasmonic circuitry and demand alternative options without an extensive compromise on the optical enhancement offered by the plasmonic components. 31−35 Here, we integrate plasmonic MIG-TJs on a multilayered substrate that incorporates a specially designed combination of high and low permittivity dielectric layers connected to a photonic waveguide. In this way, we modified the plasmonic mode properties of the TJ to excite a photonic-like mode with a much larger propagation length (l_p) than that of the single-interface SPPs. The high permittivity layer, positioned to support guided modes within the optical near field of the TJ, facilitates efficient energy transfer from the IET dipole into guided modes. To experimentally verify this point, we choose silicon nitride (SiN) and silicon dioxide (SiO_2) as the constituent materials, since they provide a high contrast of dielectric permittivity and low absorption at optical frequencies. 36 With a dedicated waveguide extended from the TJ area, the η values for coupling to the plasmonic ((62.77 ± 1.74)%) and photonic-like ((29.07 ± 0.72)%) modes are evaluated. The decent tradeoff between the outcoupling strength (ηΓ_opt/Γ_0) to each individual optical mode and l_p results in reasonably high figures-of-merit 37,38 (FOM = (l_p/λ_0)ηΓ_opt/Γ_0, where λ_0 stands for the wavelength) of 180 and 140, respectively, a factor of 7−8 higher than the value for a MIG-TJ on glass. Our demonstration provides a versatile method for engineering the LDOS to customize the TJ device's performance that is readily compatible with silicon nitride-based photonics platforms.
RESULTS AND DISCUSSION
Figure 1a shows a schematic illustration of the TJ used in our calculations. Each TJ comprises two electrodes (ε_T and ε_B) separated by an insulator layer (ε_i) with a thickness of 2 nm. The TJ is supported on a dielectric substrate (ε_sub) or a multilayered substrate (inset), and the superstrate (ε_sup) is normally air (ε_sup = 1). The red oval represents the IET dipole. From the dipole decay rate Γ (normalized to the free space decay rate Γ_0), the LDOS for the various outcoupling channels can be quantified following the method previously reported. 18 The differential of Γ with respect to k∥/k_0 gives a more intuitive understanding of the mode-LDOS and is represented as ΔΓ = d(Γ/Γ_0)/d(k∥/k_0), where k_0 is the free space wavenumber (see section S1 in the Supporting Information (SI) for details). 30 To benchmark our approach, we first calculated the properties of the MIM and MIG junctions on a homogeneous substrate. Figure 1b shows a log−log plot of ΔΓ calculated for the MIM (dotted blue line) and MIG (solid blue line) configurations with an infinitely thick Au layer as the top electrode and infinitely thick Au or few-layer graphene (FLG) as the bottom electrode. The two sets of ΔΓ spectra exhibit peaks at k∥/k_0 ≈ 1.5 and 24, corresponding to the respective characteristic plasmonic modes supported by the MIG and the MIM configurations. We note that, in ΔΓ spectra, the inverse of the peak width determines the l_p of a mode. 39 The MIM configuration yields a much broader peak, resulting in a much smaller l_p (tens of nm) than the MIG configuration, which offers a longer l_p of a few micrometers. While the MIG configuration features lower losses, the overall emission rate is 2−3 orders of magnitude lower than that of the MIM configuration, judged from the peak values of each configuration's characteristic modes. These results are similar to earlier reports, but importantly, they demonstrate the tradeoff between lowering k∥/k_0 and the emission rates. Now, we turn to the layered substrate to demonstrate control over the mode dispersion. A three-layer stack is considered, as illustrated in Figure 1a (inset), where the substrate is modified to ε_sub = ε_low − ε_high − ε_low, where ε_low and ε_high represent low and high permittivity layers, respectively. For representation purposes, we set the thickness of the upper low permittivity layer at 20 nm and that of the high permittivity layer at 350 nm. In general, the thicknesses of these layers are important parameters in controlling the mode dispersion and propagation length. This is investigated further in detail to understand the characteristics of the modes supported by the layered substrate. The immediate effect of this modification on dipole outcoupling is evident in the ΔΓ plot (Figure 1c), where we fix ε_low at 2.1 to match ε_glass while varying ε_high from 2.1 to 4. When ε_low = ε_high = 2.1, the mode dispersion is identical to that of the MIG case shown in Figure 1b (solid blue line), where a single mode (M1) is observed. As ε_high is progressively increased to 4, k∥/k_0 of M1 monotonically shifts from 1.51 to 2.00, indicating higher mode confinement. Interestingly, a second mode (marked as M2) at a lower k∥/k_0 (∼1.52) also emerges with the increase in ε_high, with a narrower peak width than M1, indicating a lower propagation loss. Apparently, M2 is less sensitive to ε_high than M1.
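As a rough illustration of how a mode's propagation length can be read off a ΔΓ trace, the following Python sketch (our own toy example, not the code behind Figure 1; the peak parameters are made up) fits a Lorentzian to a synthetic peak and converts its half-width into l_p via l_p = 1/(2 Im k∥):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(k, k_c, hwhm, amp):
    return amp * hwhm**2 / ((k - k_c)**2 + hwhm**2)

lam0 = 900e-9                                  # wavelength near the spectra below
k0 = 2 * np.pi / lam0

# synthetic stand-in for a computed Delta-Gamma trace around a narrow mode peak
k_norm = np.linspace(1.40, 1.60, 400)          # k||/k0 axis
dGamma = lorentzian(k_norm, 1.52, 0.0020, 1.0)

(k_c, hwhm, amp), _ = curve_fit(lorentzian, k_norm, dGamma, p0=(1.5, 0.01, 1.0))
lp = 1.0 / (2.0 * hwhm * k0)                   # HWHM in k||/k0 times k0 ~ Im(k||)
print(f"n_eff ~ {k_c:.3f}, l_p ~ {lp * 1e6:.1f} um")
```

With a half-width of 0.002 in k∥/k_0 at λ_0 = 900 nm this yields l_p ≈ 36 μm, the order of magnitude quoted for M2 below.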
To improve our understanding of the characteristics of M1 and M2, ΔΓ, calculated as a function of k∥ and energy, is plotted in Figure 2a. The ΔΓ is plotted on a log scale and exhibits two bright bands corresponding to the dispersion of the M1 and M2 modes, with their intensity maxima indicated by hollow black circles. Both modes are located to the right of the light dispersion in glass (dashed white line), revealing their bound character. Since M1 and M2 show different sensitivities to the value of ε_high, the true character of these modes is understood from the variation of their effective indices with the thickness of the ε_high layer, as shown in Figure 2b (left panel). A distinct feature exhibited by M2 is the mode cutoff, where the dispersion curve intersects the glass light line (dashed vertical line) when the thickness of the high permittivity layer is 280 nm (indicated by the black arrow). This shows the photonic character of M2, compared to M1, which has no lower cutoff limit and, therefore, is plasmonic.
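The cutoff behavior can be illustrated with a textbook asymmetric-slab calculation. The Python sketch below (a simplified stand-in for the full stratified-medium treatment: it ignores the Au electrode and the 20 nm SiO_2 spacer, so it will not reproduce the 280 nm value quantitatively) solves the TE0 transverse-resonance condition for an air/SiN/glass slab and returns no bound solution below the cutoff thickness, the same light-line-crossing criterion as in Figure 2b:

```python
import numpy as np
from scipy.optimize import brentq

lam = 0.9e-6                                    # free-space wavelength, ~900 nm
k0 = 2 * np.pi / lam
n_core, n_sub, n_cov = 2.0, 1.449, 1.0          # sqrt of eps = 4.0, 2.1, 1.0

def te0_residual(n_eff, d):
    """TE0 transverse resonance condition for an asymmetric slab of thickness d."""
    kappa = k0 * np.sqrt(n_core**2 - n_eff**2)          # transverse wavenumber in core
    gs = k0 * np.sqrt(max(n_eff**2 - n_sub**2, 0.0))    # substrate decay constant
    gc = k0 * np.sqrt(n_eff**2 - n_cov**2)              # cover decay constant
    return kappa * d - np.arctan(gs / kappa) - np.arctan(gc / kappa)

def n_eff_te0(d):
    """Effective index of TE0, or None below cutoff (mode not bound to glass)."""
    lo, hi = n_sub + 1e-9, n_core - 1e-9
    if te0_residual(lo, d) * te0_residual(hi, d) > 0:
        return None
    return brentq(te0_residual, lo, hi, args=(d,))

for d_nm in (50, 100, 200, 350):
    print(d_nm, n_eff_te0(d_nm * 1e-9))                 # None for the thinnest slab
```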
In the right panel of Figure 2b, we compare the l_p of M1 (solid red line) and M2 (solid blue line) for an ε_high layer thickness of 350 nm (larger than the M2 cutoff thickness) over the energy range of 1.2−2.0 eV. For the low permittivity layer between the MIG-TJ and the ε_high layer, a thickness of 20 nm is chosen, which can readily be formed during device fabrication (thinner films risk being inhomogeneous). Compared to the plasmonic mode M1, M2 shows more than half an order of magnitude improvement in l_p, consistent with the photonic character of M2. As opposed to pure photonic waveguide cases, the l_p of M2 is limited to ∼36 μm due to the interaction losses of M2 with the Au layer at nanoscale proximity (∼20 nm). The shaded region in Figure 2b indicates the l_p of Au interface SPPs when the adjacent dielectric permittivity is varied in the range 2.1 ≤ ε_sub ≤ 4.0. We note that M1 falls in this range and exhibits an intermediate l_p value compared to single-interface SPP modes. The photonic-like M2, while experiencing loss, falls out of the shaded zone, demonstrating an improved l_p value compared to the plasmonic modes.
Figure 3a shows the schematic of the experimental device with the MIG-TJ on a SiO_2−SiN−glass multilayer substrate with ε_SiN = 4.0 and ε_SiO2 = 2.1, providing a high permittivity contrast with low absorption. 36 SiN and SiO_2 are chosen for the multilayer substrate as these materials are readily available along with well-established fabrication processes. 40 The fabrication details are described in the Methods section. In the design, an Au strip is used as one of the electrodes of the TJ, which also serves as the plasmonic waveguide (pl-WG) that extends up to 10 μm from the TJ. The SiO_2−SiN stack is etched to create a strip extending beyond the Au strip's length. The SiO_2−SiN−glass combination, characterized by its high permittivity contrast, fulfills the requirements for total internal reflection, forming a photonic waveguide (ph-WG; see Figure 3b). The two-strip design extending from the TJ area allows us to independently analyze the propagation and scattering of the M1 and M2 modes from far-field optical measurements.
The electrical bias triggers tunneling processes in the MIG-TJ area. As described in our previous report, 30 the TJ area containing the IET dipole excites all guided modes and emits photons directly (dashed gray arrow, Figure 3b). The propagation of M1 and M2 originating from the IET dipole is represented by wavy red and blue arrows. At the pl-WG end, M1, which is essentially plasmonic, experiences severe impedance mismatch due to the termination of the Au strip, and most of the M1 power is scattered into the glass medium immediately (dashed red arrow, Figure 3b). In comparison, the photonic-like M2 is ideally suited to propagate in the ph-WG due to its photonic nature and matching impedance characteristics. It readily propagates until the end of the waveguide, resulting in M2 scattering (dashed blue arrow, Figure 3b). Figure 3c provides the current−voltage I(V_b) curve of the experimental MIG-TJ used to excite the guided modes (see Figure 4 for device images). The MIG-TJ used in our experiments exhibits good electrical stability, as is evident from the current−time (I(t)) traces displayed in Figure 3d (along with additional I(V_b) scans in Figure S1 in the SI). This can be attributed to the electrical stability of the hBN barrier. 41 Figure 4a shows the optical microscopy image of the fabricated device. The MIG-TJ comprises layers of Au (60 nm thick), hexagonal boron nitride (hBN, outlined by dashed gray lines), and FLG (outlined by dashed black lines). The TJ area is the overlapping junction between the Au and FLG electrodes. The x-coordinate of the TJ area center is defined as x = 0 (as indicated by the yellow arrow). Both the Au and SiO_2−SiN waveguides have a width of 2 μm. We employed an inverted optical microscope with an electron multiplying charge-coupled device (EMCCD) camera to record the far-field emission from the device: all emission contributions, viz. M1 and M2 scattering (from x = 10 and 20 μm) and direct TJ area emission, are collected through the glass substrate using an index-matching oil immersion objective (NA = 1.49). Figure 4b shows the EMCCD image of the light emission for V_b = 1.5 V. For IET, in general, the maximum energy of the emitted photons is dictated by the quantum cutoff (ℏω_max = eV_b), and for the EMCCD the lower detection limit is ∼1100 nm (∼1.1 eV). Therefore, in Figure 4b, with V_b = 1.5 V (cutoff ≡ 825 nm), the EMCCD detects photons in the range of 825−1100 nm (see the Methods section for measurement details). We observe that the direct emission from IET uniformly illuminates the TJ area. At the same time, the plasmonic mode (M1), which is generally nonradiative and confined to the metal−dielectric interface, scatters from the ±y edges of the Au strip and results in an enhanced intensity at these locations. 22,29,30
Figure 4b also shows the light scattering from the pl-WG and ph-WG ends at x = 10 and 20 μm, respectively. Interestingly, we also note that the etched boundaries of the SiO_2−SiN substrate along the y-axis show a weak scattering contribution, probably from M2, which propagates isotropically in all directions from the TJ before it couples to the ph-WG. To confirm the separate excitation of M1 and M2 from the TJ, we compare the emission characteristics of the first device shown in Figure 4b (Sample 1) with a second MIG-TJ (Sample 2) lacking the pl-WG but with the photonic SiO_2−SiN strip (see section S3 in the SI). Emission intensities, integrated along the width (y) of the waveguides and normalized to the average tunnel current, are plotted as a function of x in Figure 4c for Sample 1 (red circles) and Sample 2 (hollow blue circles). At x = 0 and x = 20 μm, emission from both Sample 1 and Sample 2 shows similar characteristics. However, Sample 2 shows no signature of the plasmonic scattering (M1) peak at x = 10 μm. It is worth noting that M2 propagation remains unaffected, extending up to x = 20 μm; consequently, the M2 scattering at x = 20 μm exhibits nearly equal intensities for both samples, corroborating the independent excitation of M1 and M2 from the TJ area.
We used two-dimensional finite-difference time-domain (FDTD) simulations to qualitatively verify the experimental observations. A vertical dipole with broadband spectral response located at x = 0 (Figure 4d) is used as the source for exciting the M1 and M2 modes. Figure 4d demonstrates the simulated electric field (E_z) distribution, which reveals direct emission to the glass substrate and excitation of propagating modes within the dedicated waveguides. As the Au strip terminates at x = 10 μm, the plasmonic M1 undergoes scattering, whereas the photonic-like M2 propagates along the SiO_2−SiN waveguide until it reaches x = 20 μm. The IET dipole at x = 0 eventually results in the three distinct emissions observed at x = 0, 10, and 20 μm. The simulation results are in good agreement with the experimental observations. We analyzed the spectral characteristics of the emission from x = 0 (direct emission), x = 10 μm (plasmonic), and x = 20 μm (photonic) with a spectrograph (for the range 300−1100 nm). Figure 5a shows the light emission spectra recorded for all three locations. All spectra exhibit a broadband emission with the quantum cutoff (ℏω_max = eV_b, indicated by black arrows) as the high energy limit. As V_b increases from 1.5 to 2.0 V, the emission intensities increase, accompanied by a blueshift of the spectral cutoff. We employ Gaussian functions for spectral fitting to determine the cutoff energies. The cutoff energy is defined as the value at 3σ of the Gaussian function. Figure 5b illustrates the correlation between the spectral cutoff and V_b. It is evident that, with increasing V_b, the spectral cutoff energies for all three collection positions (x = 0 (denoted by hollow gray diamonds), x = 10 μm (denoted by hollow red circles), and x = 20 μm (denoted by hollow blue triangles)) experience a blue shift and remain closely aligned with the theoretical value (indicated by the dashed black line), although they are slightly higher, likely due to the influence of finite lattice temperature. 42 Additionally, note that the experimental spectral cutoff observed at x = 0 (corresponding to direct photon emission) consistently surpasses the theoretical cutoff by ∼80 meV. In contrast, as V_b increases, the spectral cutoff energies for positions x = 10 and 20 μm gradually converge toward the theoretical values, as evident from the linear fits to the experimental data (solid color lines). This convergence trend can be attributed to the gradual increase in propagation loss as the energy of the guided modes approaches 2 eV. This experimental observation aligns with the theoretical analysis in Figure 2b (right panel).
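The cutoff extraction step can be sketched as follows (a toy Python example on synthetic data, not the published analysis pipeline): fit a Gaussian to the spectrum and report the value 3σ above the center as the cutoff energy. For the synthetic spectrum below, the recovered cutoff lands near the quantum limit eV_b = 1.5 eV.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(E, mu, sigma, amp):
    return amp * np.exp(-0.5 * ((E - mu) / sigma) ** 2)

E = np.linspace(1.1, 1.7, 200)                       # photon energy axis, eV
spectrum = gaussian(E, 1.30, 0.07, 1.0)              # stand-in measured spectrum
spectrum += np.random.default_rng(0).normal(0, 0.01, E.size)   # detector noise

(mu, sigma, amp), _ = curve_fit(gaussian, E, spectrum, p0=(1.3, 0.1, 1.0))
cutoff = mu + 3 * abs(sigma)                         # cutoff defined at 3 sigma
print(f"cutoff ~ {cutoff:.2f} eV (quantum limit eV_b = 1.5 eV)")
```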
We integrate the spectral intensities to determine the η of M1 and M2 by comparing them with the direct photon emission from the TJ area (refer to section S4 in the SI for details). Figure 5c shows the η values as a function of V_b in the range of 1.5−2.0 V. The average η in this range for M1 (solid red circles) reaches (62.77 ± 1.74)%, while that for M2 (solid blue triangles) reaches (29.07 ± 0.72)%. The experimental values agree well with the theoretical estimates (dashed red line for M1, dashed blue line for M2) based on an integrated ΔΓ calculation (refer to sections S1 and S4 in the SI for details). We note that, in ref 30, the η value of the plasmonic mode in bare MIG-TJs is ∼80%. This contrast underscores that the multilayer design enables us to preferentially outcouple the photonic-like mode (M2) at the expense of the plasmonic mode (M1).
Furthermore, we invoke the FOM 37,38 to quantify the tradeoff between the mode outcoupling and the propagation losses. Table 1 provides the theoretical parameters for M1 and M2, including ηΓ_opt/Γ_0 and l_p obtained from Figures 2a and 2b, respectively. These values result in an FOM of 180 for M1 and 140 for M2. The FOM values are evaluated at the central wavelength of the experimental spectra (λ_0 ≈ 900 nm when V_b = 1.5 V; see Figure 5a). Applying the same approach, we determined a FOM of 23 for the single-interface SPP mode in the bare MIG configuration, with ηΓ_opt/Γ_0 = 4 and l_p = 5.2 μm. 30 This indicates a significant improvement in overall device performance when hybridizing MIG-TJs with multilayer substrates, compared to the previously reported bare MIG configurations. Despite the improvement, it is essential to acknowledge that there remains room for refining the tradeoff between outcoupling and losses. Achieving this may involve further optimization of the thickness or permittivity of the substrate layers; it would also be interesting to explore alternative dielectric films characterized by low loss and high permittivity, such as TiO_2 or Al_2O_3. 43 However, the chance of eliminating propagation losses by relying solely on modifying the multilayer substrate of the TJ is limited. To further narrow the divide between plasmonic TJs and photonic transmission lines, one can incorporate other structural designs, including techniques like phase matching 44,45 or adiabatic conversion 46,47 of the outcoupled modes.
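The FOM arithmetic is easy to verify from the numbers quoted in the text; for the bare MIG configuration (ηΓ_opt/Γ_0 = 4, l_p = 5.2 μm, λ_0 ≈ 900 nm) it reproduces the stated value of 23. The M1 and M2 inputs are given in Table 1 and are not repeated here.

```python
# FOM = (l_p / lambda_0) * eta * Gamma_opt / Gamma_0, all lengths in micrometers
def fom(lp_um: float, lam0_um: float, eta_gamma: float) -> float:
    return (lp_um / lam0_um) * eta_gamma

print(round(fom(5.2, 0.9, 4.0)))  # bare MIG reference: prints 23
```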
CONCLUSIONS
In conclusion, our study demonstrates a method to engineer the LDOS of MIG-TJs by modifying the SiN and SiO_2 multilayer substrate. By precisely controlling the dielectric permittivity of the constituent layers, we effectively modify the outcoupling channels of the MIG-TJ and the properties of the propagating modes. The junctions were equipped with dedicated waveguides extending from the TJ area, enabling us to evaluate the η values of the different guided modes. Our experimental results show that outcoupling from the TJ to the plasmonic and photonic-like modes can reach (62.77 ± 1.74)% and (29.07 ± 0.72)%, respectively. The decent tradeoff between outcoupling and propagation losses for the plasmonic and photonic-like modes results in reasonably high FOM values of 180 and 140, respectively, a factor of 7−8 improvement over MIG-TJs lacking the multilayered structure, showcasing that our approach offers a versatile method for tailoring the LDOS to customize the performance of plasmonic TJs. The experimentally demonstrated coupling of the photonic-like mode to low-loss waveguides presents a promising opportunity to establish a connection between plasmonic TJs and photonic transmission lines.
METHODS
Fabrication. The multilayer substrates were prepared using PECVD (Oxford, Model PlasmaPro System100). A borosilicate glass substrate (Marienfield, 160 μm thick) was sequentially coated with a 350-nm-thick SiN layer and a 20-nm-thick SiO_2 layer. MIG-TJs were fabricated on the as-prepared SiO_2−SiN−glass multilayer substrates. The FLG (∼1.6 nm, corresponding to five layers) and hBN (∼1.7 nm, corresponding to five layers) flakes were exfoliated from graphite (NGS Naturagraphit GmBH) and hBN (HQ Graphene) crystals. The flakes were assembled via the PMGI/PMMA sacrificial layer transfer method. 30 The FLG flake was etched as a 5-μm-wide strip using electron beam lithography (EBL) (JEOL, Model JBX-6300FS) and O_2 plasma etching (Femto Science, Model VITA). The hBN-FLG heterostructure was annealed in a vacuum at 220 °C for 6 h. The flake thicknesses were determined using atomic force microscopy (AFM) (Bruker FastScan). A 2-μm-wide and 60-nm-thick Au strip, with a 2-nm-thick Ti adhesion layer, was then fabricated on top of the hBN-FLG heterostructure using EBL and thermal evaporation (Kurt J. Lesker, Model NANO 36). The SiO_2−SiN waveguide that extended from the Au strip was defined by etching the substrate 420 nm in depth using EBL and deep reactive ion etching (DRIE) (Oxford, Model PlasmaPro 100 Cobra).
Measurements. The experimental characterizations were performed on an inverted optical microscope (Nikon, Model Eclipse Ti-E) with an oil-immersed objective (Nikon, 100×, NA 1.49). A continuous V_b supplied from a source meter (Keithley 6430) was applied to the sample through microprobes. The light emission of the device was collected from the backside of the sample through the glass substrate. Light emission images were collected using an electron-multiplying charge-coupled device (EMCCD) (Andor, Model iXon Ultra 897) with an integration time of 30 s. Light emission spectra were collected using a spectrometer (Andor, Model Shamrock 303i) with an integration time of 120 s. See Figure S3 in the SI for the quantum efficiency of the EMCCD.
Numerical Simulation. The decay rate (Γ) is numerically evaluated in MATLAB, based on the theory given in section S1 of the SI. The differential of Γ with respect to k∥/k_0 is calculated for the stratified medium with the optical permittivity combination ε_sup − ε_T − ε_i − ε_B − ε_sub, with the IET dipole located in ε_i. To obtain Figure 4d, we performed two-dimensional simulations using Lumerical FDTD Solutions (Ansys Canada, Ltd.). 48 The optical properties of Au and SiO_2 were taken from Palik, 49 and the SiN parameters were taken from Philipp. 50 The total simulation area was 30 μm × 5 μm (x × z), with perfectly matched layers at all simulation boundaries. A minimum mesh size of dx = dz = 5 nm was applied across the simulation area.
Figure 1. (a) Cross-sectional view of the TJs with two electrodes separated by a thin insulator used for modeling. The red oval represents the IET dipole, and the substrate and superstrate are homogeneous single layers extending to infinity. The inset shows the multilayer stack used in place of the homogeneous substrate to tailor the dipole outcoupling pathways. (b) ΔΓ plotted as a function of normalized in-plane wavenumber (k∥/k_0) for the MIG (solid blue line) and the MIM (dotted blue line) configurations on substrates with ε_sub = 2.1 (i.e., the permittivity of glass). The insets show schematic representations of the plasmonic modes supported by the two configurations. (c) ΔΓ calculated for MIG on a multilayer substrate with ε_high varied from 2.1 to 4.0 at a fixed ε_low value of 2.1. The thickness of the upper low permittivity layer is set at 20 nm, and that of the high permittivity layer at 350 nm in the calculations.
Figure 2. (a) log(ΔΓ) plotted as a function of k∥ and energy. The M1 and M2 modes are demarcated with hollow black circles at the intensity maxima. The dashed white line indicates the light dispersion in the glass medium. (b) The left panel shows ΔΓ evaluated as a function of k∥/k_0 and ε_high layer thickness at an energy of 1.37 eV (≈ 900 nm). The black arrow indicates where the M2 dispersion crosses the light line in the glass (dashed vertical line). The right panel shows the propagation lengths l_p evaluated for M1 (solid red line) and M2 (solid blue line) as a function of energy. The l_p variation for Au single-interface SPP modes with ε_sub = 2.1 (dashed gray line) to ε_sub = 4.0 (dashed black line) is shown as a shaded area.
Figure 3. (a) Schematic illustration of the MIG-TJ on a SiO_2−SiN−glass multilayer substrate. The Au and SiO_2−SiN strips extend from the TJ area, serving as the pl-WG and ph-WG. (b) Cross-sectional (xz-plane) view of the device. The red oval at the TJ area's center represents the effective IET dipole. (c) Experimental I(V_b) curve measured for the range |V_b| ≤ 2 V. (d) I(t) traces recorded over 120 s for V_b = 1.5−2.0 V.
Figure 4. (a) Optical microscope image of the experimental device. Dashed lines outline the FLG and hBN. (b) EMCCD image of light emission for V_b = 1.5 V. The emission intensity was integrated for 30 s during light collection. (c) Emission intensity integrated along the width (y) of the waveguides plotted as a function of x for Sample 1 (hollow red circles) and Sample 2 (hollow blue circles). The Au strip is etched out in the case of Sample 2. Both datasets are normalized to the average tunnel current. (d) Simulated electric field (E_z) profile in the xz-plane from the FDTD method. The dipole source is positioned at x = 0 on the x-axis and 1 nm below the Au layer in the z-direction.
Figure 5. (a) Light emission spectra collected from the TJ area at x = 0 (top panel), from the Au strip end at x = 10 μm (middle panel), and from the SiO_2−SiN strip end at x = 20 μm (bottom panel). The spectra are integrated over 120 s. The black arrows indicate the quantum cutoff energy. (b) Cutoff energies as a function of V_b for the light emission spectra collected at x = 0 (hollow gray diamonds), x = 10 μm (hollow red circles), and x = 20 μm (hollow blue triangles). Solid lines represent linear fits to the experimental data. The dashed black line represents the theoretical cutoff ℏω = eV_b. (c) Relative coupling efficiency η for M1 (solid red circles) and M2 (solid blue triangles) as a function of V_b, calculated from experiments. Theoretical data are depicted as the dashed red line for M1 and the dashed blue line for M2.
Table 1. Theoretical Values of the FOM for M1 and M2.
The upper end of climate model temperature projections is inconsistent with past warming
Climate models predict a large range of possible future temperatures for a particular scenario of future emissions of greenhouse gases and other anthropogenic forcings of climate. Given that further warming in coming decades could threaten increasing risks of climatic disruption, it is important to determine whether model projections are consistent with temperature changes already observed. This can be achieved by quantifying the extent to which increases in well mixed greenhouse gases and changes in other anthropogenic and natural forcings have already altered temperature patterns around the globe. Here, for the first time, we combine multiple climate models into a single synthesized estimate of future warming rates consistent with past temperature changes. We show that the observed evolution of near-surface temperatures appears to indicate lower ranges (5–95%) for warming (0.35–0.82 K and 0.45–0.93 K by the 2020s (2020–9) relative to 1986–2005 under the RCP4.5 and 8.5 scenarios respectively) than the equivalent ranges projected by the CMIP5 climate models (0.48–1.00 K and 0.51–1.16 K respectively). Our results indicate that for each RCP the upper end of the range of CMIP5 climate model projections is inconsistent with past warming.
Introduction
Climate models project a wide range of future warming following a particular scenario of emissions over coming decades, due to their different sensitivities to climate forcings, and the different rates at which models take up heat into the interior of the ocean [1]. Given that higher rates of warming could be associated with greater adaptation challenges and could increase the risks of impacts that are abrupt or
irreversible [2], it is important to determine whether some model projections are more likely than others.Observations of past climate changes provide a means of discriminating between climate models, based on how well they capture the anthropogenic fingerprints of change that have already emerged in observations [3][4][5].
The extent to which models under- or over-estimate the past response to anthropogenic and natural forcings is closely related to the extent to which they under- or over-estimate the future response to these forcings [6]. Therefore observations of past warming may be used to make estimates of the temperature response to future forcings that, to first order, are not biased by model errors in climate feedbacks or in rates of ocean heat uptake [7,8]. As the signal of climate change strengthens relative to the noise of natural internal variability, the uncertainties in future warming rates are expected to decrease [5], an expectation borne out by subsequent projections that also include 21st century observations [9,10].
Recently a new multi-model ensemble of climate models, CMIP5, has become available [11], including simulations of future warming rates following the Representative Concentration Pathways (RCPs) of emissions [12]. While CMIP5 provides the most complete exploration of climate model uncertainty yet undertaken, with a new generation of climate models that incorporate more sophisticated treatments of forcings such as anthropogenic aerosols and land use changes, it provides an ensemble of opportunity rather than a systematic exploration of modelling uncertainty [13]. Given that higher rates of warming in the coming decades will potentially pose greater challenges to communities seeking to adapt to future climate change, it is important to evaluate whether the range of warming projected across this new ensemble of opportunity is representative of the likely range of future warming consistent with warming already observed. We assess this using detection and attribution techniques to compare observations and models, thereby investigating whether the CMIP5 ensemble of opportunity provides a realistic exploration of future uncertainty in temperatures and whether future warming rates derived from the CMIP5 ensemble could be biased systematically to give too much or too little warming.
CMIP5 simulations
The CMIP5 models we analyse here are those models that have the three types of simulations we require in order to distinguish between errors in models' climate sensitivity and their net forcing, which could, when anthropogenic and natural forcings are combined in the same simulations, lead to possible cancellation of errors. These are: simulations including only changes in well mixed greenhouse gases (GHG); simulations including both changes in well mixed greenhouse gases and other anthropogenic forcings, including the direct and indirect effects of aerosols and, in many models, the effects of tropospheric and stratospheric ozone changes (ANT); and simulations including natural forcings only (NAT). In addition, we require two types of simulations of future change. These are the standard CMIP5 RCP simulations including all the anthropogenic forcings considered, and also additional simulations including changes in well mixed greenhouse gases only following the RCP scenarios but with all other anthropogenic and natural forcings held constant. These are needed in order to account for the effects of errors in a model's transient climate response to forcings separately from the effects of errors in a model's net forcings. By scaling the model's simulated response to well mixed greenhouse gases up or down separately from its response to other anthropogenic forcings, the effects of such systematic errors on the model's projections can be corrected.
In this paper, we consider 6 climate models, the CanESM2, CNRM-CM5, CSIRO-Mk3-6-0, HadGEM2-ES, NorESM1-M and IPSL-CM5A-LR models, taking data from the CMIP5 archive [11]. For the historical period, all 6 climate models had at least one GHG simulation. However, very few climate modelling centres have carried out GHG simulations that include future increases in well mixed greenhouse gases only following the RCP scenarios, because these were not part of the CMIP5 experimental protocol [11]. The only such climate model runs we analyse here are those of the HadGEM2-ES and CanESM2 models [9,10]. Observationally constrained projections have been shown to have a somewhat lower range of warming when based on CanESM2 [9] than when based on HadGEM2-ES [10], where the former was based on an analysis of the 1851-2010 observational period and the latter was based on the shorter 1911-2010 observational period. However, it is possible to emulate reasonably precisely the global mean temperature response of a model to increasing greenhouse gases without performing transient climate model experiments [14].
Emulation of future GHG responses
Emulation of a climate model's response to future increasing greenhouse gas concentrations is achieved by scaling the response of the CMIP5 experiment in which CO2 is increased abruptly to four times pre-industrial concentrations and by assuming that the responses to forcing changes combine linearly [14,15]. This approach has been validated for various climate quantities at both global [14,15] and regional [16] scales.
An estimate of radiative forcing in climate models is required for the emulation. This is estimated using the method of Forster and Taylor ([17]; denoted FT), using the abrupt 4 × CO2 experiment to estimate feedback parameters [14]. This method requires climate model results for the scenario of interest. However, forcing for one climate model may be estimated using the forcing derived from another, using a linear scaling factor [14]. This method is applicable if at least one climate model has simulated the scenario of interest. It leads to accurate reproduction of CMIP5 RCP projections of global mean temperature and heat uptake [14].
For the historical period, all 6 climate models had at least one GHG simulation, so we estimate historical forcing directly for each GCM (where a GCM has multiple simulations, we used the mean of these). Hence, any committed responses to historical GHG forcings are accurately captured. For the future period, only two GCMs had GHG simulations (HadGEM2-ES and CanESM2; CanESM2 results were reserved for validation). We estimate the future GHG forcing for each climate model as the forcing in HadGEM2-ES multiplied by a constant scaling factor. This scaling factor for each climate model was estimated by linear regression of the GHG forcings over the historical period against the equivalent forcings for HadGEM2-ES. The temperature response to this forcing was then estimated as in [14]. Simulations by this approach are limited by the length of the abrupt 4 × CO2 experiment (140 years), so were initialized at 1910 (the RCP anomaly method, permitting initialization at 2005 in [14], is not applicable here). In principle initializing at 1910 can introduce a cold-start error [18], as it introduces an abrupt change in forcing at 1910 (by ignoring longer-term responses to pre-1910 forcing changes). However, this bias is negligible (figure 1, compare red and black lines). Interannual variability in our emulation comes from variability in the estimated forcing.
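The emulation step can be summarized in a few lines of Python (a schematic of the linear-response idea, not the code used in this study; variable names are ours): differentiate the abrupt-4×CO2 step response to get an impulse response, then convolve it with the forcing time series, normalized by the 4×CO2 forcing.

```python
import numpy as np

def emulate_gmst(T_step: np.ndarray, F_4x: float, forcing: np.ndarray) -> np.ndarray:
    """Emulate global mean temperature via linear response theory.

    T_step:  yearly global-mean warming (K) from the abrupt-4xCO2 run
    F_4x:    radiative forcing (W m^-2) of the abrupt quadrupling
    forcing: yearly GHG forcing anomaly (W m^-2) from the start year
    """
    assert len(forcing) <= len(T_step)   # limited by the 140-year 4xCO2 run
    impulse = np.diff(np.concatenate(([0.0], T_step)))   # step -> impulse response
    T = np.zeros(len(forcing))
    for t in range(len(forcing)):
        # superpose lagged responses, assuming responses combine linearly
        T[t] = np.dot(forcing[: t + 1][::-1], impulse[: t + 1]) / F_4x
    return T
```

A quick sanity check of the sketch: feeding in a constant forcing equal to F_4x returns the step response T_step itself, as linearity requires.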
Our model emulation is tested against future GHG simulations from CanESM2 and is found to track the actual GCM results during the historical period (figure 1, red and black lines); for the future period it is within the spread of simulations with different initial conditions (figure 1, red and blue lines). The reason for the difference in variability between the historical and future periods in figure 1 is that the results for the historical period represent means over all available initial value ensemble members (because the simple model forcing is based on the mean over all available CanESM2 runs for the historical period), whereas for the future period our emulation uses forcing derived from just a single GCM run (a projection from HadGEM2-ES). As a result it is appropriate to test the emulation procedure by comparing the emulation (red line) with the spread of individual CanESM2 runs (blue lines).
Observationally constrained projections
Having obtained the requisite model-based projections of future global mean temperatures following the RCP4.5 and RCP8.5 scenarios, we calculate observationally constrained warming rates based on fingerprints calculated from the six models. We use the results of standard optimal detection analyses [19][20][21][22] in which observed temperature changes from HadCRUT4 [23] are regressed against simulated GHG (greenhouse gases), OTH (other anthropogenic) and NAT (natural) responses from the different models. In this way, scaling factors are derived which represent the amount by which the models' responses to the different forcings need to be scaled up or down to be consistent with the observed temperature change. The distributions of scaling factors we use here are taken directly from two previous analyses considering large scale (>5000 km, [24]) decadal mean changes in near-surface temperature over the 1861-2010 and 1951-2010 periods [19,20]. Further technical details of the optimal detection analyses are contained in the two papers [19,20]. Importantly, such analyses require an estimate of internal variability. For these analyses, multiple climate models are combined, both from long control runs in which climate forcings are held constant, and by taking differences between simulations of the same climate model with identical forcings but different initial conditions, in order to obtain multiple possible realizations of multi-decadal scale variability in the absence of external forcings. In all cases, the signals of climate change due to external forcings are estimated by averaging multiple simulations of the models with the same forcing factors but different initial conditions. Further details of the models analysed, including the number of single forcing ensemble members averaged, and details of the forcing factors included, are provided in tables 4 and 6 of [20].
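Schematically, the regression at the heart of such an analysis looks like the following (a toy generalized-least-squares version in Python; published detection analyses typically use total least squares to allow for noise in the model-simulated signals, and work in a reduced EOF space):

```python
import numpy as np

def scaling_factors(y: np.ndarray, X: np.ndarray, C: np.ndarray):
    """Estimate fingerprint scaling factors by generalized least squares.

    y: (n,) observed decadal-mean temperature changes
    X: (n, 3) simulated GHG, OTH and NAT responses (one column each)
    C: (n, n) internal-variability covariance estimated from control runs
    """
    Ci = np.linalg.inv(C)
    beta = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)   # scaling factors
    cov_beta = np.linalg.inv(X.T @ Ci @ X)               # their uncertainties
    return beta, cov_beta
```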
To obtain observationally constrained estimates of future warming, distributions of scaling factors for greenhouse gas (GHG) and other anthropogenic (OTH) forcings for a particular climate model are used to scale that model's future projections of the global mean temperature response to greenhouse gases separately from the response to other anthropogenic forcings. Natural forcings are assumed not to change in future (apart from a repeating constant 11-year solar cycle of solar irradiance). In addition, an estimate of decadal mean internal variability is added to the result to produce the possible range of temperatures in any one decade due to the combination of forced changes and unforced internal variability [5]. The final result therefore takes account both of the uncertainties in predicting the response of the climate system to forcings and of the actual trajectory of the climate system around the forced response due to unpredictable internal variability. We also take account of the correlation between uncertainties in scaling factors by considering their joint distributions; for example, the upper (lower) percentiles of the distribution of scaling factors on the greenhouse gas contribution may be associated with the upper (lower) percentiles of the scaling factors on the other anthropogenic factors if the joint uncertainties allow large (small) greenhouse warming to be balanced by large (small) aerosol cooling. These joint distributions of scaling factors are then applied to the models' projections of global mean temperature response to greenhouse gases and to other anthropogenic forcings in future following the RCP4.5 and RCP8.5 scenarios [12] to derive likelihood distributions of future warming consistent with past warming. We apply the individual model analyses separately for each of the models, and we also average the models and apply the multi-model scaling factors to the average of the six model projections (see [19, 20] for further details of the optimal detection analyses from which we obtain the relevant scaling factors).
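A minimal Monte Carlo sketch of this step follows, assuming (for illustration only) that the joint scaling-factor distribution is Gaussian; the real analysis uses the joint distributions from the detection studies directly, and all names are hypothetical.

```python
import numpy as np

def constrained_warming_range(dT_ghg, dT_oth, beta_mean, beta_cov,
                              sigma_decadal, n=100_000, seed=0):
    """Sample the joint (GHG, OTH) scaling-factor distribution, apply it to a
    model's projected decadal-mean warming contributions, add unforced
    decadal variability, and return the 5th and 95th percentiles."""
    rng = np.random.default_rng(seed)
    betas = rng.multivariate_normal(beta_mean, beta_cov, size=n)
    noise = rng.normal(0.0, sigma_decadal, size=n)  # internal variability
    dT = betas[:, 0] * dT_ghg + betas[:, 1] * dT_oth + noise
    return np.percentile(dT, [5, 95])
```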
Results and discussion
The resultant uncertainty ranges (5%-95%) on future warming according to the RCP4.5 and RCP8.5 scenarios are shown in figures 2 and 3. Estimates obtained from individual models alone (green lines) are shown in each panel for the two observational analysis periods (1861-2010: solid lines; 1951-2010: dashed lines) and compared with the multi-model-average-based estimates of future warming derived from the 1861-2010 observational analysis period (black lines). The analyses based on individual models alone support the multi-model average results, with the exception of the CSIRO-based analysis, which gives much higher rates of possible warming (within the 5%-95% range), and one of the CanESM2-based analyses, which gives much lower rates of possible warming. The CSIRO model was an outlier in the attributable trends estimated for the 1861-2010 period, with the other 5 model results considered being much more consistent [19]. Inconsistent uncertainty ranges imply there is an additional source of uncertainty not accounted for in our analysis, due for example to errors in modelled patterns of temperature response to forcings. While further work is needed to investigate why the CSIRO model gives much higher attributable near-surface warming due to greenhouse gases than all other models investigated so far, the general degree of agreement between observationally constrained warming ranges estimated from individual model results and from the multi-model average supports using the latter as our best estimate of likely future warming rates.
Based on the multi-model average results, we find the 5-95 percentile ranges of warming consistent with past observed changes from 1951-2010 to be 0.35-0.82 K and 0.45-0.93 K by the 2020s (2020-2029 decadal mean) relative to 1986-2005 according to the RCP4.5 and RCP8.5 scenarios, respectively. However, given the two outlier results, there is still the possibility of rates of warming much lower or higher than the 5%-95% uncertainty ranges based on the multi-model average results.
In figure 4 we compare the observationally constrained ranges (5 and 95 percentiles, black solid lines) with the 5-95 percentile range from the full CMIP5 multi-model ensemble (grey shaded area). The 5-95 percentile ranges of warming from the multi-model ensemble (treating each model with equal weight) are 0.48-1.00 K and 0.51-1.16 K by the 2020s (2020-2029 decadal mean) relative to 1986-2005 according to the RCP4.5 and RCP8.5 scenarios, respectively. Our results show that taking the 95th percentile of the multi-model range substantially over-estimates the 95th percentile of the range of uncertainty of future warming consistent with past temperature changes. Thus observations appear to rule out the uppermost part of the CMIP5 multi-model ensemble. This is consistent with the uncertainty ranges of scaling factors on the greenhouse gas contribution to past warming from the original detection analyses [19, 20] being systematically less than 1.
While we have shown that these results are supported by a range of models with a range of different climate sensitivities, for two of the models we used (CanESM2 and CSIRO) the results were somewhat different. Our analysis accounts for gross errors in a model's transient climate response and net forcing over the past, but it assumes that an individual model's large-scale pattern (or 'fingerprint') of temperature response to a particular forcing is correct (allowing errors only in its magnitude). By averaging over many models the risk of model pattern error is reduced, as has also been demonstrated for modelled simulation of observed climatology [25]. As a result, using the consensus model average for these fingerprint patterns, scaled by factors which correct for the extent to which such fingerprints are over- or under-estimated, produces a more robust estimate of uncertainty in future global warming than is obtainable from the raw model data alone. Nevertheless, additional uncertainties remain which have not been quantified and are therefore not incorporated in our analysis, including errors in climate models' simulation of the patterns of near-surface temperature response to external forcings and errors in their simulation of natural internal variability over multi-decadal timescales. Furthermore, because climate model simulations including future increases in well-mixed greenhouse gases only were not available, we applied an emulator to represent the global mean temperature response of a model to increasing greenhouse gases, and any errors in this emulation will provide an additional source of uncertainty in the analysis. As such, while our results indicate that the upper part of the CMIP5 model range appears inconsistent with past warming, further improvements in models' representation of internal variability and of the response of the climate system to external forcings, as well as additional model simulations including future increases in well-mixed greenhouse gases, could provide improved estimates of likely future warming rates.
All the results given here are conditional on a particular evolution of anthropogenic emissions for each RCP. There is therefore an additional source of uncertainty in future warming due to forcing uncertainty, a key source of which is uncertainty in future aerosol emissions, with rapid warming possible if global aerosol emissions reduce substantially [26].
By use of more observational data it may be possible to improve such observational constraints on future warming. Simply waiting for another ten years of near-surface temperature data should reduce the uncertainties [3], but use of other observations in addition to near-surface temperatures, for example observed temperatures of the interior of the ocean, may help to improve such constraints sooner and help to identify which aspects of climate model simulations of the past may be in error, be it ocean heat uptake, climate sensitivity or net forcing [27, 28]. While a climate model's temperature response in future does not appear to be related to its simulation of current temperature climatology [13], its future warming does appear to be related to its simulation of past warming attributable to greenhouse gases and other anthropogenic forcings [6]. Such observational constraints indicate that the upper rates of warming simulated by some climate models are less likely than indicated by the CMIP5 ensemble of opportunity.
Figure 1. Comparison of model emulation (red) for (a) RCP4.5 and (b) RCP8.5 for the CanESM2 GHG-only runs (black: mean over historical period; blue: individual GCM runs for future period).
Figure 2. Global mean temperature change following the RCP4.5 scenario as observed from HadCRUT4 (to 2010: black line), and in each panel as simulated by one of the 6 climate models (red line). The 5 and 95 percentiles of the observationally constrained projections based on the multi-model average are shown as black lines (after 2010), and the 5 and 95 percentiles of the observationally constrained projections based on the individual model are shown as green lines. Results from the analysis based on 1951-2010 [20] are shown as dashed lines and results from the analysis based on 1861-2010 [19] are shown as solid lines. Temperatures are plotted as anomalies relative to the 1986-2005 mean.
Figure 3. As figure 2 but for the RCP8.5 scenario.
Figure 4. Global mean temperature change as observed from HadCRUT4 (black line to 2010), and with 5 and 95 percentiles of the observationally constrained projections based on multiple models (black lines from 2010). Grey shaded area shows 5-95 percentiles of the full range of the CMIP5 model simulations. Projections follow the RCP4.5 scenario (top) and the RCP8.5 scenario (bottom).
Automated nucleic acid chain tracing in real time
A method is presented for the automatic building of nucleotide chains into electron density which is fast enough to be used in interactive model-building software. Likely nucleotides lying in the vicinity of the current view are located and then grown into connected chains in a fraction of a second. When this development is combined with existing tools, assisted manual model building is as simple as or simpler than for proteins.
Background
The structural analysis of RNA and DNA molecules and complexes has become important in recent years, with the PDB now reporting nearly 3000 deposited structures containing nucleotides (Berman et al., 2007). The interpretation of nucleotide electron-density maps presents different challenges to the interpretation of protein maps: the resolution of the data is frequently lower (Keating & Pyle, 2010), the monomers are larger and therefore more flexible, and in some cases the structures are very large (see, for example, Wimberly et al., 2000). While graphical software such as Coot (Emsley et al., 2010) can be used for building nucleotides, the tools available are less mature than for proteins. A new approach is described which allows interactive backbone tracing of nucleotides in Coot, which may be used in combination with existing tools such as RCrane (Keating & Pyle, 2012) to produce a complete and accurate model.
There are a number of existing packages for automated or semi-automated nucleotide building. The RESOLVE software, now incorporated in phenix.autobuild, will build nucleotide chains in a fully automated manner when the data are of sufficient quality (Terwilliger et al., 2008). The ARP/wARP package will also build nucleotide chains, but not assign the base types (Hattne & Lamzin, 2008). The LAFIRE package includes software for extending and rebuilding existing nucleotide chains (Yamashita et al., 2013). The RCrane software performs semi-automated building, in which the software aids the user in the location of the ribose and phosphate groups and then builds a chain through the resulting groups, and can be used interactively from within the Coot graphical model-building software.
The methods described here are complementary to the RCrane development as they provide a method for the instantaneous building of segments of nucleotide chain in the region of an electron-density map currently being viewed in the Coot program. These segments may then be corrected and rebuilt automatically using the RCrane tools. The primary design consideration for the method was speed, enabling the user to build tens or even hundreds of nucleotides in a few minutes. For interactive use the software must therefore be able to find a nucleotide chain in the general vicinity of the current view position and trace it in both directions as far as the density supports in a fraction of a second. This is achieved through a carefully designed search function which can rapidly score electron density according to the likely presence of a nucleotide feature at that position.
Methods
The nucleotide chain-tracing algorithm has two fundamental stages: (i) finding nucleotides as a starting point for building and (ii) growing extended nucleotide chains. Finding nucleotides is performed using an optimized search algorithm and a search 'target' describing the electron-density feature to be recognized; these will be described in §2.1. Growing the nucleotides into chains involves converting the search targets into single-nucleotide or binucleotide fragments and then extending these fragments by adding additional nucleotides using a database of chain conformations; these steps are described in §2.2.
2.1. Finding nucleotides as a starting point for building
2.1.1. Fragment-search algorithm. In order to achieve fully interactive performance, careful consideration must be given to the search algorithm. The location of three-dimensional model fragments usually requires a six-dimensional search over three positional and three orientation parameters. While it is possible to factor out the orientation search by searching for spherically averaged density (Vagin & Isupov, 2001), this involves discarding information about the shape of the fragment. The alternative is to search over all three orientation parameters and then use a highly optimized positional search.
One approach to optimizing the positional search is to make use of fast Fourier transforms (FFTs), as employed in molecular-replacement calculations (Navaza & Vernoslova, 1995) and related methods (Cowtan, 2001). However, a three-dimensional FFT still requires of the order of 3 log(n) computations per grid cell, where n is the number of grid cells along one edge of the unit cell. FFT-based methods also tend to be calculated over the entire unit cell or at least the asymmetric unit (Read & Schierbeek, 1988), although it is possible to relax this constraint. When working interactively, the user often wants to build the electron-density feature that they are currently viewing (and continuations of this feature in the case of long polymers); this provides an additional optimization which can be exploited by non-FFT methods. Optimal performance is then achieved by minimizing the average number of computational operations required per grid cell to identify the presence of a model fragment in a given orientation at that position. Kleywegt & Jones (1997) developed a method for locating the presence of molecular fragments using a scoring metric based on the electron density at the atomic centres of the oriented fragment after translation to the current grid position. However, this approach has the limitation that a large 'blob' of high density which is larger than the fragment will score highly even if it does not resemble the fragment in shape. In the FFT-based method of Cowtan (2001) it was found that better fragment discrimination could be obtained by using a scoring function which requires both high density where there are expected to be atoms and low density where there are expected to be none.
The latter approach may be adapted for use in a fast non-FFT-based approach by defining a set of probe points describing the 'fingerprint' of the search fragment in terms of a set of positions where the electron density must be high if the oriented fragment is present and a second set of positions where the electron density must be low if the fragment is present: these will be referred to as high and low probe points, respectively. A scoring function is then used to evaluate the electron densities at the high and low probe points and return a single value to indicate the presence or absence of the fragment.
Let the positions of the high probe points relative to the centre of the oriented fragment be $\mathbf{x}_h$, the positions of the low probe points relative to the centre of the fragment be $\mathbf{x}_l$ and $n$ be the number of probe points of each type. (For simplicity of implementation the numbers of high and low probe points are constrained to be equal.) A simple score may be obtained from the difference between the means of the densities at the high and low probe points,

$$s_{\text{mean}} = \frac{1}{n}\sum_{i=1}^{n}\rho(\mathbf{x}_{h,i}) - \frac{1}{n}\sum_{i=1}^{n}\rho(\mathbf{x}_{l,i}).$$

However, this scoring function does not meet the criteria for very fast evaluation since it must be evaluated for every probe point. To reduce the computation to an average of a few operations per grid cell a different form is adopted,

$$s_{\text{minmax}} = \min_{i}\rho(\mathbf{x}_{h,i}) - \max_{i}\rho(\mathbf{x}_{l,i}).$$

The advantage of this form is that in most cases only part of the expression needs to be evaluated. Since the minimum can only decrease and the maximum can only increase with the inclusion of further probe points, the score can only become worse as it is evaluated. Evaluation therefore takes place one pair of probe points at a time, and as soon as the partial score drops below a given threshold, evaluation is terminated and the existence of the oriented fragment at that position is rejected. The score threshold may start at zero (implying that for a fragment to be accepted the lowest high probe must be greater than the highest low probe), although other values are possible.
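As an illustration of this early-termination logic, a minimal Python sketch follows (not the Coot implementation; the density accessor and probe-point lists are assumed to be given already rotated and translated to the candidate placement):

```python
import numpy as np

def s_minmax_early_stop(density, high_pts, low_pts, threshold=0.0):
    """Fingerprint score with early termination.

    density(p) returns the map value at grid point p; high_pts and low_pts
    are equal-length sequences of probe positions. The running score
    min(high) - max(low) can only decrease as more probe pairs are added,
    so the placement is rejected as soon as it falls below `threshold`.
    """
    lo = np.inf    # running minimum over high probes
    hi = -np.inf   # running maximum over low probes
    for ph, pl in zip(high_pts, low_pts):
        lo = min(lo, density(ph))
        hi = max(hi, density(pl))
        if lo - hi < threshold:    # score is monotonically non-increasing
            return None            # reject this position/orientation early
    return lo - hi                 # full s_minmax for surviving candidates
```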
Three further optimizations are employed.
(i) The positional offsets are rounded to the nearest grid vector, so that the electron density is evaluated at a grid point rather than by interpolation between grid points.
(ii) A target number of matches is chosen and the score threshold is periodically adjusted upwards to retain no more than the target number of matches and allow the earlier termination of further scoring steps.
(iii) Fragment translations where the density at the first high probe point is less than one standard deviation above the mean are eliminated immediately.
Rotation space is explored with an 18° search step, giving a total of 2792 orientations. The number of translations within 6 Å of the view centre depends on the grid resolution, but is typically around a thousand. Pruning based on the first high probe point typically reduces this number by a factor of six. Initially, a little under half of the remaining points are pruned as each new pair of probe points is included in s_minmax; however, as the calculation progresses and the score threshold is raised, more translations are subject to early pruning.
The approximations mean that the calculation is crude but very efficient. In practice, this scoring method is used to provide a list of candidate fragment locations which may then be re-evaluated using the more sensitive s_mean score with interpolated electron-density values. The calculation is repeated for each possible orientation of the fragment by a search over the three rotational parameters.
2.1.2. Search targets. Each nucleic acid monomer provides three rigid groups which may be used as search fragments: the pentose sugar, the phosphate group and the base. The sugar and phosphate groups are used for identifying the main chain: this is in contrast to the method of Hattne & Lamzin (2008), in which the base is located first. For each of these search fragments a set of high and low probe points must be identified.
The probe points were determined using the electron density of nucleotides from the structure with PDB code 1hr2 (Juneau et al., 2001): a total of 316 nucleotides, with the conserved atoms of the search fragment superimposed. Less common conformations of the atoms of the search target were removed by successively removing fragments in which the O2′ atom (for sugars) or C5′ and C3′ atoms (for phosphates) were more than 1 Å from the ensemble mean for that fragment, leaving 280 sugar fragments and 226 phosphate fragments. An electron-density map for the structure was calculated from the observed data at 2.2 Å resolution. The maximum and minimum across all of the fragments in this map for each grid point around the oriented fragment were determined and used to calculate a minimum and maximum map.
For every grid point in the minimum map, every instance of the fragment has an electron-density value greater than or equal to the value at that position in the minimum map. As a result, a high value in the minimum map is a strong indication of a conserved electron-density feature which is consistent with the presence of the fragment. Similarly, a low value in the maximum map corresponds to a position where no instance of the fragment has high electron density, which indicates a conserved electron-density hole consistent with the presence of the fragment. The minimum and maximum maps may therefore be used to locate high and low probe points, respectively.
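In code, the minimum and maximum maps are pointwise reductions over the stack of superposed fragment densities; a minimal numpy sketch, assuming the densities have already been resampled onto a common local grid, is:

```python
import numpy as np

def min_max_maps(fragment_densities):
    """Pointwise minimum and maximum over superposed fragment instances.

    fragment_densities: array of shape (n_fragments, nx, ny, nz), the local
    density block around each superposed fragment. High probe points go
    where the minimum map is high (density always present); low probe
    points go where the maximum map is low (density never present)."""
    stack = np.asarray(fragment_densities)
    return stack.min(axis=0), stack.max(axis=0)
```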
High probe points are placed on atoms of the search fragment (O3′, C1′, C2′, C3′, C4′, O4′ and C5′ for the sugar, and O3′, P, OP1, OP2 and O5′ for the phosphate), omitting atoms which are not strongly conserved (e.g. O2′). Additional probe points are allocated to represent density not consistently associated with a single atom; thus, for example, in the case of sugar fragments the base is represented by a probe point on the N1 or N9 atom and a second point at the far side of the associated ring, but not on an atomic centre.
Low probe points are dispersed around the perimeter of the fragment. The contour level of the maximum map was set to the lowest value in the map within a 6 Å radius of the fragment centre, and a low probe was placed at this point. The contour level was then gradually increased and probe points were placed in new density features as they appeared, subject to the constraint that a new probe point should not be too close to an existing probe point so as to provide independent information.
The minimum and maximum maps, and the corresponding high and low probe points, are shown for each of the two search fragments in Fig. 1. The coordinates of the probe points are provided as Supporting Information.
2.2. Growing an extended nucleotide chain
2.2.1. Converting the initial fragments to nucleotides. The initial candidate fragments are then grown into extended chain fragments. Candidate fragments which fail to grow to at least three nucleotides are rejected. This process comprises two steps. Firstly, the initial fragments must be extended to complete single nucleotides or binucleotides. Next, they are iteratively extended using an algorithm which successively adds additional nucleotides to either the 3′ or the 5′ end of the chain.
Figure 1. High and low probe points (spheres and crosses, respectively) for (a) the sugar target and (b) the phosphate target. The average electron density over the reference set of nucleotides is shown.
Both of these processes involve a database of nucleotide chain linkages, which are determined from one (or potentially more than one) known high-quality nucleotide structure. The database consists of fragments of connected nucleotides, in which each nucleotide is represented by the main-chain atoms.
The key functionality of the database is to provide fragments matching one or more sugar rings (or phosphate groups). If a single sugar ring is provided, then a list of short chain fragments are returned with every sugar ring from the database superposed on the given sugar. If multiple sugar rings are provided then every possible chain fragment which is capable of linking those sugars is returned.
The default database is also constructed using the structure with PDB code 1hr2 (Juneau et al., 2001) and thus contains 316 nucleotides. Limiting the size of the database improves performance and has also been found to reduce the chance of the incorporation of incorrect rare conformations when data quality is poor. When the data are good a larger database can sometimes improve the results if interactive performance is not required.
Since the database relies upon oriented sugar rings for further building, the sugar rings located in the initial search may be used directly as a starting point for chain tracing. Phosphate groups, however, require an additional step. Every phosphate group from the database is superposed on the group identified from the electron density. The two neighbouring sugars are then scored against the density using s_mean, and the best-scoring binucleotide is stored as a starting point for chain tracing. The result of this step is a set of mononucleotides (from the location of sugar rings) and binucleotides (from the location of phosphate groups); however, in each case the terminal phosphate group is in an arbitrary conformation, so that the fitted parts of the fragments actually correspond to nucleosides and 'suites', respectively (Murray et al., 2003).
The use of known structure fragments to describe the range of possible main-chain conformations is in contrast to the approach of Keating & Pyle (2010), who use the database of distinct sugar-sugar backbone conformers (or 'suites') determined by Murray et al. (2003). The latter approach is probably more efficient because redundant conformers are eliminated, and will be explored in future; however, the known structure database can also be used to bridge multi-nucleotide gaps using a method analogous to loop fitting in proteins (Cowtan, 2012).
2.2.2. Growing chains. Chains are then grown from the initial fragments by adding further nucleotides at either end. Each sugar ring from the database is superposed on the 3′ or 5′ sugar of a fragment, and the next (or previous) nucleotide from the database is extracted and scored against the map using the s_mean score for both the new sugar and the intervening phosphate group. Building continues at each end of the chain until no sugar ring can be found whose score exceeds a threshold value. The threshold is determined by scoring 100 000 random translations and orientations of the search target in the current electron-density map to establish a probability distribution of score values. The threshold score is set such that the probability of a score exceeding this value is 0.1%.
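A minimal sketch of this threshold calibration follows, with a hypothetical score_fn standing in for the fingerprint score of §2.1 (note that independent uniform Euler angles are not exactly uniform over rotation space, which is adequate for a sketch of the null distribution):

```python
import numpy as np

def growth_threshold(score_fn, density_map, n=100_000, tail=1e-3, seed=0):
    """Score the search target at random placements in the current map to
    build a null score distribution, then return the value exceeded with
    probability `tail` (0.1% in the text)."""
    rng = np.random.default_rng(seed)
    scores = np.empty(n)
    for i in range(n):
        frac_pos = rng.uniform(0.0, 1.0, size=3)       # fractional coordinates
        euler = rng.uniform(0.0, 2.0 * np.pi, size=3)  # random orientation
        scores[i] = score_fn(density_map, frac_pos, euler)
    return np.quantile(scores, 1.0 - tail)
```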
The algorithm as described can often produce multiple overlapped chain traces starting from different initial fragments. Merging of these fragments and resolving branches is achieved using the method described for proteins in Cowtan (2006).
The sugar puckers are often ambiguous in the electron density, and the method as implemented does not attempt to resolve sugar puckers except to the extent that the sugar pucker is implicit in the sugar-phosphate-sugar backbone conformation. The effectiveness of this approach has not been investigated and therefore post-refinement using RCrane (Keating & Pyle, 2012) is advisable.
Search targets
The skill of the search target functions in locating sugar and phosphate groups was evaluated using a deposited structure, PDB entry 3cw5 (Barraud et al., 2008), consisting of 77 nucleotides. Experimental data were available to 3.1 Å resolution. Synthetic phases were created by starting from the refined structure and copying figures of merit (FOMs) from an experimentally phased structure, preserving the resolution and magnitude dependence of the FOMs. The mean FOM for the data to 3.1 Å resolution was 0.58. A random phase error was then selected from a distribution consistent with the FOM for that reflection and added to the phase. The performance of each search target was evaluated using the electron-density map calculated using the resulting phases and FOMs.
Figure 2. Comparison of the value of the search target score as a function of translational coordinate error for the correctly oriented sugar and phosphate targets, using either the s_minmax or s_mean scoring functions. The shading represents a density histogram of counts, with light shading for 1-4 or dark shading for 5+ counts per bin. Dashed lines show the regression of score on distance for distances less than 2.5 Å.
For each sugar or phosphate in the test structure, the search target was evaluated using the correct orientation and translations spanning a sphere of radius 4 Å around the true position. The values of s_minmax and s_mean were plotted against distance from the true position for each target, and are shown in Fig. 2.

The sugar search target shows significant discrimination using either the s_minmax or s_mean functions. The mean function is more effective in locating sugars; however, as discussed, it is less amenable to optimization.

The phosphate search target is less effective, with s_minmax showing very little signal. The mean function provides a weak signal, but also starts producing false positives beyond 2.5 Å from the true position as the target function begins to pick up tetrahedral C atoms in the sugar ring.

The poor performance of the phosphate target is in contrast to that of Keating & Pyle (2012), in which the phosphates form a starting point for building. The difference arises from the form of the search functions. The phosphate is distinguishable by the higher density of the P atom; however, s_minmax only reflects the lowest density at any of the high probe points and thus is blind to the phosphorus density. Similarly, in s_mean the phosphorus density is diluted by averaging over the high probe points. As a result, the phosphate target is primarily of value when used in conjunction with the sugar target.
Model building
The method described here is optimized for interactive use, and so does not incorporate all of the model-completion steps or the time-consuming refinement step employed in automated model-building calculations such as ARP/wARP (Langer et al., 2008) or phenix.autobuild (Terwilliger et al., 2008). However, it is interesting to see how much of the model can be built in a single step with no refinement or improvement of the electron-density map. 35 nucleotide structures from the PDB for which experimental observations were available were used as test cases. For each structure, a simulated experimental data set was created using the same method as in §3.1. The resulting maps were used as a basis for finding search fragments and growing them into chains. The search algorithm was modified to search the whole asymmetric unit, rather than just a region around the current view centre.
The resulting partial models were then compared against the deposited structures and scored on the basis of the number of C1′ atoms in the deposited structure for which a C1′ atom was present in the partial model within 1.5 Å of the refined position. The results are shown in Table 1.
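This completeness metric is straightforward to compute with a nearest-neighbour query; a sketch (ignoring crystallographic symmetry, which a production implementation would need to handle) might be:

```python
import numpy as np
from scipy.spatial import cKDTree

def fraction_c1_built(deposited_c1, built_c1, cutoff=1.5):
    """Fraction of deposited C1' atoms with a built C1' within `cutoff` A.

    deposited_c1, built_c1: (n, 3) arrays of orthogonal coordinates."""
    if len(built_c1) == 0:
        return 0.0
    dists, _ = cKDTree(np.asarray(built_c1)).query(np.asarray(deposited_c1))
    return float(np.mean(dists <= cutoff))
```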
Figure 3. Comparison of autobuilt nucleotides with the final structure for 3cw5 at 3.1 Å resolution. The backbone trace of the full structure is shown using a worm representation, and backbone atoms for the autobuilt model are shown as bonds.
less common conformations. The deposited structure and the autobuilt model for the structure 3cw5 are shown in Fig. 3. In this case about half of the structure has been built. The test structures were also used to judge the utility of the phosphate target. Omitting the phosphate target from either the search or the growing steps decreases the total proportion of the structures built, although in the case of the growing step the difference is marginal.
Discussion
One notable result of this work is that fitting nucleotides appears to be a simpler problem than fitting proteins. This was initially surprising, given that nucleotides have greater torsional variability in the main chain (Murray et al., 2003) and typically higher thermal displacement parameters (Keating & Pyle, 2010), leading in turn to greater ambiguity in the features of the electron-density map. However, the rigid groups within each nucleotide are larger, and the most common relationships between neighbouring nucleotides show less variation than might be expected from the torsional variability of individual bonds. Attempts to apply the same kind of techniques to proteins have not yielded comparable benefits except in the case of large secondary-structure features (Langer et al., 2006).
When applied to automated model building and refinement, the methods developed here are sufficiently fast that refinement becomes the rate-limiting step. The implementation of high-performance computing methods for crystallography (Sauter et al., 2013) might in future increase the speed of the refinement step, which would be of immediate value to comprehensive ligand searches and some molecular-replacement problems; however, with the developments described here the building of novel nucleotide structures may also benefit.
There is scope for further applications of fingerprint techniques. The determination of the high and low probe points may be automated either using ad hoc rules to emulate the manual approach adopted here, or through a more rigorous information theoretical approach. These are being explored in the context of carbohydrate model building, but should also be applicable to the location of rigid or semi-rigid ligands.
Software availability
The software is available as a part of the Coot model-building package from version 0.7 using the scripting command find_nucleic_acids_local(6) or by a right mouse click on the top tool bar to add the 'NA build' button. From build 5199 it is also available through the 'Other Modelling Tools' menu. A slower implementation which includes model refinement is available in the CCP4 software suite from version 6.3.0 under the name Nautilus.
Berberis Fruit Extract and Biochemical Parameters in Patients With Type II Diabetes.
BACKGROUND
Diabetes mellitus is a common medical problem, and there is a growing body of literature on plants used for its treatment. Plant materials have attracted considerable interest from scientists, and in the past few years attempts have been made to use natural plant products for the treatment of patients with diabetes.
OBJECTIVES
The aim of this study was to investigate the effect of Berberis fruit extract on glycemic control in patients with type II diabetes.
MATERIALS AND METHODS
This study was performed between July 2010 and April 2013. Thirty patients with type II diabetes admitted to Ayatollah Rohhani hospital were recruited. Patients' sera were collected for the assessment of glucose and HbA1c values. Biochemical analyses were performed before and after treatment with Berberis fruit extract. Biochemical parameters were measured by a spectrophotometric method (Jenway uv/vis, 6505 model, Dunmow, UK). Glucose level was measured by a glucose oxidase method kit (Pars Azmoon, Tehran, Iran). Serum total cholesterol and triglycerides were measured using standard biochemical kits (Pars Azmoon, Tehran, Iran). Blood glycated hemoglobin level was measured using an ELISA kit (Bioassay Technology Laboratory). Experiments were performed in triplicate in at least three separate experiments.
RESULTS
Our findings demonstrated that patients with type 2 diabetes who received barberry fruit had a significant reduction in serum glucose to 136.15 ± 32.8 mg/dL and a decrease in HbA1c to 7.07 ± 1.21% during the 8 weeks of the study.
CONCLUSIONS
This investigation revealed that Berberis fruit extract has beneficial metabolic effects in patients with type II diabetes. Barberry may improve glucose catabolism via the glycolysis pathway, stimulate insulin secretion or improve insulin function, and decrease intestinal glucose uptake. Our results indicated that Berberis fruit regulates glucose metabolism in patients with type 2 diabetes.
Background
Major types of diabetes mellitus include type I and type II diabetes. Type II, insulin-resistant diabetes mellitus accounts for 90-95% of all diabetes. Previous studies demonstrated that hyperglycemia is the major component of metabolic dysfunction in type 2 diabetes (1, 2). In this respect, a number of recent studies have suggested that Berberine can reduce body weight and improve glucose tolerance in mouse and rat models (3). There are some reports in the literature showing that Berberine may increase glucose-stimulated insulin secretion (4). Furthermore, preliminary studies revealed that Berberis aristata lowered blood glucose in alloxan-induced diabetic rats, reduced oxidative stress and modulated enzymes responsible for glucose metabolism (5). As reported by investigators, the plant fruits have also been used as a food additive (6). As previously demonstrated, Berberine is a plant alkaloid used in traditional Chinese medicine and has been reported to have anti-hyperglycemic properties in patients with type II diabetes (7). It was shown that Berberine, an alkaloid isolated from the bark and root of Berberis vulgaris, exerted an anxiolytic effect in mice (8). Several experimental studies have reported that Berberis vulgaris fruit (barberry) is known for its anti-arrhythmic and sedative effects (9). Recent studies demonstrated that Berberine stimulates glucose metabolism via stimulation of glycolysis, which is related to inhibition of glucose oxidation in mitochondria (10). On the other hand, some recent evidence indicates that Berberine inhibits citric acid cycle pathways (11-13). Preliminary studies conducted by many investigators revealed that Berberine improves insulin function (14), and several experimental studies indicated that Berberine reduces insulin resistance (15). In recent years, laboratory studies suggested that Berberine may have at least two functions in lowering blood sugar: inhibiting absorption of sugars from the intestine and enhancing production of insulin (16). Recent studies have suggested that Berberine regulates glucose metabolism; its hypoglycemic effect was similar to that of glucose-lowering medications, and significant decreases in hemoglobin A1c were observed (17). In addition, HbA1c levels are increased in patients with diabetes mellitus as a consequence of increased blood glucose (18). These findings have prompted recent clinical interest in studying possible changes in HbA1c levels in patients with diabetes. Moreover, glycosylated hemoglobin concentration has been suggested as an index of control in the management of diabetes (19-22).
Objectives
The goal of this study was to investigate the effect of Berberis fruit on blood glucose level in patients with type II diabetes.
Preparation of Extracts
Berberis fruit (barberry) was collected from Larijan, located near Amol, Iran, in 2010. The Berberis fruit extract was characterized by optical spectroscopy: spectra were recorded using a spectrophotometer (Jenway uv/vis, 6505 model, Dunmow, UK) with one-milliliter cuvettes. Berberis fruit extraction was performed based on a previously described method (5, 6, 8). The extracts were filtered and the dry matter was kept desiccated at 5°C until use, as described before (16, 17). The same batch of extract was used in all the experiments. Berberis fruit was dried at room temperature and then powdered. To prepare the powder concentrate, 1 mg of dry Berberis fruit was placed in each capsule; in total, 2000 capsules were prepared.
Patients
The subjects of this study were patients admitted to the medical center of Ayatollah Rohhani Hospital of Babol University of Medical Sciences from July 2010 to April 2013. This was a 2-month randomized, double-blind, placebo-controlled study; the placebo was a standard capsule without Berberis fruit. Written informed consent was obtained from all participants, and no blood samples were drawn for the study unless an informed consent form was signed. The registration number was IRCT 3808, April 25, 2010 (clinical trials). Initial screening included a medical history, physical examination, and serum glucose and lipid concentrations. During the 8-week run-in period, 30 patients with diagnosed type 2 diabetes were recruited. Seventeen patients were excluded because they did not complete their follow-up visits. The remaining 13 patients were randomly assigned to receive double-blind Berberis fruit or placebo.
Inclusion and Exclusion Criteria
Inclusion criteria were as follows: 1) age 30 to 65 years; 2) diagnosed type 2 diabetes with plasma glucose greater than 7 mmol/L; 3) dyslipidemia with triglycerides greater than 170 mg/dL and total cholesterol greater than 220 mg/dL. A standardized interview was conducted by trained personnel regarding lifestyle habits, including smoking, physical activity and diet. The exclusion criteria were as follows: 1) a history of liver dysfunction, including serum alanine aminotransferase greater than 120 IU/L or aspartate aminotransferase greater than 80 IU/L; 2) kidney dysfunction (serum creatinine greater than 115 µmol/L); 3) heart dysfunction; 4) diabetic ketoacidosis or hyperosmolar hyperglycemic non-ketotic coma; 5) psychiatric diseases; 6) chronic disorders requiring medication; 7) current pregnancy; and 8) receiving drugs affecting carbohydrate or lipid metabolism. Finally, the health status of participants was confirmed by routine biochemical tests.
Treatment
Patients were randomized to receive Berberis fruit (1 mg, twice daily) or placebo prepared in indistinguishable capsules. The Berberis fruit and placebo capsules were provided by the biochemistry laboratory.
Clinical and Biochemical Measurements
Patients were visited after an overnight fast of 10-14 hours, and their past medical history was assessed. Patients' sera were collected at the end of the experimental period to analyze biochemical parameters. Biochemical measurements of serum glucose, lipids, and glycated hemoglobin (HbA1c) were performed. Glucose was measured by a glucose oxidase method kit (Pars Azmoon, Tehran, Iran). Serum total cholesterol and triglycerides were measured using standard biochemical kits (Pars Azmoon, Tehran, Iran). Blood glycated hemoglobin levels were measured using an ELISA kit (Bioassay Technology Laboratory).
Statistical Analysis
Student's t-test was used to analyze data before and after the intervention. All results are expressed as mean ± SD. Statistical analysis was performed using SPSS, version 18.0, and P values < 0.05 were considered statistically significant.
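For illustration, a before/after comparison of this kind can be run as below; the glucose values are hypothetical stand-ins (the patient-level data are not published), and treating the test as paired is our assumption, since the paper does not specify which form of the t-test was used:

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient glucose values (mg/dL), 13 patients, before/after.
before = np.array([161, 170, 148, 155, 190, 152, 168, 159, 175, 146, 163, 158, 172])
after = np.array([136, 150, 120, 131, 166, 129, 140, 133, 151, 118, 139, 130, 147])

t_stat, p_value = stats.ttest_rel(before, after)  # paired Student's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")     # significant if p < 0.05
```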
Results
Our results showed a significant improvement in glucose level after Berberis fruit consumption. This effect of Berberis fruit on glucose level may be attributed to achieving satisfactory regulation of glycemia or to changes in glucose metabolism. Based on our findings, Berberis fruit reduced serum glucose level from 161.31 ± 28.95 mg/dL to 136.15 ± 32.8 mg/dL, as shown in Figure 1. In addition, after 8 weeks of treatment, Berberis fruit reduced HbA1c from 8.10 ± 1.11% to 7.07 ± 1.21% in patients with diabetes, as shown in Figure 2.
Discussion
Despite recent advances in plant medicine, an enormous amount of work is still needed in this field. In this study, Berberis fruit had a glucose-lowering effect at two months in patients with type II diabetes and also decreased HbA1c. These results indicate that Berberis fruit is a hypoglycemic agent with beneficial effects on glucose metabolism. Given the benefits of Berberine in lowering blood glucose, we speculate that Berberine may be used for patients with type 2 diabetes. Our findings are consistent with those reported elsewhere (3, 4). The precise mechanism underlying the glucose-lowering property of Berberis fruit is not fully understood. Berberis fruit may facilitate glucose catabolism by inducing the glycolysis pathway in the cell, or it may inhibit the alpha-glucosidase enzyme (17). We hypothesize that Berberis fruit regulates glucose metabolism by improving glucose tolerance: it may inhibit the activity of carbohydrate-digesting enzymes and slow carbohydrate digestion, thereby decreasing glucose transport across the intestinal epithelium and reducing glucose absorption and uptake. This finding is in agreement with the results obtained by others (4, 5). Further systematic investigations are needed to assess the chemical constituents, pharmacological actions, and toxicity of Berberis fruit to prove its medicinal value. Additional studies are needed to characterize the bioactive compounds responsible for the observed function. The value of Berberis fruit in human healthcare still needs to be determined, and detailed information on its usage, duration and dosage must be investigated. Several questions regarding the effect of Berberis fruit remain to be answered. As part of our ongoing research project on medicinal plants, we performed the present study to lower blood glucose levels.
In conclusion, the present study indicated that Berberis fruit helps to lower HbA1c, which is critical for controlling blood glucose levels. In addition, Berberis fruit has an effect on blood glucose regulation, which might be mediated through activation of carbohydrate metabolism enzymes. Nevertheless, the mechanisms behind these encouraging results are not completely clear. To conclude, administration of Berberis fruit could attenuate the hyperglycemic state of patients with diabetes. This study had some limitations: first, we did not have data on all biochemical factors, and second, our sample was relatively small.