| id | source | version | text | added | created | metadata |
|---|---|---|---|---|---|---|
| stringlengths 3 to 9 | stringclasses 1 value | stringclasses 1 value | stringlengths 1.54k to 298k | stringdate 1993-11-25 05:05:38 to 2024-09-20 15:30:25 | stringdate 1-01-01 00:00:00 to 2024-07-31 00:00:00 | dict |
id: 145591017 | source: pes2o/s2orc | version: v3-fos-license
Blogging to Enhance Writing Skills: A Survey of Students’ Perception and Attitude
Several studies concur that the use of a blog can positively enhance learning in the second language classroom and that blogs can improve writing skills. Research has confirmed positive uses of the blog, which include writing for an audience and peer review, the development of students' analytical skills, and the development of a sense of community through a collaborative learning environment via weblog. This paper presents the results of a research project undertaken to investigate a group of 33 students at Universiti Kebangsaan Malaysia. Data were collected via an online questionnaire survey on their perceptions of and perspectives on the implementation of blogging activities for teaching writing skills. Results suggested that the participants had positive perceptions of and attitudes towards using blogs to improve writing skills; they perceived blogging as an effective tool for teaching writing in English that helped them improve their writing and kept them motivated.
Introduction
Teaching English as a second or foreign language has been a constant challenge due to interference from the first language. Efforts to motivate learners must first examine the teaching methods, among other factors, as the conventional way of teaching English has been found to be unmotivating. In particular, when learners are restricted to classroom learning (Allum, 2002), which exposes them to a limited scope of knowledge (Nadzrah & Kemboja, 2009), the situation does not reflect a positive trend in the teaching and learning of English.
In order to create an efficient learning ambience, language teachers need to focus on the core principles of a learning community, which include integration of the curriculum, active learning, student engagement, and student responsibility (Darabi, 2006). Other scholars (Seitzinger, 2006) have suggested that language learning should be constructive, as described by these features: 1) active and manipulative, by engaging students in interactions and explorations with learning materials and providing opportunities for them to observe the results of their manipulations; 2) constructive and reflective, by enabling students to integrate new ideas with prior knowledge to make meaning and enable learning through reflection; 3) intentional, by providing opportunities for students to articulate their learning goals and monitor their progress in achieving them; 4) authentic (or simulated), by facilitating better understanding and transfer of learning to new situations; and 5) cooperative, collaborative, and conversational, by providing students with opportunities to interact with each other to clarify and share ideas, to seek assistance, to negotiate problems and to discuss solutions.
Rapid development in information and communication technology has precipitated various changes in methods of teaching and learning. For instance, the use of computers in the classroom has increased tremendously, and the computer is quickly becoming one of the learning tools in language classes (Nadzrah, 2007). More recently, the blog, a form of internet publishing, has become an established communication tool used by millions of users for a variety of purposes. The existence of blogs has opened up a space for writers to share articles or materials on a weblog that is open to any audience with internet access. This has given language learners the opportunity to express and share their ideas with the unlimited internet community on the World Wide Web. Such features make the blog very popular and common in this era of technological advancement.
With its ease of use, conversational, informal format and collaborative nature, it is not surprising that blogging may be another means of engaging students in writing. Blogging is closely related to writing as researchers realized its potential as a tool in improving the process of writing. Campbell (2003) and Hiler (2002) suggested that blogs can be used by teachers and students as a forum for students to express opinions, co-produce ideas and share interesting information in order to communicate in an environment of English as a second language (ESL).
In language learning, blogging has been used experimentally as a tool to develop writing skills (Pinkman, 2005). The experience of writing on blogs may provide opportunities to help students improve their knowledge of writing. Nadzrah and Kemboja (2009) found that blogs let students compose writing with specific purposes, which can encourage them to enhance their writing in the language. Most blog writers use their blog as a platform for self-expression and empowerment, and this helps them to become more thoughtful and critical in their writing (Armstrong & Retterer, 2008).
Blogging is also a form of writing exercise. The cycle of blogging activities, such as making blog posts, viewing other bloggers' posts, and commenting and reflecting on them, is beneficial in polishing writing skills. In situations where they cannot relate to certain words, students have the choice of referring to online dictionaries, and, using the Internet, they are able to keep the grammar in their writing intact. This creates an environment of active learning (Darabi, 2006) among students that can have positive impacts on writing skills as well as increase learner autonomy.
Thus, this study was conducted to investigate how blogging might impact writing and motivation as a whole among a group of students in the Faculty of Education, Universiti Kebangsaan Malaysia. In order to facilitate the investigation of the students' perceptions and attitudes towards the use of blogs as a writing tool, the following research questions were formulated: 1) What are the participants' perceptions of the use of blogs to teach English writing skills?; and 2) What are the participants' attitudes towards blogging as a writing tool in an English classroom?
Literature Review
Past studies have recorded mixed results of blogging on the English writing skills of ESL learners, with most studies leading to a positive disposition as they claimed blogging has been found to improve writing skills (Downess, 2004; Hall & Davidson, 2007; Nadzrah, 2007; Pinkman, 2005). Students were found to believe that using the blog in class as a writing tool was a good idea, as they claimed that they were able to write better and more effectively when using blogs and that blogging allowed them to be creative despite having limited proficiency in the language (Nadzrah & Kemboja, 2009). Blackmore-Squires (2010) found that a blog can empower students to become analytical and critical writers, which in turn improves a student's self-confidence, while claiming that online writing such as writing on blogs has many advantages to offer, such as 1) encouraging feedback and representing both a writing and a reading activity; 2) stimulating debate and critical analysis and encouraging articulation of ideas and opinions; 3) offering opportunities for collaborative learning; 4) providing an environment in which students can develop skills of persuasion and argumentation; 5) creating a more student-centred learning environment; and 6) offering informal language reading.
McDowell (2004) supported the idea that educational blogging can enhance learning opportunities, reporting positive feedback from students on the use of blogs as learning tools because they increased interactivity and promoted reflective activities among students. In a study carried out by Blackmore-Squires (2010) regarding the use of the blog as a tool to improve writing in the second language classroom, it was found that the use of blogs encouraged learning through collaboration, in terms of communication between peers and tutors, as well as through learner autonomy (Blackstone et al., 2007). By using the blog as a tool of communication, the learner actively constructs knowledge by translating ideas into words built upon the reactions and responses of others (Alvi, 1994). Furthermore, Campbell (2003) stated that a class blog run by the entire class is a collaborative effort to create a platform for students to express themselves through writing.
As far as affective factors are concerned, Blackstone et al. (2007) found that blogging activities boosted nearly all the student participants' motivation, an element which has long been recognized as vital for language learning (Dornyei, 2003) and writing. Since blogs are authentic, interesting and communicative, they can serve a variety of purposes in a foreign language learning classroom (Pinkman, 2005). This is the reason why blogs have the potential to supplement and enhance traditional teaching methods (McDowell, 2004).
However, scholars have also reported negative findings; Blackstone et al. (2007), for example, highlighted that students who lack confidence may experience fear at having others read their thoughts. As blogs are mostly not private, they are open for display to the public, and this makes students feel embarrassed for fear that others might see their mistakes. On the same note, Blackmore-Squires (2010) explained that students who suffer from computer phobia may find themselves frustrated with blogging activities, and this will eventually thwart their writing improvement. Thus, it was interesting to investigate how the participants in the study perceived the influence of blogging on their English writing skills.
Research Methodology
This study was conducted at Universiti Kebangsaan Malaysia (UKM) in May 2012 to investigate students' perceptions regarding the implementation of writing blogs for teaching writing. Thirty-three third-year students from the Bachelor of Teaching English as a Second Language programme, Faculty of Education, were selected as participants, comprising 7 male and 26 female students. The respondents were selected for having completed a course, Technology in Education, in their first year of study, in which they were required to create their own blog account and to write a reflection based on their experience. The participants were thus familiar with blogging activities. At the time of the study, they each had at least one registered blog and, for some, a personal blog. For this reason, an introduction to blogging was not a prerequisite to conducting the research.
Instrument and Data Analysis
The participants were requested to answer an online questionnaire posted on www.surveymonkey.com. The questionnaire was adopted and adapted from Fageeh (2011) on the use of blogs in developing writing skills and enhancing attitudes towards learning English among learners of English as a Foreign Language (EFL).
The 15 items in the questionnaire were constructed based on the research questions of this study. The items were grouped in such a way so as to address the three areas of students' background in terms of blog usage, perception on using blog in their writing and attitude towards learning writing skills using blog.
The instrument was separated into three sections: Section A, Section B and Section C. Section A was made up of 4 items to establish the participants' prior experience in blogging. Section B contained 5 items enquiring about the participants' perceptions of writing on blogs. The final section contained 5 items designed to survey their attitudes towards using blogs to learn writing skills.
The study applied a 4-level Likert scale to the items in Sections B and C: Strongly Disagree, Disagree, Agree and Strongly Agree. The 4-level scale was used to eliminate the neutral point in order to elicit a definite decision; by doing this, it provided a better measure of the intensity of participants' attitudes or opinions. The data collected were interpreted as percentages and means to describe the students' perceptions and attitudes towards using the blog as a writing tool to enhance their writing skills.
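As an illustration of this scoring procedure, the sketch below codes the four scale points as 1-4 and computes the per-category percentages and the mean for a single item. The responses are invented for illustration only and do not reproduce any figure from the questionnaire.

```python
# Minimal sketch with invented data: summarising 4-point Likert responses
# (1 = Strongly Disagree, 2 = Disagree, 3 = Agree, 4 = Strongly Agree)
# as a percentage per category and a mean score per item.
from collections import Counter

def summarise_item(responses):
    """responses: list of integer codes 1-4 for one questionnaire item."""
    counts = Counter(responses)
    n = len(responses)
    percentages = {code: 100 * counts.get(code, 0) / n for code in (1, 2, 3, 4)}
    mean = sum(responses) / n
    return percentages, mean

# Hypothetical responses from 33 participants to one item.
item_responses = [4] * 18 + [3] * 13 + [2] * 2
pct, mean = summarise_item(item_responses)
print({k: round(v, 1) for k, v in pct.items()})  # {1: 0.0, 2: 6.1, 3: 39.4, 4: 54.5}
print(round(mean, 2))                            # 3.48
```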
Findings and Discussion
The first part of the findings presented the background of the participants in terms of the usage of blog as a whole. The second section reported their perception on using blog in learning writing skills and finally the last section highlighted their attitude towards learning writing using blog.
Students' Background in Terms of Blogging Experience
As indicated in Table 1, the majority of the participants (57.6 percent) reported writing more than 12 posts in a month, an average of 3 posts per week. The second highest groups were those writing 5 to 8 posts and 9 to 12 posts in a month, with 21.2 percent each. None of the participants wrote fewer than 4 blog posts a month, which made them active writers as far as blogging is concerned.
In general, 51.5% (17) of the participants stated that they wrote roughly 100-200 words in each of their blog posts. This was followed by 33.3% (11) of the participants who wrote fewer than 100 words in each blog post, while the rest of the participants, 9.1% (3) and 6.1% (2), stated that they wrote around 200-300 words and more than 300 words in a blog post respectively.

Based on the second item, as shown in Table 2, the majority of the participants (48.5 percent) indicated that they spent less than an hour a day reading blogs. Meanwhile, 39.4 percent claimed that they spent between 1 and 2 hours reading blogs and 12.1 percent spent between 2 and 3 hours daily; none spent more than 4 hours. The data were useful in establishing that all the participants were indeed familiar with blogs, as spending time reading blogs written by other bloggers was part of their internet activities.

As bloggers, the participants controlled who the audience of their blogs was, and as evident in Table 3, most of them (45.5 percent) targeted other bloggers as readers of their blogs. 33.3 percent of the participants targeted students as their readers, while the remaining 12.1 percent aimed for educator readers and 9.1 percent aimed for unique visitors to read their blogs.

The results from the analysis of the first section suggested that the participants were quite familiar with blogging activities and had been blogging actively, with varying lengths for each post. It is interesting to note that the majority wrote between 100 and 200 words per post, a reasonable length. With the mentioned length and an average of 3 weekly posts, it is believed that the blog has managed to serve as a tool for becoming acquainted with writing skills in English. The participants were also found to be driven and motivated to write in order to meet the expectations of their target readers, which would greatly influence the rhetorical organization, content, and style of writing in their blogs. However, in terms of the time spent on reading others' blogs, it was clear that participants spent little time reading online, presumably because they had been busy writing their own posts.

Participants' Perception on Using Blog in Their Writing

Table 4 reports the participants' reactions to five statements related to their blogging experience in terms of 1) writing style and register, 2) writing structure, 3) word choice and spelling, 4) grammar, and 5) editing.
Editing was acknowledged to be the most important component, and it was the only one on which all the participants were in agreement: 66.7 percent strongly agreed and the remaining 33.3 percent agreed that writing on the blog had motivated them to edit their writing by carefully revising their arguments and the presentation of ideas. Editing was considered crucial by the participants because blog writing is permanent and they needed to keep their language intact, especially as students from a language background.
The second highest component, with 97 percent agreement (42.4 percent strongly agreed and 54.6 percent agreed), was grammar. The participants believed that writing on the blog made them more careful with their grammar; only 1 participant (3 percent) disagreed with the statement. The third highest component was related to word choice and spelling. Despite reaching the same overall level of agreement (97 percent) as the second highest component, only 12.1 percent strongly agreed while 84.9 percent agreed with the statement that writing on blogs had made them check their choice of words and spelling more carefully; 1 participant disagreed with the statement. The fourth highest component was the structure of writing, with 90.9 percent agreement: 9.1 percent strongly agreed and 81.8 percent agreed with the statement that writing on the blog had made them more careful with sentence and paragraph structure, while 3 participants (9.1 percent) disagreed, as they believed that writing on the blog had not made them more careful with the structure of their writing. The component with the lowest percentage of agreement was related to writing style and register: 12.1 percent of the participants (n=4) strongly agreed and 72.7 percent (n=24) only agreed that writing on the blog made them use an academic writing style and register, while the remaining 15.2 percent disagreed with the statement.
Attitude towards Learning Writing Using Blog
Based on the findings indicated in Table 5, it was obvious that most participants had positive attitudes towards using blogs in writing. 54.5 percent (n=18) of the participants strongly agreed with the statement that they enjoyed writing on blogs to develop their writing skills. The data indicated that teaching writing using blogs could attract students' interest in learning writing due to the interactive nature of the blog, as participants can include media such as pictures, music, video and applications on their blogs.
A total of 93.9 percent (n=31) of the participants agreed with the statement that their writing for argumentation and description could be improved by blogging on the Internet. The participants' positive views and responses in this matter indicated that the blog was a powerful tool for practicing writing, which motivated them to enhance their writing skills. This finding suggested that blogging should be incorporated into the teaching of writing in English language classrooms. For item number 8, more than 60 percent of the participants strongly agreed that blogging was an effective way of teaching writing in English. This suggested that the blog might be a more reliable way of teaching writing compared to the conventional way. Teaching writing using blogs provides a form of interaction between the teacher and the students in both formal and informal learning settings. Meanwhile, 39.4 percent (n=13) agreed and 57.6 percent (n=19) strongly agreed that blogging could improve the quality of academic writing. The nature of the blog, with its public audience, encourages the participants to pay more attention to the content and the use of language in their writing.
81.8 percent (n=27) of the participants believed that blogs could motivate them to engage in more active and interactive writing. As the participants attempted to make an impact on the reader through discussions of topics that were important to them, their posts made them feel emotionally connected and eager for feedback.
Conclusion
The results of this study suggested that the participants had positive perceptions of the use of blogs to polish their writing skills, and they also displayed a positive attitude towards using blogs to improve writing skills. Most of the participants agreed, to varying degrees, that writing on the blog had made them use an academic writing style and register correctly, apart from training them to choose correct sentence and paragraph structures, decide on their word choice and spelling cautiously, check their grammar and revise the way they presented their arguments in writing. Clearly, the students perceived blogging as an effective tool for teaching writing in English which helped to improve their writing and keep them motivated. It is suggested that further research emphasize the challenges of integrating blogging for teaching and improving writing, as well as obtain data from the lecturers.
added: 2018-12-10T21:40:31.651Z | created: 2013-11-28T00:00:00.000
metadata:
{
"year": 2013,
"sha1": "77591317d05758445dc9ef2b7e3a0a5572b3d972",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.5539/ass.v9n16p95",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "77591317d05758445dc9ef2b7e3a0a5572b3d972",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
}
id: 216289807 | source: pes2o/s2orc | version: v3-fos-license
Epidemiology and survival outcomes in stages II and III cutaneous melanoma: a systematic review
Aim: Management of cutaneous melanoma (CM) is continually evolving with adjuvant treatment of earlier stage disease. The aim of this review was to identify published epidemiological data for stages II–III CM. Materials & methods: Systematic searches of Medline and Embase were conducted to identify literature reporting country/region-specific incidence, prevalence, survival or mortality outcomes in stage II and/or III CM. Screening was carried out by two independent reviewers. Results & conclusion: Of 41 publications, 14 described incidence outcomes (incidence rates per stage were only reported for US and Swedish studies), 33 reported survival or mortality outcomes and none reported prevalence data. This review summarizes relevant data from published literature and highlights an overall paucity of epidemiological data in stages II and III CM.
and targeted therapies for treating metastatic disease has led to the assessment and use of such drugs in the adjuvant setting for stage III patients at high risk of recurrence following surgery [5]. The efficacy of such therapies is yet to be established in stage II patients; however, clinical trial data may guide their use in practice [6]. With these factors considered, analysis of the global epidemiology for clinical stages II and III CM populations is necessary to better inform disease burden and treatment protocols.
We conducted a systematic literature review with the objective of gaining insight into global epidemiological data for the stage II and/or III CM population. We analyzed published literature reporting country-or region-specific incidence, prevalence, survival and mortality outcomes. To our knowledge, no published systematic reviews of the global epidemiology and survival/mortality rates of stages II and III CM are available.
Materials & methods
The methodology for conducting this systematic literature review was documented in a review protocol. The process was conducted in line with the Preferred Reporting Items for Systematic Review and Meta-Analyses statement [7].
Search strategy
Systematic searches of Medline and Embase were conducted up to 28 January 2019, to identify studies reporting outcomes for incidence, prevalence, survival and mortality in stage II and/or stage III CM populations. There were no restrictions on language, country, publication type or timeframe, to initially keep the scope broad. A range of search terms related to incidence, prevalence, survival and mortality (e.g., death, fatal) and stage II and III disease (e.g., 'stage 3' or 'stage III' or 'stage three' or 'stage 3a' or 'stage IIIa' or 'stage three a' or 'stage 3b' or 'stage IIIb') were used (Supplementary Figure 1). Hand-searching was performed to supplement the electronic searches. This included cross-referencing relevant systematic reviews and reference lists of included peer-reviewed publications and free-text keyword searching in internet search engines.
Eligibility criteria
Two reviewers independently screened titles and abstracts to determine eligibility for inclusion in this review. Any discrepancies were resolved by discussion, with a third reviewer assessing sources for which a decision could not be reached. Full-text English language publications were retrieved and assessed by the same method. All publications were screened against prespecified criteria, with studies reporting incidence or prevalence in stage II and/or III CM from a regional or national general population included for review. Publications reporting incidence data that identified patients from a national database or registry were also included. Incidence rates and incident cases (where incidence rates were unavailable) were captured. An incidence rate is defined as the number of new cases divided by the number at risk in a specified population within a given period and is typically reported as cases per 100,000 persons per year [2]. An incident case describes a newly diagnosed individual at a particular timepoint [8]. Studies that included patients with mucosal and uveal melanomas, subungual melanoma or melanoma of unknown primary region were excluded. Population sizes associated with survival or mortality outcomes were not considered a limiting factor for study inclusion. In order to summarize the most up-to-date published epidemiological data on CM, only publications from the last 10 years (2009 onward) were included.
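To make the incidence-rate definition above concrete, the short sketch below computes a crude rate per 100,000 person-years. The function name and all figures are hypothetical illustrations, not values taken from any included study.

```python
# Minimal sketch (hypothetical figures): crude incidence rate as defined above,
# i.e. new cases divided by the population at risk over a period,
# expressed per 100,000 persons per year.

def crude_incidence_rate(new_cases: int, population_at_risk: int, years: float) -> float:
    """Return cases per 100,000 persons per year."""
    return new_cases / (population_at_risk * years) * 100_000

# Example with made-up numbers: 250 newly diagnosed stage II cases
# in a region of 5,000,000 people observed for 1 year.
rate = crude_incidence_rate(250, 5_000_000, 1.0)
print(f"{rate:.2f} cases per 100,000 persons/year")  # -> 5.00
```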
Quality assessment
Methodological quality assessment of the included literature was conducted using two separate tools. The Joanna Briggs Institute Critical Appraisal Checklist for Prevalence Studies was used to assess bias in included studies which reported incidence and prevalence [9]. The Risk of Bias Assessment Tool for Non-randomized Studies was adapted to assess included observational studies reporting survival and mortality outcomes [10].
Results
The electronic search strategy identified 2835 publications. Three additional publications were identified from hand-searching. Following the removal of duplicate records, 1942 publications were assessed for eligibility. A total of 91 publications were eligible for inclusion based on title/abstract screening. In total, 41 publications were included either as full-text peer-reviewed publications (n = 35) or as conference abstracts (n = 6). A total of 14 publications reported incidence data and none reported prevalence data. A total of 33 reported survival and mortality rates in the stages II and/or III melanoma populations (Supplementary Figure 2). A total of 20 publications were excluded at full-text due to lack of relevant data; most of these publications reported outcomes combined across multiple disease stages (e.g., including stage I and/or IV) and therefore data specific to stage II and/or III could not be separately extracted. In addition, 20 publications were excluded because they reported data for a melanoma population that was not of interest for this review (e.g. patients with uveal melanoma). Seven publications were excluded for reporting duplicate data and three were not published in the English language (Supplementary Figure 2).
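The screening flow described above can be checked arithmetically; the sketch below simply restates the counts from the text (the variable names are descriptive only) and verifies that they are internally consistent.

```python
# Sketch: arithmetic consistency of the reported screening flow.
records_identified = 2835 + 3              # electronic searches + hand-searching
eligible_on_title_abstract = 91            # after assessing 1942 de-duplicated records
excluded_at_full_text = 20 + 20 + 7 + 3    # no stage-specific data, wrong population,
                                           # duplicate data, non-English
included = eligible_on_title_abstract - excluded_at_full_text
assert included == 41 == 35 + 6            # 35 full-text publications + 6 conference abstracts
print(records_identified, included)        # 2838 41
```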
A narrative approach has been taken for this review due to differential reporting of incidence and survival data between studies.
Characteristics of included incidence studies
Outcomes from included incidence studies were captured and reported in two separate data tables: publications reporting incidence rates (Table 1) and publications reporting country- or region-specific incident cases (Table 2).
A single Swedish study reported age-adjusted incidence rates for the stage II melanoma population [13]. Unadjusted incidence rates for stages II and III melanoma populations were reported in two studies from the USA (Table 1) [11,12].
In the identified publications, data were mostly reported as the number of incident cases within a specific country/region over a specified timeframe (n = 14). Of these publications, five reported the size of the general population from which the incident cases were identified. For the remaining studies, we sourced country-specific population estimates using a recent United Nations report (Table 2) [26]. Three studies reported stage II/III data from the USA [12,24,25] and ten studies reported stage II and/or III in seven European countries covering Denmark [15], England [16], Estonia [18], Germany [23], the Netherlands [17], Sweden [13,19,21,22] and Spain [20].
Incidence rates in stages II & III melanoma
Overall, three studies reported incidence rates in stage II and/or III CM (Table 1). Stromberg et al. compared age-adjusted incidence rates between the Swedish western and southern healthcare regions from 2004 to 2013 in the stage II population, reporting rates of 5.0 (95% CI: 4.7-5.3) and 3.6 (95% CI: 3.4-3.8) per 100,000 persons/year, respectively (Table 1). The age-adjusted incidence rate was combined for stages III and IV, so these data cannot be reported in this review. However, notably, disease mapping within the study showed a higher frequency of earlier stage tumors (stages I-II) in the western region and, conversely, more advanced stage tumors (stages III-IV) in the southern region [13]. An incidence rate of 2.36 (SD: 2.07; range: 0-19.4) per 100,000 persons/year for stage II patients between 2008 and 2012 was reported by Fleming et al. in a population-based US study (Table 1) [11]. The study explored the association between the density of primary care providers (PCPs) and melanoma incidence, using population data from the Surveillance, Epidemiology and End Results (SEER) program, which represents approximately 30% of the US population [14]. Notably, in the counties designated health professional shortage areas (n = 138), an incidence rate for stage II disease of 2.07 (SD: 2.14; range: 0-12.9) per 100,000 persons/year was reported for the same period (Table 1). The results showed a statistically significant correlation between a higher PCP density and a higher overall melanoma diagnosis rate for this stage of disease [11]. When studying the same outcome in stage III disease, there was no statistically significant association between incidence and PCP density. For stage III disease, a rate of 1. (Table 1) [11]. Another US study, which assessed incidence rates in the SEER program, showed an increase in the stage III incidence rate between 2010 and 2014 from 1.21 to 1.48 per 100,000 persons/year (based on American Joint Committee on Cancer [AJCC] 7th edition staging criteria; Table 1). Similar rates were observed when AJCC 8th edition staging criteria were used (2010: 1.23 per 100,000 persons/year; 2014: 1.47 per 100,000 persons/year) [12]. The statistical significance of this increase was not reported. Despite a small difference between incidence rates, AJCC edition had little observable effect on the incidence of stage III overall; however, at a substage level, the effect of AJCC staging criteria was more apparent (Table 1).
Incident cases in stages II & III melanoma
Incident cases in stage II melanoma
Stage II incident cases were reported in 13 publications. We identified data from three US studies [12,24,25] and ten European studies (Table 2) [13,15-21,23]. The numbers of incident cases were reported for a range of timeframes covering the period from 1989 to 2016, using data identified from national cancer registries and hospital records. Publications did not consistently report the size of the general population at the time of data collection, limiting interpretation of the number of incident cases. Additionally, the data source used did not always represent the total melanoma population within the region or country. For example, two studies reported a disparate number of incident cases for a US population: Bhatt et al. reported 59,424 stage II cases over a 12-year period from the National Cancer Database [24] and Evans et al. reported 9985 cases over a 5-year period from the SEER program database. This difference is likely driven by the higher coverage of the National Cancer Database (70%) [27] than that of SEER (30%) [14].
A number of retrospective studies analyzed the number of stage II incident cases over a series of discrete time periods (typically every year or every 5 years) [13,16,18,21,23]. Of these, three reported an increase in stage II incident cases over time [13,16,21]. These increases occurred in the southern and western healthcare regions of Sweden (Table 2). Adjusted incidence rate ratios indicated statistically significant increases for all melanoma stages [16]. In particular, they observed a 3% increase per year (95% CI: 2-4%) for stage II melanoma [16]. One study in Germany reported an overall decline in the number of cases (2002, n = 814; 2011, n = 720); however, the number of cases remained relatively constant from 2003 onward [23].
Fluctuations in incident cases over time in stage III CM were reported by four publications covering Estonia, Germany and Sweden [13,18,21,23]. Incident cases in Estonia from 1995 to 2012 were reported by Padrik et al. from the Estonian Cancer Registry. Data over the 18-year period were presented in three 5-year periods (1995-1999; 2000-2004; 2005-2009) and one 3-year period (2010-2012; Table 2). The number of incident cases peaked between 2005 and 2009 (n = 111); however, as the final timeframe was only 3 years (2010-2012; n = 71), it is not possible to determine a trend [18]. Data from the southern and western healthcare regions of Sweden showed a clear increase in the number of stage III cases; notably, cases nearly doubled over two specified time periods (Table 2). In the Stockholm region, the highest number of incident cases was reported in 2007 (n = 51), before a gradual decline was observed [13]. Data for a German population also showed an overall increase in the number of cases (2002, n = 260; 2011, n = 392; Table 2) [23].
Two publications reported incident cases in stage III CM in the USA from the SEER database between 2010 and 2014 [12,25]. Despite the difference in the number of incident cases reported in the two studies, a percentage analysis demonstrated that stage III patients comprised around 7% of all melanoma patients in each study [12,25].
Survival outcomes in stages II & III melanoma
A summary of publications reporting survival and mortality rates in stage II and/or III melanoma is presented in Table 3. Variables including cohort size, survival definition, treatment or diagnosis period and interventions given to patients make data interpretation challenging and prohibit direct comparisons. For the purpose of presenting a clear dataset, weighted means have been calculated for publications with complex subgroup data (Table 3). Subgroup breakdown is provided in Supplementary Tables 3-8.
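As a minimal illustration of the pooling described above, the sketch below computes a weighted mean of subgroup survival rates, weighting each rate by its cohort size. The subgroup rates and sizes are hypothetical and are not taken from Table 3 or its supplementary tables.

```python
# Minimal sketch with hypothetical subgroups: weighted mean of subgroup survival rates,
# weighting each subgroup's rate by its number of patients.

def weighted_mean(rates_and_sizes):
    """rates_and_sizes: list of (survival_rate_percent, n_patients) tuples."""
    total_n = sum(n for _, n in rates_and_sizes)
    return sum(rate * n for rate, n in rates_and_sizes) / total_n

# Hypothetical substage data for one publication: (5-year survival %, cohort size).
subgroups = [(78.0, 120),   # e.g. a stage IIIa subgroup
             (59.0, 310),   # e.g. a stage IIIb subgroup
             (40.0, 330)]   # e.g. a stage IIIc subgroup
print(f"Weighted mean 5-year survival: {weighted_mean(subgroups):.1f}%")  # -> 53.8%
```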
Five-year disease-specific survival rates in stage II melanoma
Disease-specific survival (DSS) is defined as the percentage of people in a study or treatment group who have not died from a specific disease in a defined period of time [55]. The term melanoma-specific survival (MSS) is also covered by this definition and was used in several publications. A total of four studies from the USA or Japan reported either DSS or MSS rates in a stage II melanoma population (Table 3) [25,34,36,51]. A 5-year MSS rate of 81% was observed among a cohort of 738 patients treated between 1993 and 2013 in the USA [34]. Another US study, by Evans et al., reported a 5-year DSS rate of 78% for 9985 stage II patients diagnosed between 2010 and 2014 [25]. A Japanese study comparing survival between patients who received adjuvant DAV-IFN-β therapy and those who did not between 1998 and 2009 reported 5-year MSS rates of 88 and 76%, respectively (Table 3) [51]. A propensity score-matched analysis used to adjust for confounding revealed no significant difference between the two study arms [51]. Overall, 5-year DSS and MSS rates in stage II ranged from 63 to 81%, with most studies reporting a rate of over 70% (Table 3).
Five-year DSS rates in stage III melanoma
Ten studies reported 5-year DSS or MSS rates in a stage III population: four in the USA, four in Europe, one in Asia and one in Australia (Table 3) [25,28,32,37,40,43,44,46,51,53]. Single-center data reported by Bowles et al. showed a traditional 5-year DSS rate of 52% in 760 patients treated between 1990 and 2001 in the USA (Table 3) [32]. Notably, stage IIIb and IIIc patients made up 82% of the stage III population, and substage analysis showed that stage IIIa patients had a higher 5-year DSS rate of 78% [32]. This study also reported conditional survival estimates, noting that the 5-year conditional DSS for all stage III patients increased from 45% at the time of diagnosis to 89% for survivors at 5 years. The largest increase in conditional DSS from time of treatment to 5-year survival was in stage IIIc (39-78%) [32]. Martinez et al. examined a substantially larger cohort from SEER (n = 6868) and observed a 5-year MSS rate of 59% for patients diagnosed between 1988 and 2006 (Table 3) [37]. When stratified by time period of diagnosis, the 5-year MSS rate was 51% for those diagnosed between 1988 and 1999 and 62% for those diagnosed in the later period (Supplementary Table 7) [37]. Based on univariate analysis, the treatment era was considered a statistically significant predictor of MSS (p < 0.001) [37]. A multicenter study reported a 5-year MSS rate of 48% in a cohort of stage IIIb and IIIc patients (n = 173) treated between 2003 and 2007 in The Netherlands (Table 3) [44]. Similarly, a single-center Dutch study, which assessed a larger patient cohort (n = 250) over a longer timeframe (2000-2015), reported a 5-year MSS rate of 59% in stage IIIb patients [43]. In a hospital-based Japanese study by Matsumoto et al., an MSS rate of 65% was demonstrated in stage III patients treated with DAV-IFN-β therapy, compared with a rate of 36% in stage III patients not receiving DAV-IFN-β therapy [51].
Overall, DSS or MSS rates in stage III ranged from 36% to 63%, with most studies reporting a rate of over 50% (Table 3).
Five-year disease-free survival or recurrence-free survival rates in stage III melanoma
Disease-free survival or recurrence-free survival (RFS) is the length of time following a primary cancer treatment that a patient survives without any further signs or symptoms of that cancer [55]. Disease-free survival or RFS was reported in eight of the studies which met inclusion criteria for this review (Table 3) [21,32,35,43,45,49,51,53].
In a study of stage III patients treated between 2009 and 2015 in the USA, Kurtz et al. reported a 5-year RFS rate of 77% (Table 3) [35]. There was a statistically significant difference in 5-year RFS between disease stages. Notably, 5-year RFS rate in the stage IIc population was lower than that reported in the stage IIIa population; however, disease recurrence was experienced at an earlier timepoint for stage IIIa patients [35]. Comparatively, a 5-year RFS rate of just 17% was observed in a cohort of 239 patients in Sweden diagnosed between 2005 and 2012, despite a 5-year overall survival rate of 57% for the same patient group (Table 3) [21]. Data from Germany showed a 5-year RFS rate of 57% in a larger cohort of 1669 patients diagnosed between 1976 and 2007 [45]. The study also reported that as time progresses post-diagnosis, the risk of developing recurrence significantly decreases (p < 0.05) [45].
Quality assessment of epidemiology studies
Supplementary Figure 3 presents results from the Joanna Briggs Institute assessment. Strengths of the studies included their retrospective design, adequate sample size and the description of participant characteristics. Major limitations of many studies were the lack of clear population denominators, which precluded calculation of incidence rates and cumulative incidence, in addition to the lack of statistical analysis per melanoma stage. All studies employed a retrospective study design. Ten studies described valid methods to identify the condition, including the use of International Classification of Disease coding [11,12,16,18,19,21-25].
Discussion
This review indicates a paucity of published incidence and prevalence data specific to stage II and III CM populations. Most of the included studies reporting incidence data are in US or Swedish populations. Only three published studies provided incidence rates: Stromberg et al. presented an age-standardized incidence rate, and the remaining two identified patients from the same data source (US SEER) [11-13]. There is a clear need for more epidemiological studies in this patient population. Despite the breadth of information that can be obtained from publicly available databases such as Globocan and IHME, these sources lack the granularity to obtain direct incidence/prevalence estimates by clinical stage.
Notably, our review yielded no published literature reporting incidence and prevalence of the stage II and III populations from Asian countries. This may in part be due to the rarity of the condition in this area of the world [56]. However, the review did capture mortality and survival data from four publications from China and Japan [49-52].
The apparent rise in incidence rates and incident cases of stages II and III CM reported by publications captured in this review could be suggestive of an overall increase in incidence in this patient population on a larger scale [12,13,16,18,21].
Most incidence data captured in this review were reported as incident cases. Unrepresentative population coverage of national databases combined with limited reporting of general population details in the included publications restricts interpretation of trends in incidence. Crude incidence rates could be determined from incident cases using population sizes extracted from the UN database but would have to be interpreted with caution [26]. Further, change in general population size over time is a significant consideration for included studies, which reported general population estimates for a timepoint toward the end of their study period, as a crude incidence rate calculated from the reported data would likely underestimate incidence at the beginning of the study period [13,18,21]. Comparing adjusted incidence rates is preferred, to account for differences such as age and time of diagnosis within a population [57].
Incident cases reported in East Anglia (UK), Estonia, Germany and in different healthcare regions of Sweden could be used to estimate the change in the number of newly diagnosed cases over time and, in some cases, to identify specific timepoints of peak incidence. However, changes in general populations over time were not described for the included studies that reported absolute incident cases. Moreover, clinical stage was reported as unknown for some patients, meaning absolute numbers are difficult to compare within melanoma stage [15,18,22,24]. Consequently, we cannot accurately conclude any trends in incidence based on the results from this review.
Other literature suggests a higher rate of earlier stage diagnosis, which may be due to improved diagnosis, screening programs and public health awareness. In a study which assessed the correlation between the rate of skin biopsies and incidence of melanoma in the USA between 1986 and 2001, a mean biopsy rate increase of 2.5-fold among patients aged ≥65 was observed. During this period, incidence of melanoma increased from 45/100,000 persons to 108/100,000 persons, with 1000 additional biopsies, equating to an extra 6.9 melanoma diagnoses [58]. Notably, data reported by Padrik et al. showed a peak in the number of incident cases of stage II CM (n = 327) compared with the lowest number of incident cases of stage III (n = 58) during the same period in Estonia [18]. Moreover, data published by Fleming et al. showed a higher density of primary care providers was correlated with a higher number of stage II incident cases, suggesting patient access as well as public health awareness campaigns play a role in influencing higher incidence in earlier stage disease [11].
Stromberg et al. reported disparate age-adjusted incidence rates for stage II disease between the southern and western healthcare regions of Sweden [13]. This could be explained by several factors, including risk behavior and UV-exposure. These factors were also discussed by Padrik et al., who referred to Estonia's increased accessibility and affordability of holidaying in sunny locations and use of tanning beds following an open market transition [18].
Changes to diagnosis or staging criteria over time may influence 'true' incidence of cancer stage. As demonstrated from results captured in the review by Tarhini et al., differences in incidence rates are observed based on the AJCC staging criteria used. Overall, an increase in incidence was shown regardless of staging criteria (7th edition vs 8th edition AJCC); however, a significant number of patients were reclassified in a higher stage III subgroup under 8th edition criteria [12]. These results could suggest movement toward a lower threshold for higher grade diagnoses over time. Notably, when assessing the different AJCC staging criteria used across longer timeframes over all studies, it becomes increasingly challenging to decipher trends.
Five-year DSS or MSS rates for stage II CM ranged from 63 to 81% and from 36 to 63% for the stage III. The ranges could be explained by variation in treatments given to patients. Chi et al. reported that surgical intervention and the use of adjuvant therapies were significant prognostic factors for patients with stages I-III melanoma [49]. Other reported factors known to influence survival rates include treatment era, adverse population characteristics and age [37,43,46]. Survival rates were generally reported from large registry data and single-center databases. It is likely that the same patient population was captured in different studies in this review, due to geographical and time period crossover. Additionally, when comparing across studies, different study populations had varying levels of substage breakdown reported, with some publications not reporting these data subsets at all.
Regarding patient survival as an outcome, conditional survival estimates may be an effective approach to account for changes in patient risk profiles over time, in order to more accurately predict longer-term survival. Bowles et al. reported an increase of 44% in 5-year DSS from time of diagnosis to 5 years post-diagnosis when applying this method [32].
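For readers unfamiliar with the calculation, conditional survival is typically derived from the same survival curve: the probability of surviving a further t years given survival to year s is CS(t | s) = S(s + t) / S(s). The sketch below uses a hypothetical survival curve (not the Bowles et al. data), with values chosen only to illustrate how an unconditional 5-year estimate of 45% can correspond to a conditional estimate near 89% for patients who have already survived 5 years.

```python
# Minimal sketch (hypothetical numbers): conditional survival from a survival curve,
# CS(t | s) = S(s + t) / S(s), i.e. the probability of surviving t further years
# given that the patient has already survived s years.

def conditional_survival(surv, s, t):
    """surv: dict mapping years since diagnosis -> survival probability."""
    return surv[s + t] / surv[s]

# Hypothetical 10-year disease-specific survival curve for a stage III cohort.
survival_curve = {0: 1.00, 1: 0.85, 2: 0.72, 3: 0.62, 4: 0.53, 5: 0.45,
                  6: 0.42, 7: 0.41, 8: 0.40, 9: 0.40, 10: 0.40}

# 5-year survival conditional on having already survived 5 years:
print(f"{conditional_survival(survival_curve, 5, 5):.2f}")  # 0.40 / 0.45 ≈ 0.89
```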
Ultimately, for studies reporting survival and mortality, heterogeneity in the clinical characteristics and in the reporting of interventions patients received significantly limited comparison of outcomes, making it difficult to draw meaningful conclusions from the existing literature.
Conclusion
Overall, the aim of this review was to gain further insight into the epidemiology of stages II and III CM, as these outcomes inform clinical burden of the disease in this patient population. We have been unable to gain conclusive knowledge in this area or identify meaningful trends due to reporting limitations (specifically the reporting of incidence data). Further, this review highlights a gap in published research on the epidemiology of stages II-III CM. Ultimately, the findings presented here provide a platform for the planning of relevant studies that will generate detailed evidence to inform the treatment and management of stages II and III CM.
Future perspective
We predict that melanoma research will continue to evolve rapidly to keep pace with advancements in treatment of the disease. Currently, published epidemiological studies in stages II and III CM are lacking and much of the available data cannot be used to determine trends. Efforts to support the development of high coverage cancer registries are paramount to developing the evidence base for interventions with potential for better disease management. Comprehensive records including patient stage at diagnosis are required. The healthcare sector has a responsibility to collect data that will accurately inform national and regional population-based registries and the implementation of mandatory reporting would encourage quality coverage.
Supplementary data
To view the supplementary data that accompany this paper, please visit the journal website.
No writing assistance was utilized in the production of this manuscript.
Open access
This work is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-nd/4.0/
added: 2020-03-26T10:17:53.056Z | created: 2020-03-19T00:00:00.000
metadata:
{
"year": 2020,
"sha1": "c958da25d2fe308f778af15d73887c18f36bda61",
"oa_license": "CCBYNCND",
"oa_url": "https://www.futuremedicine.com/doi/pdf/10.2217/mmt-2019-0022",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ee95482f2946d5f4b5d31c31baa5ee029a651714",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
id: 234309514 | source: pes2o/s2orc | version: v3-fos-license
Reality, Innovation And The Challenges Of Using Village Funds For Improving The Quality Of Life In The Community (Study in Some Villages On Kupang Regency)
Purpose: The purpose of this study is to determine the reality, innovations and challenges of using Village Funds to improve community quality of life (case studies in several villages in Kupang Regency). Research Methodology: This research is a qualitative descriptive study. The data collection techniques used were questionnaires, interviews and documentation studies. Results: The results of this study indicate that, in several villages on the poverty line in Kupang Regency, the readiness of village officials and village communities to utilize and use village funds is still low. Limitations: This research was conducted only as a survey of several villages located on the poverty line in Kupang Regency, namely Oesao Village, Oebelo, Mata Air and East Baumata Village. Contribution: The results of this study are expected to serve as material for consideration and evaluation in the use of village funds to improve the quality of life of the people in Kupang Regency.
Introduction
The regional development of East Nusa Tenggara Province is currently being directed towards community welfare and an increase in the quality of life. This is shown through the attention of the central government, which is strengthened by the existence of village funds. Village funds have been provided from 2015 until the present (2020), with the amount of village funds in 2019 reaching IDR 800 million. The absorption of these funds has been optimized through Ministerial Regulation No. 16. The results of the survey and the 2020 data on priority locations for handling poverty and stunting (Bappeda NTT) cover several villages located on the poverty line in Kupang Regency, consisting of the villages of Oesao, Oebelo, Mata Air and East Baumata. These four (4) villages show that the readiness of village officials and village communities in utilizing and using village funds is still low. This means that the villages have not been able to manage large amounts of funds because they do not have human resources with adequate knowledge and expertise to translate the work programs handed down from the central government, to manage village finances, to coordinate the implementation of village development and to empower village communities. Based on this, the authors were interested in conducting research on the topic: Reality, Challenges and Innovations for Using Village Funds in Improving the Quality of Life of the Community (Case Studies in Several Villages in Kupang Regency).
Taking into account the problems that occurred in the villages studied in Kupang Regency, the researchers formulated several research problems as follows: 1. What are the realities of, and challenges in, using village funds to improve the quality of life of the community? 2. What innovations and strategies can be produced to overcome problems in the use of village funds? 3. What model can be developed for the effective and efficient use of village funds? To answer these problems, the following objectives were set: 1. To identify the realities and challenges experienced by the community before and after obtaining village funds; 2. To analyze innovations and strategies for using village funds; and 3. To develop a model for the effective and efficient use of village funds in improving the quality of life of the community.
Literature Review
2.1 State of the Art
The state of the art in this research is that attaining quality of human life requires attention to, and the instilling of, three conceptual approaches. The first approach, developed closely with psychological research, is based on the idea of subjective well-being, meaning that striving for humans to be 'happy' and 'satisfied' with their lives is a universal goal of human existence. The second approach is rooted in the idea of capability, which views a person's life as a combination of activities and states of being, together with the freedom to choose among these functionings. The foundation of this capabilities approach has strong roots in philosophical ideas about social justice; it reflects a focus on human goals, respects the individual's ability to pursue and realize the goals he or she believes in, and gives ethical principles a role in designing a good society. The third approach, developed in the economics tradition, is based on the idea of fair allocation.
This level of novelty opens the horizons of thinking and acting of village fund managers to create and produce village products that are valuable in the eyes of the community. The village is a representation of the smallest legal community unit that has existed and continues to grow. As a form of State recognition of villages, especially in order to clarify the functions and authority of villages and to strengthen the position of villages and village communities as subjects of development, policies for structuring and regulating villages were needed; these were realized by the issuance of Law Number 6 of 2014 concerning Villages 10). The 2019 State Budget Financial Note states that, in 2019, the allocation of village funds will be directed at supporting more inclusive and transparent development.
Village Community Life
Providing greater opportunities for villages to manage their own governance, as well as the equitable implementation of development, is expected to increase prosperity (which refers to social conditions in which social needs are met and social opportunities are created) and the quality of life of rural communities, so that problems such as inequality between regions, poverty and other socio-cultural problems can be minimized 2). Many obstacles are still found in the distribution and use of Village Funds, causing the desired village development not to be implemented; one of these obstacles is poverty 1). In the planning and budgeting stage, the village government must involve the village community, as represented by the Village Consultative Body (BPD), so that the work programs and activities that are compiled can accommodate the interests and needs of the village community and match the capabilities of the village 1). Village development needs to be enhanced by empowering the local economy, creating access to local transportation in growth areas, and accelerating the fulfillment of basic infrastructure 3). To see the growth and development of the use of Village Funds, four variables are the main highlights in the discussion of this research, the first of which is described below.

1. Reality of Society. Village communities are always synonymous with people who lack information and communication about the development of modern technology and who have very low knowledge and skills. Fulfilling the needs for clothing, food and shelter is always limited due to the lack of road access and transportation. Another factor is that government agencies do not pay attention to the needs and interests of village communities. For this reason, the village government must be more focused and united with the community in order to be more independent in managing the government with the various natural resources owned, including the management of finances and assets belonging to the village 12). An example of a problem occurring today is that the performance of the distribution and use of Village Funds still faces obstacles 2). The distribution process should run from the State Public Cash Account (RKUN) to the Regional General Cash Account (RKUD), or from the RKUD to the Village Cash Account (RKDes), but this did not happen; likewise, the process of using funds should go through village-owned enterprises (BUMDes).
Village Fund Innovations
Innovation, broadly defined, is a process resulting from the development and use of knowledge, skills and experience, individually or in groups, to create or improve a product in the form of goods or services that provide added value in the fields of infrastructure, human resources, the economy and socio-culture. Village innovation is accordingly the process of developing knowledge, skills and experience gleaned from the villages' own work in implementing existing or recent village development, in the form of goods or services that provide added value in a sustainable manner, whether through infrastructure development, human resource management, the economy or socio-culture. The objective of the village innovation program is to improve the quality of the use of village funds through development activities and community empowerment that are more innovative and more sensitive to the needs of village communities.
Challenge
The powerlessness of human resources in managing village funds has an impact on the development and empowerment of rural communities. For this reason, the village government is obliged to prepare planning and budgeting that involve the village community, with great responsibility, applying the principle of accountability in governance: everything done in the implementation of village governance must ultimately be accounted for to the village community in accordance with the provisions.
Improved Quality of Life
Quality of life is a broader concept than economic production and living standards. It includes the full set of factors that affect what we value in life beyond its material side: health, education, personal activities, governance, social connections and environmental conditions. Each individual has a different quality of life, depending on how that individual addresses the problems that befall him or her: faced positively, quality of life will be good; faced negatively, it will be poor and will contribute to poverty. The causes of poverty related to the human condition itself are a lack of belief in one's own abilities, a reluctance to actualize existing potential in the form of serious real work, and an unwillingness to make optimal use of the cycles of time. According to dominance-dependency theory, the causes of poverty and underdevelopment are not just factors internal to the community concerned, such as lack of capital, low education, population density or poor nutrition. These factors are merely attributes of poverty; poverty itself is rooted in a history of exploitation, especially that practiced by penetrating foreign or international capitalist powers, domination and profit-taking. Literally, poverty comes from the word poor, which means "having no possessions". According to Koenrad in Sarosa (2006), exaggerating poverty tends to make us forget what the poor do have. The poor are not people "without": from an economic point of view they are people who have "a little", and they also possess rich cultural and social capital. Experts note that countries or regions with high poverty levels are generally caught in a cycle of poverty, a series of circular forces that interact with one another in such a way that a country or region with a high poverty rate remains trapped in backwardness.
According to Sumodininggrat (2003), there are three strategies for empowering the community: 1) creating an atmosphere or climate that allows the potential of the community to develop; 2) strengthening the potential or power of the community; and 3) providing protection, so that those being empowered are prevented from becoming weaker during the process.
In the context of poverty, society should not be approached only as an object; it must be viewed as a subject or actor, here grouped into low-income groups of people.
Research Design
Epistemologically, quantitative research accepts the paradigm that the main source of knowledge is facts that have occurred, captured by the senses and consistent with previous theories. Qualitative research, by contrast, is a humanistic research model which places humans as the main subject in socio-cultural events. In Weber's view, human behavior is not necessarily a consequence of the views or doctrines that live in the heads of the actors. Ontologically, it is not enough to record what appears real in social phenomena, culture and human behavior; the whole must be examined in the totality of its context.
Research Approach
This approach is considered the most appropriate because scientific research is basically an attempt to reveal natural phenomena in a systematic, controlled, empirical and critical manner. Translated into the language of statistics, the research approach is an attempt to reveal the influence between variables. This study used an in-depth survey method (in-depth interviews). The account of the research is descriptive, but as a relational study the focus lies on explaining the relationships between variables.
Variable Operationalization
This study uses dependent, independent and mediating variables. They are explained as follows: 1. Independent variables, that is, variables that can influence or cause other variables. In this study the independent variables were the reality, innovation and challenges in the use of village funds to improve the quality of life of the community, measured with the instruments developed by Dwiyanto (2012). 2. Dependent variable. The dependent variable (Y) in this study is the improvement in the quality of life of the community.
Population and Sample
The population and sample are formed at the unit of analysis consisting of the village head, the youth organization, the PKK activator team, the village community and the Village Consultative Body (BPD). The type of probability sampling chosen is simple random sampling. The focus of the research is on the communities and village governments that receive the Village Fund, observing innovation in the use of village funds to improve the quality of life of the community.
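To make the sampling procedure concrete, the sketch below shows how a simple random sample could be drawn across the analysis units named above. The village and unit names come from the text; the group sizes and the sample size of 40 are illustrative placeholders, not figures from the study.

```python
import random

# Hypothetical sampling frame built from the villages and analysis units named
# in the text.  Ten respondents per (village, unit) cell is a placeholder.
villages = ["Oesao", "Oebelo", "Mata Air", "East Baumata"]
units = ["village head", "youth organization", "PKK activator team",
         "village community", "BPD"]

frame = [(v, u, i) for v in villages for u in units for i in range(1, 11)]

random.seed(42)                     # reproducible draw
sample = random.sample(frame, 40)   # simple random sampling without replacement

for village, unit, idx in sample[:5]:
    print(f"{village} - {unit} #{idx}")
```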
Data Analysis
The selected research locations were villages in Kupang Regency. Primary and secondary data were collected through questionnaires, interviews, documentation and focus group discussions. Analysis was carried out by means of qualitative description.
The Reality Of The Society
Society is formed because a group of people want to interact and complement each other's needs. A life of interacting and complementing one another opens the horizons of certain groups to make new breakthroughs in facing a world that is always advancing and developing. The village communities referred to in this research are the people of Oesao, Oebelo, Mata Air and East Baumata villages. The community faced many problems before the existence of village funds, namely lack of water, food shortages due to drought, many children affected by malnutrition and stunting, and unemployment.
In community life, however, there are boundaries that community groups can and cannot cross, because society consists of people who live in urban areas and of rural communities. These boundaries involve the following factors:
Structural factors
The social structure is a guideline for every community member, acting as a partner or network of cooperation that serves to integrate shared characters in behavior. Urban communities have a more individual character: people work and fulfill their needs in the interests of themselves and their families. Rural communities lead more complex lives because there are cultural, religious and ethnic demands that must be obeyed. In view of this, there is a phenomenon whereby rural communities have a strong desire to urbanize: in the city there are many jobs, people are not bound by customary rules or culture, and daily needs are easier to meet. Human interaction with the environment is basically driven by the desire to fulfill basic human needs. All the experience that humans acquire from interacting with the environment is recorded in memory. Experiences that are good and beneficial are practiced in life over and over again and are called habits. Experiences that are unpleasant or detrimental are avoided, giving birth to the concept of taboo, known locally as pemali. The whole body of experience, whether habits or taboos, is called knowledge, and the systematic and repeated use of knowledge is called science. Theoretically, meeting the main basic needs consists of: a. fulfillment of basic biological needs, including clothing, food, shelter, reproduction, health and self-defense; b. fulfillment of social needs, including the need to live together to achieve common and individual goals, the formation of communities and social groups, and various social orders; c. fulfillment of integrative or psychological needs, including the need for ethics and morals, a sense of beauty, and so on.
Individual Factors
A human being may wish to live alone, which can help him avoid the negative influences that occur today (fraud, theft, corruption, and so on). But when he is part of an organizational group, that individual must develop organizational behavior, acting and interacting with other people. He must be able to carry out several activities within one job and to exploit them through the resulting performance, so that village productivity can be achieved. Knowledge is the result of human experience obtained from the process of interaction with the environment. According to Charles Erasmus, every person essentially has two important elements, namely motive and sensory power. Sensory power is active and is acquired repeatedly from past experience. The combination of motives and senses produces wants and desires that become behavior. Changes in a person's experience provide opportunities for changes in his desires. Thus repeated past experience is an important element in shaping a cultural pattern in the form of ideas, behavior and the results of behavior.
Economic Factors
The main purpose of human life is to meet personal and family needs, a goal that is achieved through work. City communities work in government agencies, the private sector, non-governmental organizations (NGOs/LSM), and micro, small and medium enterprises (UMKM). Villagers work in the fields as farmers, as fishermen and as manual laborers. There is a very basic difference in how they are paid, yet rural communities are the backbone of urban society in meeting its needs for food and drink. The main source of livelihood for dry-land communities in NTT is farming, which comprises two sources of life: dry-land cultivation and raising livestock. Apart from farming and raising livestock, people who live on the coast make a living by fishing at sea and gathering edible marine life (fish, shellfish, snails, crabs and seaweed) along the coast during tombeting (low tide). Dry-land farming takes the form of shifting cultivation, gardening (tree or perennial crops) and the use of yard land. There are two types of cultivated land: new fields, which have just been opened by clearing shrubs and village forest and then burning, and old fields, which have been cultivated for several years. In shifting cultivation, after the fields have been worked for several years they are left fallow (bero) for several planting seasons because soil fertility has declined.
Availability of infrastructure factors
Linked to this repeated past experience, the East Nusa Tenggara (NTT) region is also shaped by the environmental conditions of dry land with undulating, hilly and mountainous topography, as well as various recurring threats from the past that haunt the survival of the community. The environmental conditions of dry land, characterized by drought and the attendant risk of crop failure, must therefore always be taken into account by the people of NTT in their daily lives. This reality is experienced repeatedly and shapes the senses, perceptions and mindsets of the community, which in turn influence behavior as part of the culture of dry-land communities in a dry climate.
Community Innovation "We cannot solve our problems with the same thinking we used when we created them." -Albert Einstein
Community innovation is a special form of place-based social innovation, situated in the specific geography of the community. Social innovation has both a goal, the resolution of complex social and environmental challenges, and a journey: designing new approaches that engage all stakeholders and leverage their competence and creativity to design new solutions. As a dynamic 'living laboratory', communities offer the perfect platform for innovation. Community change will occur through effective innovations, which require an appreciation of the problem being solved as well as a deep understanding of the unique characteristics of a community, its place and its people. Every innovation has two parts: the first is the discovery of the object itself; the second is the preparation of expectations, so that the discovery is accepted when it arrives. The results of the research, through direct interviews with the village heads and several of the communities involved, showed that certain individuals opposed the use of village funds for the benefit of the community, while others were very sincere in supporting it.
Discussion
Given that the role of humans in the organization is very important, good cooperation is needed in pursuing a village's goals. No matter how good the plan made by the manager, without the support of employees in carrying out the work the goals will not be achieved. An employee may or may not do the job assigned to him well. If subordinates have carried out their assigned duties properly, that is what we want; if not, we need to know the reasons. Perhaps the employee is not able to complete the assigned job, or perhaps he lacks the drive (motivation) to work well. For employees to work hard and with high morale, and thereby increase village productivity, something is needed to motivate them, one element of which is paying wages in line with employees' expectations. If employees' wages are neglected by the Village, various problems will arise: employees become lazy, carry out strikes, or try to move to other villages that better guarantee their welfare. Conversely, if the village plans wages and employee welfare well and the employees accept them, this is considered one of the factors that can motivate them to increase productivity.
Labor productivity is therefore needed, because a person's productivity and skills develop through and in work. Low productivity and skills are often caused by mistakes in placement, assigning people to jobs that do not match their education and skills. This productivity problem is experienced by almost all large villages, as well as those classified as developing. For the resulting productivity to increase, the village can provide wages that motivate employees and improve employee discipline. An increase in productivity brings great benefits, such as those obtained by the village. The background of this research problem is that increasing the productivity (innovation) of a village is one of its main goals; achieving this increase requires several influencing factors. Owing to time constraints, the authors here discuss only the effect of employee wages on employee work productivity. To support the smooth production of village products, employees are also needed whose abilities are good and suited to their work, so that they can do it properly, because the abilities or skills an employee possesses have a very strong influence on productivity.
Developing the quality of human resources is increasingly important, because villages that employ human resources want good results and benefits and must keep up with the changes and developments that occur in the Village. Motivation and work experience play an important role in increasing work effectiveness, because people with high motivation and work experience will try their best so that their work succeeds as well as possible, producing an increase in work productivity (Moekijat, 1999). Every village wants the productivity of each employee to increase. To achieve this, the village must provide good motivation to all employees in order to achieve work performance and increase productivity. Additional work experience on the part of employees contributes greatly to the effort to reach higher productivity levels, as do the supporting work facilities provided to all employees.
These facilities include work clothes, guaranteed meals, recreation, places of worship, sports rooms, holiday allowances, treatment rooms, insurance, salaries, bonuses, overtime pay and so on. All of this is provided by the Village so that all employees working in it are truly secure and motivated to reach higher productivity. Education level and work experience are also taken into account: bookkeeping or office positions require at least a high-school education, while the production department requires at least a junior-high-school education. In recruiting, however, the Village gives priority to prospective employees who already have work experience from similar villages.
In relation to the Village Fund provided by the Ministry to all villages in Indonesia, the researchers relate the results above to the understanding and implementation of Village Funds in each village. The Village Fund Allocation (ADD) is part of the balance funds received by the regency/city; its amount is at least 10% of the balance fund after deducting the Special Allocation Fund.
General Village Finance Arrangements
Village finance covers all village rights and obligations that can be valued in money, as well as everything in the form of money and goods related to the implementation of those rights and obligations. Within this arrangement, the section head has the following duties: a. prepare an activity plan; b. carry out activities, alone or together with Village Community Institutions; c. carry out expenditure actions that burden the budget; d. control and report the implementation of activities to the Village Head; and e. prepare budget documents for the implementation of activities. 12. Who acts as treasurer in village financial management?
The treasurer is a staff element of the village secretariat in charge of financial administration, whose task is to administer village finances. 13. Community quality improvement. A village government that has knowledge, experience and motivation, and that always collaborates with the community in managing village funds, directly and indirectly improves the quality of life of the community towards prosperity, reduces poverty and increases job opportunities. This is indicated by: 1) fulfillment of basic needs such as food and nutrition, clothing, shelter, education and health (freedom from basic-need deprivation); 2) engagement in productive business activities; 3) access to social and economic resources; 4) the ability to determine one's own destiny without constantly receiving discriminatory treatment or living in fear, apathy and fatalism; and 5) freedom from mental and cultural poverty and from feelings of low dignity and self-respect (no unfreedom for the poor).
Conclusion
1) The people of Oesao, Oebelo, Mata Air and East Baumata villages face many problems, including lack of water, food shortages due to drought, many children affected by malnutrition and stunting, and unemployment. 2) Designing a new approach that engages all stakeholders, leveraging their competence and creativity to design new solutions: as a dynamic 'living laboratory', the community offers the perfect platform for innovation with the use of the Village Fund.
3) The low level of education means that several obstacles remain, especially in the management of village finances. As a suggestion, developing the quality of human resources is increasingly important, because villages that employ human resources want good results and benefits and must keep up with the changes and developments that occur in the Village. Motivation and work experience play an important role in increasing work effectiveness: people with higher education, high motivation and work experience will try their best so that their work succeeds as well as possible, producing an increase in work productivity.
Limitations and Future Research
This research was conducted only as a survey of several villages located below the poverty line in Kupang Regency, namely Oesao, Oebelo, Mata Air and East Baumata villages. Further research is expected to use other indicators not employed in this study and to increase the number of research samples, in order to obtain more in-depth results.
IN VITRO MULTIPLICATION OF VANGUERIA EDULIS AS AFFECTED BY CYTOKININS AND MEDIUM TYPE
Received: 27/2/2018 Accepted: 11/3/2018 ABSTRACT: This work was carried out in the Tissue Culture Laboratory, Horticulture Research Institute, Agricultural Research Center, Giza, Egypt during the period from 2015 to 2017, to investigate the effect of cytokinin type and concentration (BAP, 2-iP and kinetin, at 0.0, 1.0, 2.0 or 3.0 ppm) as well as medium type (MS or B5) on micro-shoot multiplication of the in vitro cultured rare ornamental plant Vangueria edulis. The significant results can be summarized as follows: the first position was obtained by BAP at 1 ppm in regard to survival %, shoot length and shooting %; BAP at 3 ppm for shoot number, leaf number and shoot length; 2-iP at 1 ppm for shooting %; 2-iP at 2 ppm for shoot length and shooting %; and kinetin at 2 ppm for survival %. Using MS medium gave rise to higher values of survival %, shoot number, leaf number, shoot length and shooting % compared to using B5 medium. The highest position was occupied by the combinations of BAP 1 ppm + MS for survival %, shoot number, shoot length and shooting %; BAP 2 ppm + MS for shoot number; BAP 3 ppm + B5 for shoot number and leaf number; 2-iP 1 ppm + MS for shooting %; 2-iP 2 ppm + MS for shoot number; kinetin at 1 or 3 ppm + MS for shoot length; and kinetin 2 ppm + MS for survival % and shoot length. It is recommended to treat in vitro produced micro-shoots of Vangueria edulis with BAP at 1 ppm on MS medium to obtain the highest values during the multiplication stage.
INTRODUCTION
Vangueria edulis Lam. (syn. V. madagascariensis J.F.Gmel.) belongs to the family Rubiaceae and is a tropical evergreen shrub or small tree native to Madagascar and continental Africa. It bears 5 cm fruits with a greenish skin and a flavor likened to a tart apple when unripe and to tamarind when ripe. It makes an attractive and rare fruiting specimen for the garden and grows in sandy clay loams. Seed germination is difficult owing to the hard seed coat (InterNet Site 1-3, 2018).
The effect of cytokinins is most noticeable in tissue cultures, where they are used to stimulate cell division and control morphogenesis. When added to shoot culture media, they overcome apical dominance and release lateral buds (George, 1993 and George et al., 2008). Kumar and Loh (2012) stated that one of the major components that have a significant effect on regeneration is the type and concentration of phytohormones in the medium; optimization of the appropriate phytohormone concentrations in the medium can also be empirically determined in an earlier set of exploratory experiments. Kanwar et al. (2013) stated that cytokinins promote shoot proliferation by inducing cell division and enlargement. Lee-Espinosa et al. (2008) declared that cytokinin concentration had a significant effect on the number of shoots regenerated from Vanilla planifolia 'Andrews'. Jana et al. (2013) mentioned that the addition of BAP to 1/2 MS significantly improved Acampe papillosa shoot growth.

George et al. (2008) reported that plant tissues and organs are grown in vitro on artificial media, which supply the nutrients necessary for growth. The most commonly used medium is the formulation of Murashige and Skoog (MS). This medium was developed for optimal growth of tobacco callus. Comparing MS to the elementary composition of normal, well-growing plants, they stated that the relatively low levels of P, Ca and Mg in MS are evident; the most striking differences are the high levels of Cl and Mo and the low level of Cu. Plant tissue culture media provide not only inorganic nutrients but usually also a carbohydrate (sucrose is most common) to replace the carbon which the plant normally fixes from the atmosphere by photosynthesis. To improve growth, many media also include trace amounts of certain organic compounds, notably vitamins and plant growth regulators. Kumar and Loh (2012) stated that various mineral formulations are available to culture plant tissues; the major media include MS medium (Murashige and Skoog, 1962) and Gamborg's B5 medium (Gamborg and Eveleigh, 1968). Generally, plant tissue culture media are made up of macro- and micronutrients, vitamins, phytohormones and sucrose. Adjuvants such as activated charcoal may be required for some species that show extreme tissue browning on excision and secretion of polyphenolic substances from the damaged cells.
MATERIALS AND METHODS
This work was carried out in the Tissue Culture Laboratory, Zohriya Garden, Horticulture Research Institute, Agricultural Research Center, Giza, Egypt during the period from 2015 to 2017, to investigate the effect of cytokinin type and concentration (BAP, 2-iP and Kin, at either 0.0, 1.0, 2.0 and 3.0 ppm) as well as medium type (MS or B5) on micro-shoots multiplication of in vitro cultured rare ornamental plant, Vangueria edulis.
Explant source and preparation:
In the establishment stage, a few branches were collected from Vangueria edulis plants growing in Zohriya Garden on July 20th, 2015. These branches were cut into small segments, washed under running tap water, soaked and stirred in 3% soap solution for 15 min, rinsed under running tap water for 60 min and then rinsed three times with distilled water. The segments were then handled under the hood in axenic conditions. Lateral buds were excised and sterilized by soaking in 20% Clorox solution for 20 minutes.
Explants were then rinsed three times with sterilized distilled water before being inoculated individually into jars, each containing 40 ml of autoclaved (121°C for 20 minutes under 1.05 kg/cm² pressure) full MS medium supplemented with BA at 3.0 ppm and adjusted to pH 5.8. Jars were kept in the incubation room at 25/20°C (day/night) ±2°C and 70% relative humidity. Two fluorescent tubes per shelf were installed 30 cm above the explants to provide a light intensity of 2200-2400 lux at explant level.
Experimental treatments:
One month later, surviving explants were used to investigate the effect of two medium types (the first factor), i.e. MS and B5 media, combined with different cytokinin treatments, BAP, 2-iP and kinetin at different concentrations plus a cytokinin-free control (the second factor), in a factorial experiment laid out in a completely randomized block design. The treatments were thus the twenty combinations of the two media with the ten cytokinin treatments (e.g. MS medium without cytokinins, MS medium + BAP at 2 ppm, and so on).
Data recorded one month later were survival %, shoot number, leaf number, shoot length (cm) and shooting %. To test the results of these experiments statistically, analysis of variance was carried out as described by Snedecor and Cochran (1989). Means were compared by Duncan's critical range at a probability level of 5% (Duncan, 1955).
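As a rough illustration of this analysis step, the following Python sketch runs a two-way ANOVA for the factorial design described above using statsmodels; the data are simulated placeholders (not the published measurements), and Duncan's test itself is not included because it is not available in statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)

# Illustrative data frame: 2 media x 10 cytokinin treatments x 3 replicates.
media = ["MS", "B5"]
cytokinins = ["control"] + [f"{c}_{ppm}ppm"
                            for c in ("BAP", "2iP", "kin") for ppm in (1, 2, 3)]
rows = []
for m in media:
    for c in cytokinins:
        for rep in range(3):
            rows.append({"medium": m, "cytokinin": c,
                         "shoot_number": rng.normal(2.0 if m == "MS" else 0.5, 0.3)})
df = pd.DataFrame(rows)

# Two-way ANOVA with interaction, matching the two-factor design in the text.
model = smf.ols("shoot_number ~ C(medium) * C(cytokinin)", data=df).fit()
print(anova_lm(model, typ=2))
# Mean separation (Duncan's critical range) would follow as a separate step.
```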
Effect of medium type and cytokinin treatments on:
Survival % (Table 1): The effect of medium was significant. Using MS medium gave rise to higher survival % compared to using B5 medium (41.13 and 14.44%, respectively).
Cytokinin treatments significantly affected the survival % of Vangueria shoots. The highest percentages were detected when BAP at 1 ppm or kinetin at 2 ppm was used (44.50 and 38.94%, respectively). The treatments that gave the second category of records were 2-iP at 1 ppm and kinetin at 1 ppm (33.33% for both treatments).
The interaction significantly affected survival %. The highest values resulted from incorporating MS medium with either BAP at 1 ppm or kinetin at 2 ppm (66.78 and 55.67%, respectively). The second position was occupied by the values obtained using MS medium free of cytokinins or augmented with either 2-iP at 1 ppm or kinetin at 1 ppm (44.44, 50.00 and 44.44%, respectively). The lowest record resulted when B5 medium was supplied with BAP at 3 ppm (5.55%). Shoots inoculated on B5 medium either free of cytokinins or fortified with BAP at 2 ppm did not survive at all.
Shoot number (Table 2):
Medium type significantly affected this character. Using MS medium gave rise to a higher shoot number compared to using B5 medium (2.2.53 and 0.32 shoots, respectively).
The effect of cytokinin treatments was significant. The highest shoot number was obtained when BAP at 3 ppm was used, followed in the second rank by the result of applying 2-iP at 2 ppm (1.75 and 1.42 shoots, respectively). The results of all other treatments occupied the lowest category.
The combined effect of cytokinin treatments and medium significantly influenced shoot number. The highest number was induced by fortifying B5 medium with BAP at 3 ppm, followed in the same category by the results of using MS medium supplied with BAP at 1 or 2 ppm, or with 2-iP at 2 ppm (2.2.67, 1.77 and 1.67 shoots, respectively).
Leaf number (Table 3):
The effect of medium type was significant. Using MS medium gave rise to a greater number of leaves compared to using B5 medium (4.53 and 1.23 leaves, respectively).
Cytokinin treatments significantly affected leaf number. The highest number was detected when BAP at 3 ppm was used. The interaction was also significant: the highest number resulted when BAP at 3 ppm was used, followed with significant differences by the results of applying BAP at 3 ppm, 2-iP at 1 ppm or 2-iP at 2 ppm (6.72, 5.53 and 6.00 leaves, respectively). The lowest value belonged to explants inoculated on cytokinin-free medium (2.67 leaves). Treatments adopting B5 medium, whether cytokinin-free or supplemented with BAP at 1 or 2 ppm, 2-iP at 1 or 3 ppm, or kinetin at 1, 2 or 3 ppm, failed to induce leaves at all.
Shoot length (Table 4):
Medium type significantly affected shoot length. MS medium resulted in longer shoots compared to B5 medium (2.48 and 0.32 cm, respectively).
The effect of cytokinin treatments on shoot length was significant. The highest values were observed when BAP at either 1 or 3 ppm, or 2-iP at 2 ppm, was used (1.57, 1.83 and 1.69 cm, respectively). The second position was occupied by kinetin at 1, 2 or 3 ppm (1.40, 1.38 and 1.39 cm, respectively). The lowest record belonged to BAP at 2 ppm (1.5 cm). The interaction was also significant. The longest shoots were induced when MS medium was combined with either BAP at 1 ppm or kinetin at 1, 2 or 3 ppm (3.13, 2.80, 2.77 and 2.78 cm, respectively). Shoots of the second rank resulted from using either cytokinin-free MS medium or the same medium fortified with 2-iP at 1, 2 or 3 ppm (2.35, 2.58, 2.38 and 2.42 cm, respectively). The shortest shoots were obtained when B5 medium supplied with 2-iP at 2 ppm was used (2.38 cm). Treatments using either cytokinin-free B5 medium, or B5 medium supplemented with BAP at 1 or 2 ppm, 2-iP at 1 or 3 ppm, or kinetin at 1, 2 or 3 ppm, failed to induce shoots at all.
Cytokinin treatments exerted a significant influence on shooting % of Vangueria explants. The highest percentages were produced when BAP at 1 ppm, or 2-iP at either 1 or 2 ppm, was used (27.83, 25.00 and 22.17%, respectively). The second position was occupied by kinetin at 2 ppm (19.44%). The lowest records in the same regard were obtained when either BAP at 2 ppm or 2-iP at 3 ppm was applied (9.67 and 9.72%, respectively). This interaction significantly affected shooting %. MS medium supplied with 1 ppm of either BAP or 2-iP gave rise to the highest percentages (55.67 and 50.00%, respectively). MS fortified with kinetin at 2 ppm induced the second rank in this concern (38.89%). The lowest values were observed when B5 medium was supplied with either BAP at 3 ppm or 2-iP at 2 ppm (16.77 and 13.89%, respectively). Treatments using either cytokinin-free B5 medium, or B5 medium supplemented with BAP at 1 or 2 ppm, 2-iP at 1 or 3 ppm, or kinetin at 1, 2 or 3 ppm, failed to induce shoots at all.
DISCUSSION
Concerning the effect of medium type, the best results were obtained using MS medium compared with B5. In this regard, two important factors may underlie the medium effect: the total concentration of nitrogen in the medium and the ratio of nitrate to ammonium ions. There is a high proportion of NH4+ nitrogen in MS medium (NO3-:NH4+ ratio of 66:34), and the quantity of total nitrogen is much higher than in the majority of other media (George et al., 2008). The high total N and NO3-/NH4+ ratio in MS basal salts may therefore explain the superiority of MS in the present work.

Concerning the effect of cytokinin type, in the micropropagation of numerous plants benzylaminopurine was found to be more effective than kinetin, N6-(2-isopentenyl)adenine and zeatin (Evaldsson and Welander, 1985). Papafotiou et al. (2010) mentioned that media supplemented with only BAP (0.5-1.0 ppm) favoured the development of shoots on Bauhinia variegata explants. Paudel and Pant (2012) remarked that increasing the BAP concentration from 0.5 to 2.0 mg/l induced the maximum number of shoots in Esmeralda clarkei. Sheelavantmath et al. (2000) revealed that BAP at 1.13 ppm induced multiple shoots of Geodorum densiflorum within 4 weeks of culture. Neelannavar et al. (2011) remarked that BAP at 1.0 mg/l produced a greater number of better-sized shoots of Vanilla planifolia. Nongdam and Chongtham (2011) noticed that shooting of Cymbidium aloifolium was best observed on MS medium supplemented with 1 mg/l BAP. Pant and Shrestha (2011) found that the maximum number of healthy Phaius tancarvilleae shoots was observed on MS with BAP (1.0 mg/l). Gati et al. (1991) planted explants of China grass (Boehmeria nivea) on MS medium + BA, kinetin or 2-iP at 0.1-0.7 mg/l; they showed that BA at 0.5 mg/l produced the best growth and the largest number of shoots, but the shoots were shorter than those produced with BA at 0.1 mg/l. Chuenboonngarm et al. (2001) successfully propagated shoot tips of Gardenia jasminoides on B5 agar medium with 0-10 ppm BAP or 2-iP; the number of shoots in medium with 10 ppm BA was 7 times greater than in medium without BAP, while the number of shoots in medium with 7.5 ppm 2-iP was 4 times greater than in 2-iP-free medium. Mansseri-Lamrioui et al. (2011) studied the effect of BAP, 2-iP and kinetin at 1-8 mg/l on wild cherry (Prunus avium) and found that BAP at 2-4 mg/l resulted in the highest percentage of sprouting, number of shoots and multiplication ratios, followed by 2-iP and then kinetin.
This increase in multiplication as a result of using BA could be ascribed to the stimulatory effect of BA on cell division and enlargement. BA enabled germinating seeds and many excised tissues to regenerate on synthetic media by promoting cell division and enlargement (Letham et al., 1978 and Dowiadar et al., 1996).
According to the results of this study, it could be observed that in most cases BAP was better than the other cytokinins for in vitro multiplication of Vangueria edulis. Variation in the activity of different cytokinins can be explained by differences in the uptake rates reported in different genomes (Blakesly, 1991), varied translocation rates to meristematic regions, and metabolic processes in which cytokinin may be degraded or conjugated with sugars or amino acids to form biologically inert compounds, as reported by Kaminek (1992).
A New Class of Asymptotically Non-Chaotic Vacuum Singularities
The BKL conjecture, stated in the 60s and early 70s by Belinski, Khalatnikov and Lifshitz, proposes a detailed description of the generic asymptotic dynamics of spacetimes as they approach a spacelike singularity. It predicts complicated chaotic behaviour in the generic case, but simpler non-chaotic one in cases with symmetry assumptions or certain kinds of matter fields. Here we construct a new class of four-dimensional vacuum spacetimes containing spacelike singularities which show non-chaotic behaviour. In contrast with previous constructions, no symmetry assumptions are made. Rather, the metric is decomposed in Iwasawa variables and conditions on the asymptotic evolution of some of them are imposed. The constructed solutions contain five free functions of all space coordinates, two of which are constrained by inequalities. We investigate continuous and discrete isometries and compare the solutions to previous constructions. Finally, we give the asymptotic behaviour of the metric components and curvature.
Singularities in general relativity
When Albert Einstein presented his theory of General Relativity in 1915 he did not give any non-trivial exact solutions to its field equations. Due to the complicated non-linear structure of the equations, he did not expect any to exist and calculated physical predictions using perturbation theory [1]. To his surprise, less than a month later, Karl Schwarzschild sent him a letter containing the Schwarzschild metric, a spherically symmetric solution of the vacuum Einstein equations. It is given, in Schwarzschild coordinates, as
$$ ds^2 = -\left(1-\tfrac{2m}{r}\right)dt^2 + \left(1-\tfrac{2m}{r}\right)^{-1}dr^2 + r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right). $$
This solution contains, in these coordinates, an apparent singularity at r = 2m where the rr component of the metric diverges. This is the event horizon, which Schwarzschild set as the origin of his coordinate system. The solution also contains a real singularity at r = 0, which was found by David Hilbert in 1917. Hilbert considered both singularities real, as they could not be removed by an everywhere smooth and invertible coordinate transformation. In hindsight his requirement was too strict: the fact that in Schwarzschild coordinates the rr component of the metric diverges at r = 2m simply means that these coordinates are badly chosen; indeed, to transform from coordinates which don't show the apparent singularity to Schwarzschild coordinates requires a transformation which diverges at r = 2m. In 1921 and 1922 Paul Painlevé and Allvar Gullstrand independently discovered a spherically symmetric vacuum solution containing only a single singularity at r = 0 [2,3]. It was not realized at the time that this solution can be obtained from the Schwarzschild one by a coordinate transformation, i.e. it describes the same physical spacetime. This was finally discovered by Georges Lemaître in 1932, who also correctly identified the r = 2m singularity as an apparent singularity caused by the choice of coordinates [4]. The singularity at r = 0 cannot be removed by a coordinate transformation, as the Kretschmann scalar $R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma\delta}$ (equal to $48m^2/r^6$ for the Schwarzschild solution) diverges there. This is a scalar quantity, constructed by contracting all indices of the Riemann tensor with itself, and is therefore independent of the chosen coordinate system.
Despite this advance, the status of real singularities, such as the one appearing in the Schwarzschild or the cosmological FLRW solutions, was unclear. It was widely believed that they were an artifact of the symmetry assumptions made to obtain explicit solutions and had no relevance for the real world [5]. The idea was that, similarly to the Newtonian case, if matter was not perfectly symmetrically rushing towards a central point, the resulting angular momentum would prevent the formation of a singularity.
The singularity theorems of Penrose and Hawking [6,7] proved the opposite. They state that, given a trapped surface, an energy condition, and an assumption on the global structure of spacetime (e.g. no closed timelike curves), a singularity, in the sense of geodesic incompleteness, has to form. As small perturbations of an explicit solution containing a singularity would preserve the trapped surface, the perturbed solution also contains a singularity. These theorems, however, do not give any information about the nature of the predicted singularities, or about the behaviour of the metric near them. Indeed they do not even predict diverging curvature, only the existence of some geodesics, which cannot be extended beyond a finite value of the affine parameter along them.
The BKL conjecture
In a series of works, beginning in 1963, Belinski, Khalatnikov and Lifschitz (BKL) conjectured, based on heuristic arguments, that the dynamics of a generic spacetime containing a spacelike singularity would drastically simplify when the singularity is approached [8,9]. They claimed that time derivatives of the metric would dominate compared to space derivatives, causing different spatial points to effectively decouple and turning the Einstein equations into a system of ODEs at each point. The solution of these ODEs is a generalisation of the Kasner metric, an explicit, homogeneous (but anisotropic) solution of the Einstein equations describing a spacetime which expands in some directions and contracts in others. It is given by
$$ ds^2 = -dt^2 + \sum_{j=1}^{d} t^{2p_j}\,(dx^j)^2 , \qquad \sum_j p_j = \sum_j p_j^2 = 1 . $$
The behaviour predicted by BKL consists of a series of time periods (often referred to as Kasner epochs) during which the metric behaves at each spatial point as the Kasner metric, but with spatially varying exponents. At the end of a Kasner epoch the Kasner exponents $p_j$ change rapidly to a new configuration, causing an "oscillation" as previously expanding directions contract. As the singularity is approached, the Kasner epochs get shorter and shorter and the transition between epochs becomes sharper.
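For concreteness, the snippet below checks the two Kasner constraints and iterates the epoch-to-epoch transition using the standard one-parameter (u) description of the 3+1-dimensional Kasner exponents from the BKL literature; this parametrization and the transition map are textbook material and are not taken from this paper.

```python
import numpy as np

def kasner_exponents(u):
    """Standard one-parameter family of 3+1 Kasner exponents (BKL literature)."""
    denom = 1.0 + u + u * u
    return np.array([-u, 1.0 + u, u * (1.0 + u)]) / denom

def bkl_transition(u):
    """One epoch-to-epoch transition of the standard BKL map (chaotic regime)."""
    return u - 1.0 if u >= 2.0 else 1.0 / (u - 1.0)

u = 5.37
for epoch in range(6):
    p = kasner_exponents(u)
    # Both Kasner constraints hold to machine precision for every u.
    print(f"u = {u:8.4f}  p = {np.round(p, 4)}  "
          f"sum = {p.sum():.4f}  sum of squares = {(p**2).sum():.4f}")
    u = bkl_transition(u)
```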
Chitre [10] and Misner [11] introduced a representation of the BKL behaviour as a (chaotic) billiard motion in an auxiliary space of the same number of dimensions as the space part of the spacetime. A "particle", representing some parts of the metric, moves along straight, null, lines in a flat Lorentzian space and is elastically reflected off of (asymptotically) infinitely high potential walls. The straight line motion represents a Kasner epoch while the (asymptotically) sharp reflections correspond to the transitions between epochs. This billiard approach is described in detail by Damour, Henneaux and Nicolai in [12].
Rigorous results concerning this chaotic case of the BKL conjecture are sparse: The only known example of a spacetime which shows the full chaotic BKL behaviour was constructed by Berger and Moncrief [13]. They applied a solution generating transformation to a homogeneous cosmological solution, yielding a U(1) symmetric one. The resulting solution shows chaotic behaviour but it is very restricted, containing no free functions, and only three arbitrary constants.
Numerical investigations do, however, provide strong evidence supporting the BKL conjecture [14]. More recent simulations have shown that, while generically the spatial derivatives do become negligible, there are exceptional points at which they instead increase exponentially, giving spikes in the metric components [15]. In the class of Gowdy spacetimes explicit (non-chaotic) solutions exhibiting this behaviour have been found [16]. The appearance of these spikes hints at more complicated detailed behaviour within the general dynamics predicted by BKL.
Asymptotically simple behaviour
Belinski and Khalatnikov argued that coupling a massless scalar field to the Einstein equations would reduce the BKL behaviour to a simpler, non-oscillatory one, described by a single Kasner epoch, which is sometimes called AVTD (Asymptotically Velocity Term Dominated) [17]. This was rigorously proven, including the case of a stiff fluid, by Andersson and Rendall [18].
If a p-form field is added to the scalar one, the resulting behaviour is either simple (single Kasner epoch) or chaotic, depending on the coupling constant between them. This was shown by Damour, Henneaux, Rendall and Weaver [19].
In the billiard picture, the addition of matter increases the dimension of the auxiliary space, as the particle describes not only the metric components but also the values of the matter fields. In addition, the evolution equations for the matter fields add additional potential walls. If a null line in the auxiliary space, which does not intersect any of the walls, exists, the resulting behaviour is simple, as a single Kasner epoch lasts up to the singularity.
The addition of matter is not necessary for non-chaotic behaviour: Demaret, Henneaux and Spindel [20] argued, using similar heuristic arguments as BKL, that in 10 or more spatial dimensions AVTD behaviour is generic.
Even in lower dimensions, where BKL predict chaotic behaviour in the generic case, solutions which show non-chaotic behaviour exist. They are characterized by symmetry assumptions or conditions on their asymptotics. These assumptions cause some of the potential walls in the billiard picture to vanish at least asymptotically.
This reduction was first proven for the polarized Gowdy subclass of the T^2 symmetric spacetimes by Chruściel, Isenberg and Moncrief [21,22]. It was later extended to a larger class of Gowdy spacetimes by Kichenassamy and Rendall [23] using a newly introduced "Fuchsian" method, which was then applied to more general T^2 symmetric spacetimes by Isenberg and Kichenassamy [24]. An extension to so-called "half-polarized" T^2 spacetimes was achieved by Clausen and Isenberg [25]. Ames, Beyer, Isenberg and LeFloch [26] extended the previous results on T^2 spacetimes to lower regularity. All results on T^2 symmetric spacetimes focused on the case of 3 + 1 dimensions.
The results obtained using Fuchsian methods do not necessarily provide generic solutions in the class of metrics considered, only the existence of families of solutions which contain a number of arbitrary functions. As these functions specify the asymptotic behaviour of the solution, there is no obvious link with functions in the initial data. Within the class of Gowdy spacetimes, genericity of AVTD behaviour was proven by Ringström [30].
This work
All previous results on simple behaviour in the vacuum case were obtained by starting with an ansatz for the metric which included one or more continuous symmetries. In the billiard picture this causes one or more of the walls to vanish identically at all times.
Here a new class of non-chaotic vacuum solutions will be constructed without starting from such an ansatz. Instead, the decay of certain parts of the metric, defined by writing it in so-called Iwasawa variables, will be required. This causes some of the walls in the billiard picture to vanish asymptotically. The approach is based on work by Damour and de Buyl [31] who gave a precise statement of the BKL conjecture using this decomposition of the metric. Their work is an extension of [12] by Damour, Henneaux and Nicolai. The new class of solutions includes the polarized Gowdy ones, but not the other classes mentioned above. It is at the same time more general, as it includes free functions which depend on all space coordinates, and more specific, as some asymptotically free functions in e.g. the "half-polarized" T^2 case are here assumed to become constants in space.
In sections 2 to 7 the relevant parts of [31] are described: Section 2 describes the conventions and choice of gauge used and introduces the Iwasawa decomposition of the metric. In sections 3 and 4 the action and Hamiltonian in the Iwasawa variable form are given, including the potential "walls". Section 6 states the Fuchs theorem, which is the main tool used in the construction of the new class of solutions. In section 7 the evolution equations are written in Iwasawa form and the approach to constructing solutions with specified asymptotic behaviour, as used e.g. by Rendall, is detailed. Finally, in section 8 the new class of solutions is constructed. In section 9 the new solutions are analysed: In sections 9.1 and 9.2 possible isometries of the solutions are investigated and in section 9.3 their relationship with previously known classes is described.
In the appendices some of the calculations are given in more detail: In Appendix A the derivation of the Iwasawa form of the Hamiltonian is given in full. Appendix B contains comments on the form of the evolution equations used. In Appendix C the Iwasawa form of the momentum constraint equations is derived, following [31]. In Appendix D the evolution equations for the constraints are derived in the chosen gauge. Appendix E gives the asymptotic behaviour of the metric components and curvature for the new class of solutions. Appendix F shows that the new solutions can be constructed for arbitrary values of the cosmological constant.
Conventions, Iwasawa decomposition
We work with a (−, +, . . . , +) signature. Greek indices α, β, γ, . . . run from 0 to d, Latin ones a, b, c, . . . from 1 to d, where D = d + 1 is the spacetime dimension. The metric (in D = d + 1 dimensions) is written in the form
$$ ds^2 = -N^2\, d\tau^2 + g_{ij}\, dx^i dx^j , $$
i.e. with vanishing shift vector and lapse $N(\tau, x^i) = \sqrt{\det g_{ij}}$. This pseudo-Gaussian gauge has the unusual property that changes of the spatial coordinates also change the slicing of the spacetime.
The spatial metric is then decomposed into Iwasawa variables $\beta^a$ and $N^a{}_i$ as
$$ g_{ij} = \sum_{a=1}^{d} e^{-2\beta^a}\, N^a{}_i\, N^a{}_j . $$
Here the $\beta^a$ and $N^a{}_i$ are functions of all coordinates (including time), and $N^a{}_i$ vanishes for all a > i and is 1 for a = i (i.e. $N^a{}_i$ is upper triangular with ones on the diagonal). As the determinant of N is 1, the determinant of the spatial metric only depends on the $\beta^a$ and is given by
$$ \det g_{ij} = e^{-2\sum_a \beta^a} . $$
The $\beta^a$ are referred to as "diagonal degrees of freedom" while the $N^a{}_i$ are the "off-diagonal degrees of freedom" (in fact both are relevant for all the metric components except $g_{11}$).
This decomposition corresponds to a Gram-Schmidt orthogonalization of the coordinate coframe $dx^i$.
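A minimal NumPy sketch of this splitting, under the upper-triangular conventions just stated, is given below; it recovers β^a and N^a_i from a given positive-definite spatial metric via a Cholesky factorization, which is equivalent to the Gram-Schmidt procedure mentioned above. The test metric is an arbitrary example, not data from the paper.

```python
import numpy as np

def iwasawa(g):
    """Split a positive-definite spatial metric g into (beta, N) with
    g = N^T diag(exp(-2 beta)) N and N unit upper triangular."""
    L = np.linalg.cholesky(g)      # g = L L^T, L lower triangular
    R = L.T                        # g = R^T R, R upper triangular
    d = np.diag(R)                 # positive diagonal of R
    N = R / d[:, None]             # unit upper triangular factor
    beta = -np.log(d)              # since exp(-2 beta_a) = d_a^2
    return beta, N

# Quick self-check on a random 3x3 positive-definite metric.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
g = A @ A.T + 3 * np.eye(3)
beta, N = iwasawa(g)
assert np.allclose(N.T @ np.diag(np.exp(-2 * beta)) @ N, g)
print("beta =", beta)
print("N =\n", np.round(N, 4))
```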
The Iwasawa variables $\beta^a$ and $N^a{}_i$ have the advantage that they explicitly separate parts of the metric which have different asymptotic behaviour: as we will see later, the $N^a{}_i$ go to constants as τ → ∞ while the $\beta^a$ approach linear functions.
The Iwasawa coframe and its dual frame are defined as
$$ \theta^a = N^a{}_i\, dx^i , \qquad e_a = (N^{-1})^i{}_a\, \partial_i . $$
The structure functions of the Iwasawa coframe, denoted $C^a{}_{bc}$, are defined by
$$ d\theta^a = -\tfrac{1}{2}\, C^a{}_{bc}\, \theta^b \wedge \theta^c $$
and are given in terms of the $N^a{}_i$ as
$$ C^a{}_{bc} = (N^{-1})^i{}_b\, (N^{-1})^j{}_c \left( \partial_j N^a{}_i - \partial_i N^a{}_j \right) . $$
In the $\theta^a$ coframe the spatial metric takes the diagonal form
$$ g_{ij}\, dx^i dx^j = \sum_a e^{-2\beta^a}\, (\theta^a)^2 . $$
Action and Hamiltonian
Starting from the Einstein-Hilbert action
$$ S = \int d^{D}x\, \sqrt{-\bar g}\; \bar R , $$
where $\bar g_{\mu\nu}$ is the spacetime metric with determinant $\bar g$ and $\bar R$ its Ricci scalar, the action can be written in Hamiltonian form as
$$ S = \int d\tau\, d^{d}x \left( \pi^{ij}\, \partial_\tau g_{ij} - N\,\mathcal{H} - N^i\,\mathcal{H}_i \right) , $$
where the $\pi^{ij}$ are the conjugate momenta to the spatial metric components, defined by $\pi^{ij} = \partial\mathcal{L}/\partial(\partial_\tau g_{ij})$, and $\mathcal{H}$ is the Hamiltonian density
$$ \mathcal{H} = \frac{1}{\sqrt{g}}\left( \pi^{ij}\pi_{ij} - \frac{1}{d-1}\,\pi^2 \right) - \sqrt{g}\, R , \qquad g = \det g_{ij}, \quad \pi = g_{ij}\pi^{ij} , $$
with R the Ricci scalar of the spatial metric. This derivation is done e.g. in Appendix E of Wald [32]. The Hamiltonian density can now be written in terms of the Iwasawa variables and their conjugate momenta $\pi_a$, corresponding to $\beta^a$, and $P^i{}_a$, corresponding to $N^a{}_i$ (note $P^i{}_a = 0$ for a ≥ i), which are defined as
$$ \pi_a = \frac{\partial\mathcal{L}}{\partial(\partial_\tau \beta^a)} , \qquad P^i{}_a = \frac{\partial\mathcal{L}}{\partial(\partial_\tau N^a{}_i)} . $$
This gives
$$ \mathcal{H} = K + \mathcal{V} , \qquad K = \tfrac{1}{4}\, G^{ab}\, \pi_a \pi_b , \qquad \mathcal{V} = \sum_A c_A\, e^{-2\,\omega_A(\beta)} , \tag{3.2} $$
where $G^{ab} = (\delta^{ab}(d-1) - 1)/(d-1)$, $N = (N^a{}_i)$ and $P = (P^i{}_a)$. The d × d matrix $G^{ab}$ is the inverse of $G_{ab} = -\sum_{c \neq d}\delta^c_a\delta^d_b = \delta_{ab} - 1$, which will appear later. In d = 3 dimensions they are explicitly given by
$$ G_{ab} = \begin{pmatrix} 0 & -1 & -1 \\ -1 & 0 & -1 \\ -1 & -1 & 0 \end{pmatrix} , \qquad G^{ab} = \frac{1}{2}\begin{pmatrix} 1 & -1 & -1 \\ -1 & 1 & -1 \\ -1 & -1 & 1 \end{pmatrix} . $$
The sum in the second term of (3.2) contains the potential "walls" which will be discussed in detail in the next section. The derivation of (3.2), including the individual terms in the second part, is given in Appendix A. The kinetic term K only contains the conjugate momenta of the diagonal $\beta^a$ variables; the ones for the $N^a{}_i$ are included in the "potential" term $\mathcal{V}$. This makes sense because asymptotically the $N^a{}_i$ tend to constants while the $\beta^a$ show linear behaviour, as will be demonstrated later.
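The following short check confirms, for a few dimensions, that the two matrices quoted above are indeed inverses of each other; it uses only the formulas stated in the text.

```python
import numpy as np

def G_lower(d):
    """G_ab = delta_ab - 1: zeros on the diagonal, -1 off the diagonal."""
    return np.eye(d) - np.ones((d, d))

def G_upper(d):
    """G^ab = delta^ab - 1/(d-1), quoted in the text as the inverse of G_ab."""
    return np.eye(d) - np.ones((d, d)) / (d - 1)

for d in (3, 4, 10):
    assert np.allclose(G_lower(d) @ G_upper(d), np.eye(d))
    # Eigenvalues 1-d (once) and 1 (d-1 times): a Lorentzian signature metric.
    print(d, np.round(np.linalg.eigvalsh(G_lower(d)), 6))
print(G_lower(3), G_upper(3), sep="\n\n")
```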
The potential walls
The structure of the potential term $\mathcal{V}$ in the Hamiltonian density (3.2) is crucial for the asymptotic behaviour. It is that of a sum, with each term consisting of a prefactor which, importantly, does not depend on $\beta^a$, and an exponential term of the form $\exp(-2\omega_A(\beta))$, where $\omega_A$ is some linear form depending on the wall in question. Depending on the kind of wall, the index A can be a single index or a multi-index. The walls are split into two categories: the so-called "dominant" and "subdominant" walls. The dominant ones are defined as the minimal set of walls such that if their linear forms are positive, all the others are as well. Crucially for the billiard picture, the coefficients $c_A$ are positive for the dominant walls.
The form of (3.2) allows the following "billiard" interpretation of the asymptotic dynamics (e.g. [12]): A "particle" with coordinates $\beta^a$ moves through a Lorentzian space with metric $G_{ab}$ (from the kinetic part K of the Hamiltonian) in a potential of the form $\mathcal{V}$. The behaviour of the summands in the potential is dominated by the exponential terms $\exp(-2\omega_A(\beta))$. The $\beta^a$ can be decomposed as $\beta^a = \rho\gamma^a$ with $G_{ab}\gamma^a\gamma^b = -1$, and a heuristic argument, in analogy to the exact Kasner solution, gives ρ → ∞ as τ → ∞. In the limit, the potential walls become infinitely sharp as $-2\omega_A(\beta) = -2\rho\,\omega_A(\gamma) \to \pm\infty$. As long as $\omega_A(\beta) > 0$ the potential is negligible and the $\beta^a$ evolve linearly. At the points where $\omega_A(\beta)$ becomes negative the potential diverges and, because $c_A > 0$ for the dominant walls, the particle is reflected. The subdominant walls do not influence the behaviour as they lie behind the dominant ones.
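A toy numerical sketch of this sharp-wall limit is given below for d = 3, using the three dominant wall forms derived in the following subsections and a geometric reflection with respect to G; the initial data, step size and wall list are illustrative assumptions, meant only to visualize the free flight and reflections, not to reproduce any computation from the paper.

```python
import numpy as np

d = 3
G_up = np.eye(d) - np.ones((d, d)) / (d - 1)   # G^ab, raises wall covectors

# Dominant walls for d = 3 vacuum gravity, as covectors w with omega(beta) = w . beta.
walls = np.array([[-1.0, 1.0, 0.0],   # beta^2 - beta^1
                  [0.0, -1.0, 1.0],   # beta^3 - beta^2
                  [2.0, 0.0, 0.0]])   # 2 beta^1

def reflect(v, w):
    """Geometric reflection of the velocity v in the wall w, with respect to G."""
    w_up = G_up @ w
    return v - 2.0 * (w @ v) / (w @ w_up) * w_up

beta = np.array([0.3, 0.6, 1.1])          # starts inside the billiard chamber
v = np.array([-2.0, 3.0, 6.0]) / 7.0      # Kasner-like, null with respect to G
dt = 1e-3
for step in range(20000):
    beta = beta + dt * v
    for w in walls:
        # Reflect only when the wall form is crossed while moving into the wall.
        if w @ beta < 0.0 and w @ v < 0.0:
            v = reflect(v, w)
            print(f"bounce at step {step}: new v = {np.round(v, 4)}")
print("final beta direction:", np.round(beta / np.linalg.norm(beta), 4))
```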
This picture depends on the assumption that ρ → ∞ as τ → ∞ and that none of the walls vanish (either completely or asymptotically). The following does not depend on these assumptions, as the billiard picture will not be used.
In the vacuum case there are two types of potential walls: The "symmetry walls", coming from the kinetic terms of the off-diagonal metric components and the "gravitational walls" coming from the curvature term in the Hamiltonian density. The derivation of their exact form is given in Appendix A, here only the result is stated.
Symmetry walls
These come from the parts of the first two terms in the Hamiltonian density (3.1) which are not contained in the kinetic term K in (3.2). The part of $\mathcal{V}$ containing the symmetry walls is
$$ \mathcal{V}_{\rm sym} = \sum_{a<b} \tfrac{1}{2}\left( P^j{}_a N^b{}_j \right)^2 e^{-2\,\omega^{\rm sym}_{ab}(\beta)} , $$
where the multi-index A from (3.2) is (a, b) and runs over all a, b ∈ {1, . . . , d}, a < b. The coefficients $(c_A) = (c_{ab})$ are given by $(P^j{}_a N^b{}_j)^2/2$ and the linear forms $(\omega_A) = (\omega^{\rm sym}_{ab})$ by
$$ \omega^{\rm sym}_{ab}(\beta) = \beta^b - \beta^a , \qquad a < b . $$
The walls with the forms $\omega^{\rm sym}_{a\,a+1}$ are the dominant ones among the symmetry walls, because if they are positive then $\beta^{a+1} > \beta^a$ for all a and therefore all the other $\omega^{\rm sym}_{ab}(\beta)$, a < b, are positive as well.
Gravitational walls
These come from the curvature term in the Hamiltonian density (3.1). The gravitational walls split into two classes. The contribution to $\mathcal{V}$ coming from the first class is given by
$$ \mathcal{V}_{\rm grav,1} = \tfrac{1}{4}\sum_{a \neq b \neq c \neq a} (C^a{}_{bc})^2\, e^{-2\,\alpha_{abc}(\beta)} , \qquad \alpha_{abc}(\beta) = 2\beta^a + \sum_{e \neq a,b,c} \beta^e , $$
i.e. the index A = (a, b, c) is a multi-index running over all a, b, c ∈ {1, . . . , d}, a ≠ b ≠ c ≠ a. For d = 3 the sum in the expression for $\alpha_{abc}(\beta)$ vanishes: there are only three possible values for the indices, which are all occupied by a, b and c, leaving no possible value for e. In this case only $\alpha_{abc}(\beta) = 2\beta^a$ remains. The second class of gravitational walls has a more complicated form; their contribution is
$$ \mathcal{V}_{\rm grav,2} = \sum_{a} F_a\, e^{-2\,\mu_a(\beta)} , $$
where the prefactors $F_a$ involve the Iwasawa frame derivatives of the $\beta^a$ and $N^a{}_i$ (all sums explicitly indicated), and where the comma denotes the Iwasawa frame derivative $e_a$, defined in (2.2) and given in terms of partial derivatives as $X_{,a} = (N^{-1})^i{}_a\,\partial_i X$. Here A is a single index a ∈ {1, . . . , d}. This term contains second derivatives of $\beta^a$ and $N^a{}_i$ (it contains first derivatives of $C^a{}_{bc}$, which contain first derivatives of $N^a{}_i$), as expected from a curvature expression. The linear forms $\mu_a$ of the second class can be written as a linear combination of the ones of the first class, $\alpha_{abc}$, by $\mu_c = (\alpha_{abc} + \alpha_{bca})/2$.
This means the first class of walls is dominant and the second subdominant. This is fortunate as the coefficients of the second class of walls, F a , can be negative while those of the first class, (C a bc ) 2 /4, are always positive.
Complete Hamiltonian in Iwasawa variables
The complete Hamiltonian density in Iwasawa form (equation (3.2) with the expressions for the walls inserted) is

H = K + Σ_{a<b} (1/2)(P^j_a N^b_j)² e^{−2(β^b−β^a)} + Σ_{a≠b≠c≠a} (1/4)(C^a_bc)² e^{−2α_abc(β)} + Σ_a F_a e^{−2µ_a(β)},  (4.6)

where the derivative operator ",a" is defined as X_{,a} = (N^{−1})^i_a ∂_i X.
Equations of motion and constraints
For a Hamiltonian density of the form H[q(x, t), p(x, t), ∂_x q, ∂²_x q] the evolution equations are given by the functional derivatives q̇ = δH/δp and ṗ = −δH/δq.
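Spelled out for a density depending on the fields and their first and second spatial derivatives (a standard variational formula, stated here for convenience rather than quoted from the paper):

q̇ = ∂H/∂p,  ṗ = −∂H/∂q + ∂_i (∂H/∂(∂_i q)) − ∂_i ∂_j (∂H/∂(∂_i ∂_j q)),

where the spatial-derivative terms arise from integrating by parts in the variation of H = ∫ H d^d x.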
Here the variation is taken after choosing lapse and shift, which depend on the metric (the lapse is given by √det g). Appendix B shows that this does not change the resulting equations. In the case of the Iwasawa variable Hamiltonian (3.2) this leads to the evolution equations (5.1), where the components (ω_A)_a of the linear forms ω_A appearing in the momentum equations are defined as (ω_A)_a = ∂ω_A(β)/∂β^a. The Hamiltonian and momentum constraints are (5.2) and (5.3); the Iwasawa variable form (5.3) of the momentum constraints is derived in Appendix C.1.
Fuchsian systems

Definition 1 (Fuchsian system). A Fuchsian system is a system of first order partial differential equations on V = M × R, M an analytic manifold which can be extended to a complex analytic manifold M̃, of the form

t ∂_t u + A(x) u = t f(t, x, u, D_x u),  (6.1)

with f linear in the first order spatial covariant derivative D_x u, A and f extendable to holomorphic maps in x and u (on M̃) and continuous in t, and with the real parts of all eigenvalues of A greater than −1.
Theorem 6.2 (Fuchs theorem). A Fuchsian system has a unique solution u, analytic in x ∈ M, C¹ in t and such that u = 0 for t = 0, in a neighbourhood of M̃ × {0}.
Replacing t in (6.1) by t^{1/µ} gives a system of the same form with A replaced by µ^{−1}A and, as the eigenvalues of µ^{−1}A are simply those of A divided by µ, the following corollary holds.
Corollary 6.3. The theorem holds for the correspondingly rescaled system, with the condition on the eigenvalues of A becoming Re λ > −µ.

The condition that f be linear in the spatial derivatives D_x u can be relaxed to admit an arbitrary analytic dependence by adding v := D_x u as a new variable. Differentiating (6.1) gives an evolution equation for v, which is linear in D_x v. Together with (6.1) this is a system of the form (6.3) for û = (u, v), with a block lower triangular matrix Â having A in both diagonal blocks. The eigenvalues λ of Â fulfil det(Â − λ1) = det(A − λ1)² = 0 and are therefore exactly the eigenvalues of A. Therefore the system (6.3) fulfils the conditions of definition 1 and is Fuchsian, provided f̂ depends analytically on û, i.e. f depends analytically on D_x u, and D_x A is uniformly bounded.
Corollary 6.4. The existence theorem 6.2 holds for f depending analytically on D_x u, provided D_x A is uniformly bounded.

A change of variables t = e^{−µτ} in (6.1), with M a domain in R^n, gives the form of the theorem used here:

∂_τ u = A u + e^{−µτ} f̃(τ, x, u, D_x u),  (6.4)

with A analytic in x and uniformly bounded, µ > 0, f̃ analytic in x, u and D_x u, continuous in τ and bounded in τ for τ → ∞, and with the real parts of all eigenvalues λ of A satisfying λ > −µ. The matrices we will consider in the following will be constant, and therefore the relevant conditions will be the boundedness of f̃ as τ → ∞ and the condition λ > −µ on the eigenvalues of A.
In order to obtain a more precise description of the decay of the solution, we define ū = e^{ντ}u, 0 < ν < µ. (6.4) becomes

∂_τ ū = (A + ν1)ū + e^{−(µ−ν)τ} f̃(τ, x, e^{−ντ}ū, e^{−ντ}D_x ū),  (6.5)

which is again Fuchsian, as the conditions on f̃ are unaffected and the eigenvalues of A + ν1 are shifted up to compensate the change in µ.
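The computation behind (6.5) is the same rescaling trick used repeatedly below (for π̃_a, for the N-variables and for the constraints); written out once, under the form (6.4) assumed above:

∂_τ ū = ∂_τ(e^{ντ}u) = ν ū + e^{ντ}∂_τ u = ν ū + e^{ντ}(Au + e^{−µτ}f̃) = (A + ν1)ū + e^{−(µ−ν)τ} f̃,

so a decay rate ν is traded for a shift of the eigenvalues by +ν and a reduction of the source decay from µ to µ − ν; the condition on the shifted eigenvalues, λ + ν > −(µ − ν) + ν, reduces to the original λ > −µ.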
Strategy and evolution equations
We will use the strategy introduced by Kichenassamy and Rendall in [23] and used in [31] to prove the existence of solutions of the Einstein equations with non-chaotic asymptotics. As a first step we consider a simplified system of evolution equations, which is supposed to model the asymptotic behaviour and which can be easily solved. Then we write down the equations for the differences between solutions of this system and those of the full one, following from the full evolution equations. If this system can be shown to be Fuchsian (i.e. if it is of the form (6.4)) then, by the Fuchs theorem, a unique asymptotically vanishing solution exists. This implies that a solution of the full system of equations exists which asymptotically approaches the specified solution of the simplified system. The constraints will be treated separately, in sections 7.2 and 7.3. Quantities relating to the asymptotic system will be marked with a subscript [0], e.g. β^a_[0]. The Hamiltonian of the asymptotic system is obtained by discarding all wall terms in the full Hamiltonian (3.2), leaving only the kinetic part K. This gives the asymptotic evolution equations, whose general solution is

β^a_[0] = p^a•(x)τ + β^a•(x),  π_[0]a = const,  N^a_[0]i = N^a•_i(x),  P^[0]i_a = P^i•_a(x).  (7.2)

Now, the differences β̃^a, π̃_a, Ñ^a_i and P̃^i_a are defined as the real solutions minus the asymptotic ones (e.g. β̃^a = β^a − β^a_[0]). Inserting them into the full evolution equations (5.1) gives the equations (7.3) for the differences. This system of equations is not directly in Fuchsian form, as the right-hand side contains second order spatial derivatives of the variables. By defining B^a_j := ∂_j β̃^a and N^a_ij := ∂_j Ñ^a_i these can be expressed as first order derivatives. This is only possible if the β̃ and Ñ equations (7.3a) and (7.3c) do not contain spatial derivatives, as otherwise new second derivative terms would appear in the evolution equations for the new variables. The β̃ equation (7.3a) obviously doesn't contain spatial derivatives, while for the Ñ equation (7.3c) the sum over the walls only includes the symmetry walls with coefficients (P^j_a N^b_j)²/2, as the others are independent of P. The additional evolution equations for the new variables are (7.5) and (7.6). To ensure that the right-hand side of the equation for B^a_j decays appropriately, we replace the difference variable for the momenta by the rescaled π̃_a := e^{ετ}(π_a − π_[0]a) with ε > 0, which modifies the evolution equation for B^a_j accordingly. The evolution equation for π̃_a, which replaces equation (7.3b), is then (7.7), with the additional term on the left-hand side and the exponential factor on the right-hand side coming from ∂_τ π̃_a = ∂_τ(e^{ετ}(π_a − π_[0]a)) = ε π̃_a + e^{ετ}∂_τ(π_a − π_[0]a). The full system of equations is now given by the 2d + d(d − 1) + d² + d²(d − 1)/2 equations (7.3a), (7.7), (7.3c), (7.3d), (7.5) and (7.6).
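For completeness, here is how the asymptotic solution (7.2) follows; this is a short derivation sketch, assuming only that K is quadratic in the momenta π_a with a constant (DeWitt-type) metric G^{ab} and independent of N^a_i and P^i_a (the normalization of G^{ab} is immaterial):

∂_τ β^a_[0] = ∂K/∂π_a = (1/2)G^{ab}π_[0]b =: p^a•,  ∂_τ π_[0]a = −∂K/∂β^a = 0,
∂_τ N^a_[0]i = ∂K/∂P^i_a = 0,  ∂_τ P^[0]i_a = −∂K/∂N^a_i = 0,

so the momenta and frame variables are constant in τ and the β^a_[0] are affine in τ, which is the Kasner-like behaviour underlying the whole construction.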
The asymptotic behaviour of the terms on the right-hand side is dominated by the exponential factors exp(−2ω_A(β_[0])) = exp(−2ω_A(p•)τ) exp(−2ω_A(β•)). If ω_A(p•) > 0 for all A (for all walls), the system fulfils the decay condition required by the Fuchs theorem (Corollary 6.5). Equation (7.7) includes the exponentially growing term e^{ετ}, but as ε can be chosen arbitrarily small, and therefore smaller than the minimum of the ω_A(p•), this does not affect the conditions.
In order to be a Fuchsian system, the condition on the matrix A also has to be fulfilled. For this system the matrix A is constant and diagonal, with entry ε for the π̃_a and entry 0 for all other variables, so its eigenvalues are 0 and ε > 0 and the condition is satisfied for any µ > 0. In addition to the conditions ω_A(p•) > 0, the asymptotic Hamiltonian constraint, defined as

H• := G_ab p^a• p^b• = 0,  (7.8)

also constrains the values of the p^a•. For vacuum in dimension d < 10 the conditions ω_A(p•) > 0 ∀A cannot be satisfied together with the asymptotic Hamiltonian constraint H• = 0 [20]. Therefore it is expected (e.g. [31]) that the generic solution in the vacuum case is chaotic.
In d = 3 dimensions it is easy to see why the conditions are not compatible: the linear forms of the dominant walls (these are the only relevant ones) are

β² − β¹,  β³ − β²,  2β¹  (7.9)

(two symmetry walls from (4.2) and one dominant gravitational wall from (4.3)). The asymptotic Hamiltonian constraint is

H• = (p¹•)² + (p²•)² + (p³•)² − (p¹• + p²• + p³•)² = 0.  (7.10)

The condition that the three linear forms (7.9) are greater than 0 implies p³• > p²• > p¹• > 0 and therefore H• < 0, which conflicts with the Hamiltonian constraint (7.10).
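To see the sign of H• explicitly (one line of algebra with the quadratic form written above):

Σ_a (p^a•)² − (Σ_a p^a•)² = −2(p¹•p²• + p¹•p³• + p²•p³•),

which is strictly negative whenever all p^a• are positive; hence positivity of the wall forms forces H• < 0.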
Asymptotic momentum constraints
The asymptotic momentum constraints are obtained from the full momentum constraints by discarding the contribution of the strictly upper triangular part of π̃^b_a (see Appendix C.2). This gives the constraints (7.11), where C^a_[0]bc are the structure functions of the asymptotic Iwasawa coframe, defined as in (2.3). Their τ-derivative vanishes if the asymptotic Hamiltonian constraint is fulfilled, because the matrix G_ab is symmetric and constant. The asymptotic constraints are therefore preserved under the asymptotic evolution given by (7.1).
Relationship between asymptotic and full constraints
We want to show that if the solution (7.2) of the asymptotic evolution system fulfils the asymptotic constraints, the corresponding solution of the full evolution equations fulfils the full constraints (5.2), (5.3).
The evolution equations for the full constraints, coming from the full evolution equations (5.1) in Iwasawa variables, are (7.12), with H̃_a = e^{2β^a}H_a (derivation in Appendix D). The right-hand side of (7.12) can be rewritten in terms of the exponentials e^{−2µ_a(β)}, with µ_a(β) = Σ_{b≠a} β^b the subdominant gravitational wall forms. Defining H̃ = e^{ητ}H, with η > 0, gives the system (7.14), which is Fuchsian if η < 2µ_a(β). The term ∇_a H in the second equation is equal to g∂_a(H/g) + H·O(C^a_bc) = g∂_a(H/g) + H·O(1), where the first part comes from the density character of H and the second from the connection coefficients in a non-coordinate basis. The system (7.14) is homogeneous and therefore the unique solution satisfying (7.15), guaranteed by the Fuchs theorem, is H̃_a = H̃ = H = 0. We therefore need to check that (7.15) holds, i.e. that the constraints are asymptotically fulfilled. The differences between the asymptotic constraints and the full ones consist only of terms which vanish asymptotically: the asymptotic Hamiltonian (7.1) is exactly the part K of the full Hamiltonian (3.2) which does not contain the exponential wall terms exp(−2ω_A(β)), and these go to zero if the conditions ω_A(p•) > 0 are fulfilled. The asymptotic momentum constraints were obtained from the full momentum constraints by discarding the strictly upper triangular part of π̃^b_a, which is an exponentially decreasing term if the symmetry wall conditions (4.2) are fulfilled. This means that if the asymptotic constraints are fulfilled, the full constraints H and H_a vanish asymptotically. To verify (7.15) we still need to make sure that the definition of H̃ does not change the asymptotic behaviour. This is the case as η can be chosen arbitrarily small, and therefore smaller than 2(β^a − β^b), a > b, while still preserving the Fuchsian form of (7.14).
This means, provided the solution of the asymptotic evolution equations satisfies the asymptotic constraints, (7.15) is fulfilled. As the evolution equations for the full constraints are a homogeneous Fuchsian system, the unique solution which vanishes asymptotically is the zero solution. Therefore it suffices to impose the asymptotic constraints at one time (as they are preserved by the asymptotic evolution), to guarantee that the corresponding unique solution of the full evolution equations satisfies the full constraints at all times.
Construction of the new class of solutions
While generic solutions in the vacuum case are expected to be chaotic, there exist examples of vacuum spacetimes which show non-chaotic behaviour. As described in the introduction, all previous examples were at least U (1) symmetric. These were constructed by starting with a symmetric ansatz for the metric, postulating asymptotic behaviour for its components and proving, via some sort of Fuchs theorem, that solutions with this asymptotic behaviour exist.
Here, no symmetries of the metric will be assumed. The idea is to choose an ansatz for the N a i such that some of the walls in (7.3) asymptotically vanish. This means that their linear forms can be negative but the resulting exponentially increasing term is countered by an exponential decrease of the coefficients c A .
Ansatz and evolution equations
The following ansatz is chosen for N^a_i:

N^a_i = N^a•_i + e^{−γτ} N^a_{s i},  γ > 0,  (8.1)

where N^a•_i is a constant, upper triangular matrix, with ones on the diagonal, which depends neither on space nor time. This ansatz for N^a_i will cause the dominant gravitational walls to vanish asymptotically. It can be simplified to N• = 1 by the space coordinate transformation x^i → y^i(x^j) defined by y^a = N^a•_i x^i, which does not affect the β^a. P^i_a, β^a and π_a are decomposed as before in (7.2), giving d(d−1)/2 + 2d = d(d + 3)/2 functions of space P^i•_a, p^a• and β^a•. As before, the second derivatives on the right-hand side of the evolution equations are eliminated by defining B^a_j := ∂_j β̃^a and N^a_{s ij} := ∂_j N^a_{s i}, and the momentum difference is replaced by π̃_a := e^{ετ}(π_a − π_[0]a). The evolution equations for β̃^a, π̃_a, N^a_{s i}, P̃^i_a, B^a_j and N^a_{s ij} now form the system (8.2). The additional term on the left-hand side of the N^a_{s i} equation and the exponential factor on the right-hand side come from

∂_τ N^a_{s i} = ∂_τ (e^{γτ}(N^a_i − N^a•_i)) = γ N^a_{s i} + e^{γτ} ∂_τ N^a_i.

The matrix A of the system is constant, with eigenvalues 0, ε > 0 and γ > 0, and therefore fulfils the conditions for a Fuchsian system for all allowed decay coefficients on the right-hand side. To show the system (8.2) is Fuchsian, each term on the right-hand side has to be shown to be exponentially decreasing.
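Schematically (an illustrative arrangement; the ordering of the blocks is immaterial), A acts blockwise on u = (β̃, π̃, N_s, P̃, B, N_{s··}) as

A = diag(0·1_{β̃}, ε·1_{π̃}, γ·1_{N_s}, 0·1_{P̃}, 0·1_B, 0·1_{N_{s··}}),

so all eigenvalues are 0, ε or γ, all non-negative, and the Fuchsian eigenvalue condition λ > −µ of section 6 holds for every µ > 0.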
To simplify the argument, the decay rates as τ → ∞ of the coefficients c_A and their derivatives, with the ansatz (8.1) for N^a_i, are now listed: the symmetry wall coefficients and their derivatives are O(1), the dominant gravitational ones decay as O(e^{−2γτ}), and the subdominant gravitational ones and their derivatives are O(1). (8.3) Here c_sym stands for the symmetry wall coefficients (P^j_a N^b_j)²/2, c_d.g. for the dominant gravitational ones, (C^a_bc)²/4, and c_s.d.g. for the subdominant gravitational ones defined in (4.5). As the β̃^a equation (8.2a) has no terms on the right-hand side, we consider first the π̃_a equation (8.2b). The key terms are the exponentials exp(−2ω_A(β)) and the c_A and their derivatives. The overall space derivatives in the second and third parts are innocuous, as they can only bring down polynomial expressions in τ from the exponentials.
Beginning with the (relevant terms of the) first part, Σ_A c_A e^{−2ω_A(β_[0])}, we look at the three kinds of walls (symmetry, dominant gravitational and subdominant gravitational) separately:

symmetry The coefficients do not decay (see (8.3)). The whole term decays exponentially only if 2ω^sym_ab(p•) > ε (8.4), i.e., as ε is arbitrarily small, if the symmetry wall conditions ω^sym_ab(p•) > 0 are fulfilled (this means that p^b• > p^a• for a < b).

dom. grav. The coefficients decay as e^{−2γτ}. The whole term shows exponential decay if 2α_abc(p•) + 2γ > ε (8.5), where the α_abc are the linear forms of the dominant gravitational walls, defined in (4.3).
subdom. grav. The coefficients do not decay; the whole term only decays if 2µ_a(p•) > ε, i.e. (ε arbitrarily small) if µ_a(p•) > 0 (8.6), with µ_a the linear forms of the subdominant gravitational walls, defined in (4.4).
The second and third parts of the π̃_a equation (8.2b) include only the subdominant gravitational walls, as these are the only ones containing derivatives of β. To guarantee exponential decay in these terms, the subdominant wall conditions µ_a(p•) > 0 have to be fulfilled, as in (8.6).
The N^a_{s i} equation, (8.2c), contains only the symmetry wall term, as the P^i_a derivative of the other coefficients vanishes. Because of the exponentially increasing factor exp(γτ), decay requires that 2ω^sym_ab(p•) > γ (8.7). The first part of the P̃^i_a equation (8.2d) (containing N^a_i derivatives of the coefficients) decays exponentially if the following conditions are satisfied for the different wall types:

symmetry The symmetry wall conditions ω^sym_ab(p•) > 0 have to be fulfilled (as in (8.4)).
dom. grav. Only one factor of C^a_bc remains after differentiation, so the term decays if 2α_abc(p•) + γ > 0 (8.8).

subdom. grav. µ_a(p•) > 0 ∀a (as in (8.6)).

Collecting these conditions (8.9) for all indices a, b, c, together with the asymptotic Hamiltonian constraint Σ_{a≠b} p^a• p^b• = 0 (8.10), gives the restrictions on the free functions. Additionally, the asymptotic momentum constraint equation has to be fulfilled. This will be discussed in detail later.
d = 3 case
In 3 + 1 dimensions the conditions (8.9) are explicitly (without removing redundant ones) the inequalities (8.11a)–(8.11e), and the asymptotic Hamiltonian constraint is

p¹•p²• + p¹•p³• + p²•p³• = 0.  (8.12)

Let us show that they can be fulfilled simultaneously. The Hamiltonian constraint (8.12) gives

p¹• = −p²•p³•/(p²• + p³•).  (8.13)

The conditions (8.11a) follow from (8.11d) by choosing ε < γ/2. Likewise, (8.11b) follows from (8.11e) by choosing ε < −2p¹• (as γ > 0 and p¹• < 0). The second and third conditions in (8.11c) follow from the first and from (8.11a), as does the first when inserting (8.13) and choosing ε sufficiently small. The second condition in (8.11d) follows from the first. The remaining ones are the bounds on γ (the last two conditions in (8.11e) follow from p³• > p²• > 0 and γ > 0). The condition 2(p²• − p¹•) > γ follows from p²• > 0 after inserting (8.13). The last remaining condition, 2(p³• − p²•) > γ, is compatible with the lower bound on γ exactly when (p³•)² − (p²•)² − 2p²•p³• > 0, i.e. when p²• < (√2 − 1)p³•. Summarizing, the conditions on the p^a• (at each spatial point) are

p³• > p²• > 0,  p²• < (√2 − 1)p³•,  p¹• = −p²•p³•/(p²• + p³•).  (8.14)

The remaining free parameters are the six functions of the space coordinates P^i•_a and β^a•.
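A worked numerical example (chosen here for illustration; any values satisfying (8.14) would do): take p³• = 1 and p²• = 1/3 < √2 − 1 ≈ 0.414. Then (8.13) gives

p¹• = −(1/3 · 1)/(1/3 + 1) = −1/4,

and the constraint checks exactly: (p¹•)² + (p²•)² + (p³•)² = 1/16 + 1/9 + 1 = 169/144 = (−1/4 + 1/3 + 1)² = (p¹• + p²• + p³•)². All wall forms are positive, p²• − p¹• = 7/12, p³• − p²• = 2/3, µ₁ = 4/3, µ₂ = 3/4, µ₃ = 1/12, and (assuming the bounds on γ reconstructed above) γ can be chosen anywhere in the non-empty window 1 < γ < 7/6, above 4p²•p³•/(p²• + p³•) = 1 and below both 2(p²• − p¹•) = 7/6 and 2(p³• − p²•) = 4/3.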
Constraints
In addition to the evolution equations, the constraints (5.2) and (5.3) also have to be fulfilled. The asymptotic Hamiltonian constraint was already included in the conditions discussed in the last section. We start by considering the asymptotic momentum constraints and then show that the full constraints are satisfied if the asymptotic ones are.
Inserting the ansatz (8.1) (with N• = 1) into the asymptotic momentum constraint (7.11) gives the constraint (8.15); all terms containing C^a_[0]bc vanish, as the asymptotic limit of N^a_i is constant in space.
The full constraints are fulfilled if the asymptotic ones are, following the arguments of section 7.3. The condition that the (modified) evolution equations for the constraints (7.14) are of Fuchsian form requires that the subdominant gravitational wall conditions are satisfied, which is the case here (see (8.9)). To ensure the full momentum constraints converge to the asymptotic ones, the symmetry wall conditions have to be fulfilled, which is also the case. Finally, the Hamiltonian converges to the asymptotic Hamiltonian if all terms after the first one in (4.6) vanish asymptotically. For the terms coming from the symmetry and subdominant gravitational walls this is ensured by the decay of the exponential terms, as for the general ansatz. For the dominant gravitational wall terms it follows from the decay of the coefficients, which contain spatial derivatives of N^a_i, and the resulting inequalities (8.11e).
Analysis of the constructed solutions
In the following we will need the asymptotic behaviour of the metric components of the constructed solutions. This is derived in Appendix E.
From the results above there exist numbers γ > 0 and ν > 0 such that the leading order behaviour of the metric components, and those of its inverse, is as given in (E.9) and (E.10), where O_ν := O(e^{−ντ}), the K^a_i are functions depending only on the spatial coordinates, and the behaviour above is preserved under differentiation in the obvious way. The precise form of the K^a_i is given in Appendix E.
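For orientation, the leading behaviour can be read off from the Iwasawa form of the spatial metric (a sketch using (A.1) and the asymptotics β^a → p^a•τ + β^a•, N^a_i → δ^a_i established above):

g_ij = Σ_a e^{−2β^a} N^a_i N^a_j,  so  g_ii ≈ e^{−2(p^i•τ + β^i•)}  and the off-diagonal components carry extra factors O(e^{−γτ});

moreover det g = e^{−2Σ_a β^a} exactly, since det N^a_i = 1.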
Remaining coordinate freedom
We now wish to analyse how the ansatz (8.1), with the choice N• = 1, constrains the remaining coordinate freedom.
In [12] it is asserted that transformations mixing time and space coordinates are prohibited by the choice of lapse and shift and the assumption that the singularity is approached as τ → ∞. Presumably this should follow from the resulting equations (assuming a transformation τ, x^i → τ̃(τ, x^j), y^i(τ, x^j)). However, the assertion is not clear. An attempt was made to construct a Fuchsian system by starting from the transformation law of the Christoffel symbols: defining suitable variables and inserting them into the γ = 0 equation of (9.5) gives a system which is unfortunately not of Fuchsian form, as many of the Γ^α_βγ diverge as τ → ∞ (see Appendix E).
Assuming nevertheless a transformation of this type, one finds that the Jacobi determinant of the spatial transformation must be constant (in space and time) and that τ̃ must be an affine function of τ. Starting with a general spatial coordinate transformation x^i → y^i(x^j), the metric components g_kl in the new frame can be expanded, with O(X) = O(exp(−2p²•τ)) denoting higher order terms. Assuming ∂x¹/∂y¹ ≠ 0, which is necessary to preserve the conditions p³• > p²• > p¹• (8.11a), this implies β̃¹ = β¹ − log(∂x¹/∂y¹). The 23-component of the transformed metric takes the form

O(X) + Ñ²₃ e^{−2β²}  (9.13)

and yields

Ñ²₃ = (∂x²/∂y³)/(∂x²/∂y²) + O(X),  (9.14)

which implies ∂x²/∂y³ = 0. The conditions on the coordinate transformation are collected in (9.15). Under such a coordinate change the asymptotic functions N^a•_i, p^a• and β^a• transform as in (9.16). As the p^a• remain unchanged, the conditions (8.14) are unaffected. Therefore γ can be chosen to have the same value in the new coordinates, i.e. γ̃ = γ.
The transformations (9.16) reduce the possible isometries of the constructed solutions: spatial isometries would have to be of the form (9.15), as otherwise the asymptotic evolutions would not match (an Ñ• ≠ 1 would not give an asymptotically diagonal metric, and transformations which exchange the order of the β^a would change the asymptotics of the diagonal terms). As p²• and p³• are only constrained by the inequalities 0 < p²• < (√2 − 1)p³• but are otherwise free functions of all space coordinates, which are not influenced by coordinate transformations of this form, we have the following proposition:

Proposition 9.1. For a generic choice of the asymptotic functions p²• and p³•, the corresponding solutions have no (continuous or discrete) isometries φ : M → M of the form τ(φ(q)) = τ(q), ∀q ∈ M, i.e. involving only the spatial coordinates.
Killing vectors
In this section we wish to investigate continuous symmetries of the constructed solutions. This requires an analysis of the Killing equations (9.17). We will seek Killing vectors of the form X = X^τ∂_τ + X^i∂_i, with components behaving as in (9.18), and we will assume that this behaviour is also preserved under differentiation. This ansatz is more general than the assumption of purely spatial isometries in the previous section, which would imply X^τ = 0. It does, however, only include isometries which can be described by Killing vectors, i.e. continuous but not discrete ones. Contracting the τi Killing equation with g^ik shows that the τ-derivatives of the spatial components of the Killing field decay exponentially (9.20). In the ττ component of the Killing equation, the term of order one inside the bracket is non-zero for large times unless its coefficient vanishes. The ii component of the Killing equation gives an expression which is certainly non-zero unless the highest order term, containing p^i•τ, vanishes. This requires (9.22). There are no solutions X^µ fulfilling these conditions in general: the second equation of (9.20) and (9.22) can be combined into A^i_k X^k = 0, with A containing the derivatives of p²•, p³• and σ_β•. If the determinant of A is non-zero, the only solution is X^k• = 0. If this holds in the neighbourhood of a point, the Killing vector vanishes everywhere. We conclude that our solutions will not have any Killing vectors of the form (9.18) in general.
Considering now a general Killing vector field X^µ, satisfying (9.23), we start with the expression ∇_τ∇_αX_β. Using the definition of the Riemann tensor, then the antisymmetry of the Riemann tensor and the first Bianchi identity, and applying (9.23) together with the definition of the Riemann tensor again, leads to ∇_τ∇_αX_β = R_γταβ X^γ. Introducing F_αβ := ∇_αX_β, we thus have a first order system of equations for the pair (X, F) if X is a Killing vector. This has the general structure of a Fuchsian system, but does not fulfil the conditions given in section 6. By redefining some of the X_α and F_αβ the equations can be brought to a form where all terms on the right-hand side decay exponentially, but the equation for X^0 contains the term Γ⁰₀₀X^0 on the left-hand side, which gives a negative eigenvalue −(p¹• + p²• + p³•). To still satisfy the conditions of the Fuchs theorem, all terms would have to decay faster than exp(−τ(p¹• + p²• + p³•)), which is not possible.
Relationship with previously known solutions
The first class of vacuum spacetimes for which asymptotically simple behaviour was shown was the polarized Gowdy class [21, 22]. This is a class of solutions containing two commuting spacelike Killing vector fields (the polarization condition) with constant t hypersurfaces which are compact without boundary and orientable (the Gowdy condition). The topology of spacelike slices of these spacetimes is constrained to one of T³, S² × S¹, S³ or a lens space L(p, q). In the following only the T³ case will be considered. The metric for this case is given by

ds² = e^{2a}(−dt² + dθ²) + t(e^W dx² + e^{−W} dz²),

where a and W are functions of t and θ which are 2π-periodic in θ.
Redefining the t coordinate as t = e^{−τ} transforms the metric to a form which is directly in the gauge used here: the shift vanishes and the lapse squared, e^{2(a−τ)}, is equal to the determinant of the spatial part of the metric. Comparing with (10.1) shows N^a_i = δ^a_i, β¹ = −a and β^{2,3} = (τ ∓ W)/2. The results of Chruściel, Isenberg and Moncrief show that solutions of this form are parametrised by two functions of the θ coordinate, π and ω, appearing in the asymptotic expansion of a and W [21]. In the expansion, τ₀ is a constant and α a function of θ which can be determined from π and ω. This gives for the p^a• (in a suitable ordering)

p¹• = (π² − 1)/4,  p²• = (1 − π)/2,  p³• = (1 + π)/2.

These satisfy the asymptotic Hamiltonian constraint (7.8) but not necessarily the inequalities (8.14): these would imply √2 − 1 < π < 1, but there are solutions of the form (9.25) for any function π. The assumption that the metric coefficients only depend on the θ space coordinate and N^a_i ≡ δ^a_i causes some of the potential walls to vanish: the coefficients of the dominant gravitational walls are proportional to N^a_{i,j} and therefore vanish identically. For N^a_i to be constant, P^i_a has to vanish, which causes the symmetry walls to vanish. The coefficients of the subdominant gravitational walls (4.5) contain terms which are not proportional to N^a_{i,j}. These do, however, contain spatial derivatives of the β^a, most of which are zero here. As β only depends on t and θ, only one of the walls, with linear form µ₁(β) = β² + β³, remains. Therefore, assuming that the metric coefficients depend only upon θ and that N^a_i ≡ 1, the only condition left in (8.14) is the positivity of µ₁(p•). (The β^a• can still be independently specified.) More general (non-Gowdy) T²-symmetric spacetimes have also been shown to exhibit simple asymptotic behaviour. These take the general form

ds² = e^{2(η−U)}(−α dt² + dx²) + e^{2U}(dy + A dz + (G₁ + AG₂)dx)² + e^{−2U} t² (dz + G₂ dx)²,

with η, U, α, A, G₁ and G₂ depending only on t and x [26]. To obtain simple behaviour, either polarization, corresponding to A = const, or half-polarization, corresponding to a restriction on the asymptotic behaviour of A, has to be assumed. In both cases the resulting spacetimes are not contained in the class constructed here. The functions G₁ and G₂ tend to constant (in t) functions of x, but they appear in the N^a_i in the Iwasawa decomposition. This conflicts with the assumption N → 1 (or → const) made in constructing the new class. In this sense the new class is therefore more restricted than the polarized and half-polarized T² classes. However, it includes free functions depending on all space coordinates, not just one. The Killing vectors of the Gowdy and general T²-symmetric spacetimes are of the form considered in section 9.2, as they do not include derivatives with respect to t. These are therefore in general not present in the class of solutions constructed here.
Conclusion
We have constructed a new class of four-dimensional (analytic) solutions to the vacuum Einstein equations which show asymptotically simple behaviour near a spacelike singularity, approached as τ → ∞. The metric takes the form

ds² = −e^{−2Σ_a β^a} dτ² + Σ_a e^{−2β^a} N^a_i N^a_j dx^i dx^j,  (10.1)

with β^a and N^a_i depending on all coordinates τ, x^i and behaving asymptotically as

β^a = p^a•(x)τ + β^a•(x) + O(e^{−ντ}),  N^a_i = δ^a_i + O(e^{−γτ}),  (10.2)

where γ and ν are positive constants. The class of solutions includes three completely free functions of all space coordinates, β²•, β³• and P²•₁ (P²•₁ does not appear in (10.1) and (10.2) but influences the exponentially decaying terms, as detailed in Appendix E), and two functions, p²• and p³•, also depending on all space coordinates, which are constrained by the inequalities

0 < p²• < (√2 − 1)p³•.

The Kretschmann scalar of the solutions behaves as

K = (C_K + O(e^{−ντ})) e^{4(p¹• + p²• + p³•)τ},

with C_K a positive constant, defined in (E.12). As p¹• + p²• + p³• > 0, the curvature tensor grows uniformly without bounds along all causal curves.
For a generic choice of the free functions, the solutions have no continuous or discrete spatial isometries, and no continuous isometries described by Killing vectors. The construction is unaffected by the presence of a cosmological constant, as demonstrated in Appendix F, which gives the same asymptotic behaviour for all values of Λ.
Appendix A. Derivation of Iwasawa variable Hamiltonian
Here we will give the derivation of the Hamiltonian density in Iwasawa form (4.6), from the standard form of the Hamiltonian (3.1).
In the following the spatial metric and its inverse will be used in their Iwasawa forms

g_ij = Σ_a e^{−2β^a} N^a_i N^a_j,  g^{ij} = Σ_a e^{2β^a} (N^{−1})^i_a (N^{−1})^j_a.  (A.1)

Appendix A.1. Kinetic and symmetry wall terms

The kinetic term K and the symmetry wall term come from the first two terms in the Hamiltonian (3.1), namely

π^{ij}π_{ij} − (1/(d−1))(π^i_i)².  (A.2)

The conjugate momenta in Iwasawa variables, π_a and P^i_a, can be expressed in terms of π^{ij} as (A.3) and (A.4). We start by considering the first term in (A.2), π^{ij}π_{ij}. Lowering an index in the first component and raising one in the second (using the Iwasawa form (A.1) of the metric) gives a double sum over frame indices, which can be split into a diagonal and an off-diagonal part (A.5). Raising the index k on π^j_k in the first, diagonal, part leads to an expression in the π_a, where in the last step the definition of π_a, (A.3), was used. This, together with the second part of (A.2), gives the kinetic part K of the Hamiltonian (3.2). Raising the index k on π^j_k in the second, off-diagonal, part of (A.5) gives a sum over b ≠ a. This is symmetric in a and b and can be written as

Σ_{a<b} (1/2)(P^j_a N^b_j)² e^{−2(β^b − β^a)},

which is the potential term coming from the symmetry walls.
Appendix A.2. Gravitational wall term
The gravitational wall term comes from the term −gR in the Hamiltonian (3.1). We will calculate the curvature scalar in the Iwasawa frame. The Cartan formulas for the connection one-form ω^a_b are (A.6) and (A.7), where γ_ab = δ_ab exp(−2β^a) = δ_ab A²_a, with A_a := exp(−β^a), is the metric in the Iwasawa frame. We will also use the definition of the structure functions, (2.3). ω^a_b can be obtained by considering the expression (A.9) (no summation). Starting again from (A.9) but using (A.6) gives (A.10). Lowering the upper index on ω with γ_ab and using (A.7) in the form ω_ab(e_c) = −ω_ba(e_c) + δ_ab (A²_a)_{,c} (with ,c denoting the frame derivative by e_c) we obtain (A.11). Setting (A.10) equal to (A.11) and raising one index using γ^{ab} = δ^{ab} A^{−2}_a gives ω^a_b. The curvature scalar can now be computed using (A.14). We start by calculating Σ_{a,b} A^{−2}_b dω^a_b(e_a, e_b), with summation over all indices which occur more than once. The second term, Σ_{a,b,c} A^{−2}_b (ω^a_c ∧ ω^c_b)(e_a, e_b), is computed in the same way (again with all sums implied). Adding the two expressions and substituting (A²_a)_{,b} = −2A²_a β^a_{,b}, we obtain the curvature scalar. Multiplying this with −g = −exp(−2 Σ_a β^a) gives the gravitational wall terms (4.3) and (4.4) in the Iwasawa variable Hamiltonian.
Appendix B. Iwasawa evolution equations and Einstein equations
To obtain the evolution equations (5.1), the variation was taken after choosing lapse N and shift N^a, with the lapse given as √det g, i.e. dependent on the metric, and the shift vanishing. The general Hamiltonian density, with lapse and shift still free, is given in [32]. Taking the variation of the Hamiltonian ∫H d³x with respect to N and N^a gives the Hamiltonian and momentum constraints, respectively. Varying now with respect to π^{ij} and g_ij gives the Einstein equations in Hamiltonian form (equations (E.2.35) and (E.2.36) in Wald [32]).
Choosing N^a = 0, either before or after varying, just removes the terms containing N^a in the evolution equations. Choosing N = √det g before varying adds an additional term in (B.5). This term is, however, proportional to −R + (det g)^{−1}π^{ij}π_{ij} − (1/2)(det g)^{−1}(π^i_i)², which is zero by the Hamiltonian constraint (B.2).
The terms in (B.5) which contain covariant derivatives of the lapse also vanish, as the determinant of the metric is covariantly constant.
The transformation to Iwasawa variables is a point canonical transformation and therefore doesn't change the equations.
Appendix C. Derivation of Iwasawa variable momentum constraints
In this section we will give the derivation of the momentum constraints in Iwasawa variables and the definition of their asymptotic equivalent, following section 3.2 of [31].
Appendix C.1. Full momentum constraints
We start with the momentum constraints in the form given as equation (E.2.34) in Wald [32]. The calculation is simpler when done in the Iwasawa frame (2.2), so we first calculate the Iwasawa frame components of π^{ij} in terms of the Iwasawa variable conjugate momenta P^i_a and π_a. These are denoted by π̃^{ab}. Starting from ġ_ij π^{ij} = β̇^a π_a + Ṅ^a_i P^i_a and writing the left side in Iwasawa variables gives the components of π̃^{ab}. Using the diagonal form of the metric in the Iwasawa frame, γ_ab = exp(−2β^a)δ_ab, we obtain the diagonal part π̃^a_a. The strictly lower triangular part of π̃^b_a, π̃^b_a[−], can be given explicitly; because of the symmetry of π̃^{ab} this also determines the upper triangular part. Finally, also including the diagonal term from (C.3), we arrive at (C.6), where the last term comes from the fact that π̃^b_a is a tensor density of weight 1 and the Γ^a_bc are the connection coefficients in the Iwasawa frame, given by

Γ^a_bc = (1/2) Σ_σ g^{aσ} (g_{σc,b} + g_{bσ,c} − g_{bc,σ} − C_{σbc} + C_{bσc} + C_{cσb})  (no implicit summation),

with the comma denoting the derivative in the Iwasawa frame.
This is the momentum constraint (5.3).
Appendix C.2. Asymptotic momentum constraints
The asymptotic momentum constraints are obtained from the full ones by discarding the contribution of the strictly upper triangular part of π̃^b_a. Indeed, this part (the last line of (C.6)) contains exponential terms which vanish asymptotically if the symmetry wall conditions are fulfilled.
Inserting this into (C.9) gives the asymptotic momentum constraint (7.11).
Inserting the ansatz (8.1) (which implies C^a_bc = 0 asymptotically) and the asymptotic evolution of the β^a, β^a_[0] = p^a•τ + β^a•, gives an expression whose last term, which contains a time dependence, is zero if the asymptotic Hamiltonian constraint G_bc p^b• p^c• = 0 is fulfilled. Expressing the Iwasawa frame derivative in terms of the coordinate derivative ∂_i leads to (8.15) with arbitrary (constant) N^a•_i.
Appendix D. Evolution equations for the constraints
Here we will give the derivation of the evolution equations for the constraints in our choice of gauge. Our treatment is similar to, but not identical with, appendix A of [31].
The first order action corresponding to the Einstein equations is given by (D.1), with Ñ the "rescaled lapse", defined as Ñ = N/√g, and N^i the shift vector. In our choice of gauge Ñ = 1 and N^i = 0. From (D.1) the equations of motion, the Hamiltonian constraint and the momentum constraints can be obtained by varying with respect to g_ij, Ñ and N^i respectively. We will compare the resulting equations with those coming from the variation of the standard Einstein–Hilbert action S_H = ∫ d^D x √(−ḡ) R̄, which is (neglecting boundary terms) (D.2), with Ḡ denoting the Einstein tensor. As the spacetime metric ḡ_µν is defined, in terms of Ñ, N^i and g_ij, in the usual ADM way, the variation of S_H with respect to g_ij, Ñ and N^i following from (D.2) is given by (D.4) and (D.5), where O(N^k) denotes terms proportional to N^k which vanish in our gauge. Here √−ḡ = Ñg and δg = g g^{ij} δg_ij were used.
From the first order action (D.1) we obtain (D.6) and (D.7). Identifying (D.4) with (D.6) and (D.5) with (D.7) yields (D.8), (D.9) and (D.10). We now consider the (vanishing) divergence of the Einstein tensor, ∇_ν Ḡ^ν_µ = 0. Using the identity for the divergence of a tensor density this can be rewritten, and expressing the components of the Einstein tensor using (D.8), (D.9) and (D.10) gives, for µ = 0 and µ = i respectively, the evolution equations (D.11) and (D.12) for the constraints.

Appendix E. Asymptotic behaviour of the solutions

In the evolution equation for the N-variables only the symmetry walls contribute, as the other wall coefficients do not depend on P^i_a. Their coefficients are c_ab = (P^j_a N^b_j)²/2, which gives, inserted into (E.1), the evolution equation for the N^a_{s i}. Inserting the asymptotic behaviour of P and N gives three kinds of terms, A, B and C. We consider first term A. As P^j•_a is of order 1 while P̃^j_a decays as e^{−ντ} by the Fuchs theorem, the first term dominates. For term B the situation is similar: the δ^c_j term is of order one for c = j (the other terms in the j-sum are irrelevant as they share the same decay rate), while the second term decays as e^{−τ(γ+ν)}. For C the situation is different, as the sum over c also influences the factor exp(−2(β^c − β^a)). We will consider the three components of N^a_{s i} separately. In the case N¹_{s2}, i.e. a = 1 and i = 2, only the summand with c = 2 survives in the sum, as N^c_{s i} = N^c_{s2} is zero for c = 2 and c = 3 because of the upper triangular form of N_s. Integrating the resulting equation and using the fact that N¹_{s2} = O(e^{−ντ}) from the Fuchs theorem to eliminate the integration constant gives (E.5). For N²_{s3} the situation is similar: N³_{s3} vanishes, leaving only the δ^c_i term and giving (E.6). For N¹_{s3} two summands remain: for c = 2 the term with N²_{s3} and for c = 3 the one with δ^c_i. Inserting N²_{s3} from (E.6) gives an equation of the form ∂_τ N¹_{s3} = γN¹_{s3} + O(e^{−2(β³ − β¹)}), and after integrating one obtains (E.7). From equations (E.5), (E.6) and (E.7) the full matrix N^a_i and its inverse (N^{−1})^i_a are given to leading order, where O_ν := O(e^{−ντ}). Defining the spatial functions K^a_i from the integration data, the leading order behaviour of the metric components, and those of its inverse, is given in (E.9) and (E.10), with C denoting τ-independent quantities, possibly different for different components. For most Christoffel symbols an explicit first order term is obtained. This is the case if the highest order term appearing in (E.10) does not vanish, and therefore only the first order terms of the metric, given in (E.9), contribute. For the other terms, e.g. Γ¹₂₀, the highest order term cancels and lower order terms of the metric could potentially become important, necessitating a more detailed analysis.
Here the behaviour of these terms is simply given as decaying faster than the vanishing highest order term, as this is sufficient to determine the behaviour of the Kretschmann scalar in the following.
The behaviour of these terms can be bounded accordingly. The Kretschmann scalar, defined as R_αβγδ R^αβγδ, behaves as

K = R_αβγδ R^αβγδ = (C_K + O(e^{−ντ})) e^{4(p¹• + p²• + p³•)τ}.  (E.12)

As the inequalities (8.14) require p³• > p²• > 0, the coefficient C_K is positive and, since p¹• + p²• + p³• > 0, the Kretschmann scalar diverges as τ → ∞ for all constructed solutions.
Appendix F. Solutions including a cosmological constant
Solutions of the form presented above can also be constructed with a nonzero cosmological constant. Here we will show the resulting changes in the arguments above.
The presence of a cosmological constant changes the action to

S = ∫ d^D x √(−ḡ)(R̄ − 2Λ).  (F.1)

This leads to an additional term 2Λg in the Hamiltonian. As the determinant g of the spatial metric behaves asymptotically as

g = e^{−2(σ_{p•}τ + σ_{β•})}(1 + O(e^{−ντ})),  (F.2)

the term decays exponentially. It contributes an additional term to the evolution equation for π_a, (8.2b). This equation includes the diverging prefactor e^{ετ}, but the conditions (8.9) guarantee that the new term decays fast enough to compensate it. Therefore the presence of a cosmological constant introduces no new conditions on the free functions from the evolution equations. The cosmological constant also appears in the Hamiltonian constraint, but not in the momentum constraints. As the additional term in the Hamiltonian decays, it does not change the asymptotic Hamiltonian constraint or the condition on the p^a• arising from it.
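To see the compensation explicitly (a one-line check, writing σ_{p•} = Σ_a p^a•, consistent with (F.2) and the determinant formula det g = e^{−2Σ_a β^a}):

e^{ετ} · 2Λg ~ 2Λ e^{(ε − 2σ_{p•})τ} → 0 for ε < 2σ_{p•},

which can always be arranged, since ε is arbitrarily small and σ_{p•} = p¹• + p²• + p³• > 0 by (8.14).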
Finally, Λ appears in the derivation of the evolution equations for the constraints in Appendix D. Equations (D.8) and (D.10) acquire additional terms containing Λ. These cancel in the final evolution equations (D.11) and (D.12). Therefore the arguments regarding the constraints in section 7.3 remain unchanged. We conclude that the solutions with the asymptotic behaviour given in Appendix E, as obtained from the Fuchs theorem above, exist for all values of Λ, with the free functions and the associated conditions unchanged from the Λ = 0 case.
Communities of Practice as a Professional Development Tool for Management and Leadership Skills in Libraries
Management literature has been assessing the value and impact of Communities of Practice (CoP) for almost three decades.1 CoP are most commonly defined as “Groups of people who share a concern, a set of problems, or a passion about a topic, and who deepen their knowledge and expertise in this area by interacting on an ongoing basis.”2 Other definitions expand to include amorphous groups where people come and go as they need or desire, virtual groups, and groups that include people from outside the organization.3
Introduction
Management literature has been assessing the value and impact of Communities of Practice (CoP) for almost three decades.1 CoP are most commonly defined as "Groups of people who share a concern, a set of problems, or a passion about a topic, and who deepen their knowledge and expertise in this area by interacting on an ongoing basis."2 Other definitions expand to include amorphous groups where people come and go as they need or desire, virtual groups, and groups that include people from outside the organization.3 In this article, we examine a CoP as a professional development opportunity. In August of 2012, the University of Wisconsin-Madison Libraries gathered a small group of people to improve their management and leadership skills with the intent to work through the book Be a Great Boss, One Year to Success. This workbook, authored by Catherine Hakala-Ausperk, is designed to enable readers to reflect on and practice elements necessary to be a better boss. Through meeting and dialoguing as a group, we formed a deep trust where difficult questions could be asked and honest conversations ensued. All participants felt that this group was a positive experience which enhanced our learning and growth as supervisors, yielded unexpected benefits of strong bonds within the group, and provided a system of ongoing collaboration and support.
While CoP and working collaboratively are common in libraries, CoP are rarely used as structured staff development tools to increase the management and leadership skills of their members. The CoP as a professional development format is unlike other training programs locally and nationally in that the learning takes place over an extended period of time. It incorporates follow-up discussions to ensure practice and implementation of what was learned. Quite often, after a professional development training, staff have good intentions to implement aspects of what they have learned. However, due to a variety of reasons including workload, unanswered follow-up questions, or lack of accountability, the implementation never takes place or is not sustained. The slower pace and accountability to others in the Libraries CoP enabled members to identify areas to work on and take action to implement these ideas in order to obtain their goals. This article will look at the successful implementation of a CoP to enhance management and leadership skills, including the challenges and accomplishments.
Literature Review
Jean Lave and Etienne Wenger first coined the term after describing a situational learning process as "legitimate peripheral participation."4 Their 1991 book, which investigated apprenticeship primarily within several professional communities, defined legitimate peripheral participation as "…the process by which newcomers become part of a community of practice. A person's intentions to learn are engaged and the meaning of learning is configured through the process of becoming a full participant in a sociocultural practice. This social process includes, indeed it subsumes, the learning of knowledgeable skills."5 They focused on the practice of participation and how members evolve through their shared learning and experiences.
Building on the concept of CoP, Brown and Duguid include the importance of shared stories and learning within organizations. One important distinguishing aspect of their work focuses on the importance of storytelling to building work culture and innovation. They define core characteristics of stories that "have a flexible generality that makes them both adaptable and particular. They function, rather like the common law, as a usefully underconstrained means to interpret each new situation in the light of accumulated wisdom and constantly changing circumstances" and that stories "also act as repositories of accumulated wisdom."6 Stories become a critical step for organizational learning within communities of practice.
Since the introduction of the concept of CoP, much more has been written and investigated related to the development of these communities of shared learning and experience. Most notably, Wenger continued expanding on the concept of communities of practice and in 2002, along with McDermott and Snyder, defined communities of practice as "groups of people who share a concern, a set of problems, or a passion about a topic, and who deepen their knowledge and expertise in this area by interacting on an ongoing basis."7 Their definition is not limited to people who work together, but includes people with shared interests and who see value in their community: …they typically share information, insight, and advice. They help each other solve problems. They discuss their situations, their aspirations, and their needs. They ponder common issues, explore ideas, and act as sounding boards…they accumulate knowledge, they become informally bound by the value they find in learning together. This value is not merely instrumental for their work. It also accrues in the personal satisfaction of knowing colleagues who understand each other's perspective and of belonging to an interesting group of people.8

Wenger, McDermott, and Snyder argue some of the most crucial benefits of CoP are the relationships, knowledge, and culture CoP develop within their community that can expand outwards. For organizations, these are crucial tenets. Because CoP can develop among anyone with shared interests or communities, it is no wonder businesses see value in their cultivation and development.
While the development and success of CoP have strong historical backing and research, the discussion of their use in libraries and library staff development is not common. In academia, and especially in libraries, CoP are often written about in the context of their impact on student learning or instructor benefits (i.e., information literacy), rather than in staff development. Although libraries frequently work in collaborative environments built around subject-specific meetings, shadowing, internships, and formal/informal mentoring relationships, the term "communities of practice" is rarely used, even though it could be based on its definition. In their article, "Communities of Practice at an Academic Library," Kristin Henrich and Ramirose Attebury focus on the creation of CoP in a library, in this case for peer mentoring. They argue that groups within the library can be an "indispensable part of a learning organization."9 The formation of these communities can bring about shared learning, experiences, and ideas. Henrich and Attebury also argue that best practices need to include some form of internal leadership and initiative (while being aware of power dynamics), good communication, and a sense of community.10
Evolution: From Book Club to CoP
The CoP discussed in this paper began as a book club at the UW-Madison Libraries after an employee, who would become a member of the first CoP cohort, brought Catherine Hakala-Ausperk's book to the attention of library administration. From that small act, ideas flowed, and a plan took shape to form a group to work through the chapters of the book. In August of 2012, four library supervisors were identified who would be open to the concept of learning in a CoP and would bring a unique perspective to it based on their experience, roles within the organization, and the management skills they had already demonstrated. A member of administration with human resources experience initially acted as the facilitator. This group was a grass-roots effort with common goals in mind: to learn more about management skills and to develop and practice those skills. Because of the time commitment and the small financial obligation to purchase Hakala-Ausperk's book for the group, sponsorship by the Libraries' Executive Committee was sought. This original CoP did not set out to resolve any specific problems in the libraries but was intended to be a vehicle for learning. When the group was formed, we didn't even think of ourselves as a CoP; if anything, we thought of ourselves as a "book club" with learning objectives. However, our goals were clear from the start: we wanted to slowly and methodically learn from the book and each other and implement what we learned. Because we met every month, we could follow up with each other and be accountable to the group to achieve the goals we individually set for ourselves. The original intent of the group was to meet once a month for several hours to cover a chapter in the book. The book is laid out in 12 chapters, each chapter consisting of four units which build around the chapter topic, all of which relate to an aspect of library management. Ideally, the book is set up to be completed in one year. Each person was committed to working through the chapter before the meeting. At the meeting, we discussed the chapter, how the information applied to our work, and what aspect of the management lesson we wanted to work on during the following month.
The members of this first group had each worked for the libraries for several years but were not familiar with one another due to the size of the library staff and participants being located in multiple buildings. Because of this, the group members needed to build trust with each other, which happened over the first few months as we shared our stories, current management struggles, and questions. As we moved through the chapters of the book we also shared how we were each working to improve our skills as managers and leaders. The ability of group members to be vulnerable with each other led to strong bonds, and the meetings of this group became an event on the calendar that participants would look forward to. In addition, this group formulated ideas on areas of improvement for the libraries and turned those ideas into action. For example, we identified the libraries' mentorship program as languishing and developed ideas on how to rejuvenate this program, which led to launching a revised, and much stronger, mentorship program. Had it not been for the CoP, this cohort may never have had the chance to gather and engage in honest conversations around topics that had impact far beyond our units.
Although the Libraries at the University of Wisconsin-Madison still have a formal staff-mentoring program, the members of our "book club" wanted to develop a CoP for staff free from the power dynamics that are normally at play in mentoring relationships. We envisioned a community where staff with different values, perspectives, and experiences could grow and innovate together. Our goal was to create a space where all participants would be able to learn through participation, storytelling, experience, and relationship building. We aimed to create "collaborative, informal networks that support professional practitioners in their efforts to develop shared understandings and engage in work-relevant knowledge building," which we officially named the GLS Supervisor Community of Practice.11

Although the group's intention was to meet every month to discuss a chapter, this plan was altered in many ways. The group had originally agreed that it would not meet unless all members could be present. Considering the busy schedules of the group, this proved to be an ongoing challenge. Many meetings were cancelled at the last minute and rescheduled due to unavoidable conflicts. Also, when the group gathered we would often check in to see if anyone had any topics they wanted to discuss. There were many times that we would end up talking about a topic that we were currently dealing with in the library instead of addressing the book chapter, thus delaying the completion of the program.
As the initial group progressed, it became obvious that we could merge Be a Great Boss, One Year to Success with a growing initiative on campus called Leadership@UW.12 Leadership@UW works to facilitate a shared vision and common language for leadership at UW-Madison. We learned many lessons in the first iteration of the group, and this led to a more focused effort for the second group, which included incorporating Leadership@UW from the start, more focus on self-assessment, and replacing some of the chapters of the book which were not as relevant to our library system with more appropriate topics and readings.
Implementation
After two years of meeting and developing a cohort based on trust and shared learning, the members of our informal "book club" recognized the benefit of sharing their experiences, growth, and community beyond our small group. Therefore, in 2015, with the support of the Libraries' Executive Committee, two CoP for supervisors were developed. One group focused on supervisors of permanent staff and the other on supervisors of students. A division was made between permanent and student supervisors due to the differences in policies and laws which govern the two groups and the variance in complexity of supervising permanent staff versus a transient workforce. A GLS libraries-wide call went out seeking applicants to our new CoP. The groups were intentionally not limited to current supervisors but welcomed anyone who wanted to learn more about supervision. From this call, we were able to create two groups of five members each, with two members from the original group acting as facilitators. Originally, we could not place all of those who expressed interest, but due to turnover soon after the groups were formed, everyone who originally expressed interest received a spot on a CoP. The two new groups, the Permanent Staff Supervisors Community of Practice and the Student Staff Supervisors Community of Practice, officially launched in January 2016.
Each newly formed CoP was comprised of members from different departments and management experiences throughout the organization. In many instances, members had no previous experiences or opportunities to work closely with the other members of their CoP, even though they may have been colleagues for years. We intentionally organized membership around staff with similar supervisory positions and with a willingness to engage in practice and learning in order to develop their supervisory experience. Another key component was ensuring each group was free from power dynamics sometimes experienced in other learning opportunities or meetings throughout our organization. We felt strongly that supervisors should not be in a community of practice with their staff, in order to allow for an honest and truthful sharing experience, so they were intentionally not included in the same CoP. To begin building trust among members, a large part of the first meeting was used to establish ground rules for the community. We started with the following:

• What is shared here stays here, what is learned here leaves here.
• Share air time.
• Listen. Attempt to understand other people's perspectives.
• Respect the ideas of others, even if you don't agree.
• Inquire to understand.
• Create safe space for discussion.

• Clearly state goals for meetings and tasks.
• Expect members' full engagement and participation in and between meetings.
Although not required, each group interestingly ended up adopting the same suggested ground rules above and decided not to add additional ones. Another method used for building trust was to maintain a static membership. Unlike other CoP, we chose not to add new members throughout the yearlong program. We purposefully kept it a small, consistent group, with the intention of having CoP "graduates" lead the next iteration of our communities. We had intended the communities to last around a year but, like the first cohort, these groups continued much longer, even after the book was completed. They morphed into true communities of colleagues who continued to share best practices and learn together throughout the years.
In addition to the communities continuing to meet after they had completed the book, we often found that meeting topics fluctuated each month. While the topics for each meeting were tied to chapters in the book or Leadership@UW, topics changed with pressing issues or experiences that members wanted to discuss. Perhaps the most impactful trait of our communities was to practice action learning: "…working on real problems, focusing on learning, and actually implementing solutions."13 Self-reflection, assessment, and a willingness to discuss pressing issues allowed our group to develop a culture of sharing and growing together as a community of supervisors. Ultimately, we gained the benefit of developing a trusted group of colleagues who are now resources for each other if/when issues or opportunities develop.
Impact
In January 2018, we implemented an informal, anonymous, and comment-based Google Form survey, sent to all CoP participants, to gauge the expectations and impacts of our Supervisor CoPs. Out of 15 potential respondents, from all cohorts, we had 12 members respond. While the survey focused on several factors we wanted to assess, themes related to shared learning, professional growth, and community were prevalent throughout all responses.
Respondents were asked why they decided to participate in the CoP and to specify expectations they had related to this experience. Overwhelmingly, respondents expressed interest in learning from others and potential growth as reasons for joining. One respondent envisioned "a safe community for discussion of confidential and potentially sensitive topics" and one that "would encourage one another to grow in our roles as supervisors." Others shared expectations of "growth and learning" while sharing their "practices as a supervisor as well as gleaming [sic] information from my fellow supervisors." Aside from one respondent's concerns regarding the inability to consistently schedule meetings throughout the months and a few comments on the book's contents, in general, respondents expressed that the expectations they had prior to participating in the CoP were mostly met: "My experience in my CoP has largely mirrored my expectations, though we have spent more time sharing and supporting one another than I had expected, which is wonderful." Statements like this and another stating, "All supervisors should have a community of practice. Supervisors are generally too isolated," helped demonstrate that there was a need in the libraries for a community of sharing for supervisors, and that the community we set out to create was achieved. In fact, since the creation of our initial reading group, each of those members has taken on additional supervision of staff. We cannot state whether this is a direct effect of the CoP or rather the selection of highly motivated and knowledgeable staff. Still, as one respondent stated, "I think we need to be doing more to foster connections between different units and libraries in the GLS [University of Wisconsin-Madison Libraries]. Being a supervisor/manager should include opportunities for lifelong learning!" The Libraries agree with this sentiment and continue to support the success of these CoP through having the Executive Committee host graduation celebrations, including a certificate of completion.
Future
As we look forward to building new cohorts, we are asking ourselves better assessment questions in an effort to continue to ensure these CoP are productive and provide opportunities for growth and reflection. Better utilization of tools to self-reflect and to ensure participants are working on improving at least one area of supervision each month could also improve the overall learning and experience.
As observations of the CoP continue, we are even more aware of the key role of the facilitator in helping the groups overcome the first and largest challenge: building trust. We are also aware of power dynamics between the group and the facilitators; it is important that facilitators recognize these dynamics and understand how to use their position to ebb and flow with the group. In the past, the facilitator has held a higher management position than the members of the group. The facilitator needs to understand how this power dynamic can affect people's willingness to comment on certain topics and that, at times, the group may be seeking advice from the facilitator(s) due to their role within the organization. Facilitators in the second iteration acknowledged they initially talked more than they wanted to in an effort to help the group build trust and community. Facilitators also need to understand the strengths that both introverts and extroverts bring to the group and how to use those strengths to support the group. Through this process, we learned of the necessity for skilled facilitators and the responsibility of administration to ensure that those who lead future groups receive some training in facilitation.
Moving forward, we are partnering facilitators from the first and second cohorts to launch four additional CoP, two for supervisors of students and two for supervisors of permanent staff.We aim to keep the participation of each group at approximately four group members and two facilitators.
The group also feels that Hakala-Ausperk's book is a useful tool and that it aligns with UW-Madison's leadership definition provided by the Leadership@UW Framework to guide discussions and learning. Additional supplemental readings will be chosen based on input from past participants and others to identify the most relevant articles.
Although this level of professional development is time consuming and labor intensive, participants expressed that it is a good use of their time for developing and implementing management and leadership skills and networks which will be essential to their future success. As one participant commented, "This was an excellent experience and one I would highly recommend to others. I think bringing in the campus Leadership Framework (Leadership@UW) tied in nicely to what we were doing and added more community to our effort." Support from libraries' administration and management also sent a clear message to participants that the Libraries were invested in them and supported their development opportunities. A comment on the survey clearly reflected this view: "... except to say that I'm so grateful to be a part of the organization that sees value in this type of group and that gives the attendees the space to be involved." CoP at the UW-Madison Libraries have made a positive impact on employees' abilities to learn and implement management and leadership skills while also building a network of trusted colleagues who can help them along their path of life-long learning.
Bibliography
Brown, John S., and Paul Duguid. "Organizational Learning and Communities-of-Practice: Toward a Unified View of Working, Learning, and Innovation." Organization Science 2, no. 1 (1991).
Cox, Andrew M. "What are Communities of Practice? A Comparative Review of Four Seminal Works." Journal of Information Science 31, no. 6 (2005).
Digitally Deprived Children in Europe
The COVID-19 pandemic has completely changed the need for internet connectivity and technological devices across the population, but especially among school-aged children. For a large proportion of pupils, access to a connected computer nowadays makes the difference between being able to keep up with their educational development and falling badly behind. This paper provides a detailed account of the digitally deprived children in Europe, according to the latest available wave of the European Union – Statistics on Income and Living Conditions (EU-SILC). We find that 5.4% of school-aged children in Europe are digitally deprived and that differences are large across countries. Children that cohabit with low-educated parents, in poverty or in severe material deprivation are those most affected.
Introduction
The COVID-19 pandemic has completely changed the need for internet connectivity and for technological devices across the population, but particularly among children. In an attempt to halt the spread of the virus, many countries moved part or all of their teaching online, 1 accelerating the process of adoption of digital technologies in education, such as blending digital tools with traditional teaching methods (Guallar Artal et al., 2021). Therefore, nowadays, for many children, having a computer connected to the internet makes the difference between being able to keep up with their education and falling badly behind. Issues of access are linked to the larger body of research surrounding digital exclusion, digital inequalities and the digital divide (Chen, 2013;DiMaggio & Hargittai, 2001;DiMaggio et al., 2004;Helsper, 2021; van Deursen & van Dijk, 2010;van Dijk, 2005). Other aspects during the COVID-19 pandemic have also been relevant -such as having the opportunity to stay socially connected with family, friends and peers at a time when physical distancing was imposed in most countries (Ellis et al., 2020;Ezpeleta et al., 2020).
But not all children in Europe have either a computer or an internet connection. As a matter of fact, our results based on data from the European Union -Statistics on Income and Living Conditions (EU-SILC), show that, on average, 5.4% of children in Europe are digitally deprived: that is, they live in a household that cannot afford to have a computer and/or live with adults who claim they cannot afford to have an internet connection for personal use at home. However, the differences across European countries are large. For example, in Iceland, only about 0.4% of children are digitally deprived, whereas in Romania and Bulgaria the figure soars to 23.1% and 20.8%, respectively.
Much of the work on digital inequality - or, more specifically, the digital divide - has focused on access (the 'first-level' digital divide), which was assumed to be largely resolved (Paus-Hasebrink et al., 2019; van Deursen & Helsper, 2015). According to van Deursen et al. (2011), 'the binary classification of access in terms of physical access (having a computer and an internet connection or not) is considered to have been superseded and replaced by a divide that is supposed to concentrate on a large number of more complex variables and relations' (p. 126). This prompted a move to focus research on digital use and digital competencies, understood as 'digital skills' and often referred to as the 'second-level' digital divide (Hargittai, 2002; Ronchi & Robinson, 2019). The shift from access to skills and usage was seen as necessary in order to reflect changes in society, where digital skills were becoming more important. However, the pandemic has shown us that the assumption that 'now everybody has access to and can use the internet' (van Deursen et al., 2011, p. 126) is inaccurate; instead, it has served to demonstrate that children still face inequalities in access, leading to digital exclusion - or what we call 'digital deprivation'. This paper answers the following questions: Who are the digitally deprived children of Europe? Where do they live? Is the risk of digital deprivation among children heterogeneous across Europe? What are the associated risk factors of children's digital deprivation? We consider six vulnerable groups: (i) those who live in a lone-parent household; (ii) those who live in a poor family; (iii) those living in severe material deprivation; (iv) those with parents of non-European immigrant origin; (v) those with low-educated parents; and (vi) those in a large family. To the best of our knowledge, no previous study has determined the level of digital deprivation among children in Europe, and nor has any identified the socio-economic characteristics that increase the likelihood of a child being digitally deprived. Understanding who the digitally deprived children are and what the existing differences in risk are across Europe is crucial if we are to design effective policies to combat the digital divide, and if we are to ensure equal educational opportunities for all children in Europe, irrespective of their socio-economic background.
We find that digital deprivation affects particularly children in severe material deprivation, those that cohabit with low-educated parents and those who live in poverty. However, the characteristics that describe a digitally deprived child are heterogeneous across countries (as is the strength of the association). For instance, while cohabiting with parents of non-European immigrant origin is positively associated with digital deprivation in most contexts, this is not the case in Eastern Europe or the Baltic countries, where this characteristic is negatively associated with the probability of being a digitally deprived child. Also, living in a large family is positively associated with digital deprivation in most contexts, apart from in the English-speaking countries.
The section that follows this introduction reviews the existing literature on digital exclusion. Section 3 introduces the data used and our definition of digital deprivation. Section 4 shows our results in terms of the big differences in the prevalence of digital deprivation across European countries. We also provide a detailed account of the individual and household socio-economic characteristics associated with children's digital deprivation. Finally, Section 5 summarizes the main findings and proposes some policy recommendations.
Literature Review
The development and increasing use of digital technologies has affected the lives of children and young people, and has, in turn, raised concerns about the emergence of new digital inequalities and the intensification of existing ones. These concerns have led to considerable work that has focused on digital exclusion or digital inequalities - often perceived in terms of a digital divide. The first report on the digital divide from the National Telecommunications and Information Administration (1999) focused on the 'have nots' in rural and urban America. It served as a foundation for the initial work on the digital divide. That work was rather technical, pursuing a binary understanding of access, with a focus on demographics (van Deursen et al., 2011). According to Robinson et al. (2020), this techno-deterministic approach to access was seen as oversimplistic, assuming as it did that access alone led to a reduction in inequalities (Katz & Aspden, 1997). This prompted further work on the digital divide and led to a new understanding of the concept, to different definitions and to a focus on different levels of the digital divide (DiMaggio & Hargittai, 2001; Gunkel, 2003; Hilbert, 2011; Selwyn, 2004; van Dijk, 2005, 2006). Despite the heterogeneity of the definitions and levels of the digital divide, for the purposes of this article we adhere to the definition provided by the Organisation for Economic Co-operation and Development (OECD, 2001a), which interprets the digital divide as 'the gap between individuals, households, businesses and geographic areas at different socio-economic levels with regard both to their opportunities to access information and communication technologies (ICTs) and to their use of the internet for a wide variety of activities'. 2 Several frameworks exist that seek to explain the dimensions and factors that influence the digital divide (DiMaggio & Hargittai, 2001; Helsper, 2021; van Dijk, 2006). What many of these frameworks have in common is that, in seeking to explain the digital divide, they incorporate one or more of a number of dimensions - material, motivational, skills and usage. Given our attention to digital deprivation, this article focuses on the material dimension. It is concerned with the root causes of deprivation, which are economic in nature (Pacione, 2009). While material deprivation is linked to the complex poverty problem (Pacione, 2009), it is defined as 'the enforced lack of a combination of items depicting material living conditions, such as housing conditions, possession of durables, and the capacity to afford basic needs' (Shamrova & Lampe, 2020, p. 2). The material dimension is closely linked to the idea of digital inclusion, which is broadly defined as different strategies designed to ensure that all people have equal access, opportunities and skills to benefit from digital technologies and systems (ITU, 2022). The material dimension, as we see it, is linked to digital deprivation, which is a socio-economic phenomenon that describes the gap in access, but that can also affect usage (OECD, 2000, 2001b). The broader thinking around digital inclusion and the digital divide has evolved over the years and is currently grouped into three categories: binary internet access (first-order digital divide), digital skills (second-order digital divide) and the outcomes of internet use (third-order digital divide) (Scheerder et al., 2017). It is the first-level divide that we are interested in, as it has implications for the other two levels.
The outbreak of the COVID-19 pandemic meant that digital exclusion due to lack of access has become more pronounced than ever before. Furthermore, with the growing importance of digital technology in our everyday lives, access to digital technology has become a basic need; and without that access, the gap between the economically rich and poor will most likely increase over time (Helsper, 2021). Nevertheless, 'for access to be a true indicator of inclusion in digital societies, it should be high quality as well as ubiquitous' (Helsper, 2021, p. 52). Moreover, of course, without access no one can use the internet (or other digital technology), and so the ability to develop skills, motivation, and general use is impeded. As van Dijk (2006) points out, the digital divide did start to attract growing attention; but from about 2005, interest began to wane, as the developed countries ensured that a large part of their population had access to electronic devices. However, the COVID-19 pandemic has shown that access issues are still important, and this has reignited the debate among scholars, who again focus attention on this aspect of the digital divide. The recent work includes research by Seifer (2020), Gibson et al. (2020) and Martins van Jaarsveld (2020), all of whom study the effects of the digital divide in the COVID-19 crisis among the elderly population. Among school-aged children, Rodicio-García et al. (2020) find that 14.8% of students in Spain recognize that they do not have enough resources to follow online education, while Stelitano et al. (2020) explain that students of colour who live in great poverty or in rural areas of the US report having had less access to the internet at home during school closures. Furthermore, research by Kuc-Czarnecka (2020) indicates that there are areas of Poland that are especially vulnerable to digital deprivation. It is still hard to predict how the post-COVID-19 world will look (Kufel, 2020), but the increasing pervasiveness of digital technology is a reality that all countries face (Kuc-Czarnecka, 2020).
Aside from the more recent academic research driven by the pandemic, earlier studies analysed several dimensions of the digital divide. For instance, the work of Longley and Singleton (2009) matched the 2004 Index of Multiple Deprivation (IMD) with a classification of ICT usage in the United Kingdom. They suggested that the lack of digital engagement was linked to high levels of material deprivation. In this respect, the authors developed a cross-classification of material deprivation and ICT usage. Yelland and Neal (2013) went beyond the classical digital-divide dichotomy between the 'haves' and the 'have nots' and looked at how the lives of families in low socio-economic areas of Australia improved as a result of their being given a computer and internet access. Students noted that they could complete school work and communicate with friends, while parents saw an increase in all family members' confidence and active participation in their communities. 3 Gordo (2003) found similar results and emphasized that closing the digital gap could benefit people who live in poverty.
Recent research by the International Computer and Information Literacy Study (ICILS) shows that, on average, students with a better socio-economic background have significantly higher Computer and Information Literacy (CIL) scores (Fraillon et al., 2020). Students' CIL scores are shown to be associated with access to computers at home and years of experience using computers, according to the authors. In all participating countries, students with two or more computers at home have statistically significantly higher CIL scores than students with one or zero computers at home (Fraillon et al., 2020). Demographic and socio-economic factors are drivers of digital exclusion in terms of access (Sanz & Turlea, 2012). Research has also shown that young people with better access to ICT at home or at school, and those with a more positive attitude towards ICT, have greater digital skills (Haddon et al., 2020). Harris et al. (2017) studied the information technology (IT) usage of 1,351 Australian children aged between 6 and 17 years. In their research, they found socio-economic status to be a determinant of how children use IT. In high socio-economic neighbourhoods, children were involved in IT activities, reading, playing musical instruments and engaging in physical activities. By contrast, in low socio-economic neighbourhoods, children were more exposed to TV, electronic games, mobile phones and non-academic computer use at home. However, the authors did not address the digital divide in terms of ICT access, as they considered that the digital divide lay in how (not whether) children used devices.
Additionally, research by Livingstone et al. (2005) and Livingstone and Helsper (2007) examined inequalities in internet access and usage among children aged 9-19, using the UK Children Go Online survey. The results of this research showed that more deprived regions had lower levels of internet access. The same was true of children with disabilities, who also had lower levels of internet access. Ting-Feng et al. (2014) explored the digital divide among students with learning disabilities, and found that while there was no disadvantage in terms of internet access, there was in terms of digital literacy. Similar results were reported by Vicente and López (2010), who showed that people with disabilities are less confident about their online activities and skills. Jackson et al. (2008) and Judge et al. (2006) found a positive relationship between narrowing the digital divide, ICT use and academic performance. Finally, Chinn and Fairlie (2004) studied the determinants of computer and internet use in high-income and low-income countries, including a wide range of economic, demographic and policy factors. They found that the global digital divide was mainly explained by income disparities, communication infrastructures, access to electricity, the institutional environment and demographic characteristics (James, 2008). However, few studies have approached the digital divide from a cross-country perspective, as we do in this paper.
Importantly, none of the literature reviewed on digital exclusion uses up-to-date data to study the prevalence of digital deprivation across European countries and over time. Also, we have been unable to find any recent studies that tackle the socio-economic and demographic characteristics that define the phenomenon in Europe. Thus, we aim to fill this gap in the literature by providing a recent detailed account of who the digitally deprived children in Europe are and what socio-economic characteristics they share.
Data
The data set used in this paper is the EU-SILC in its cross-sectional form, provided to researchers by Eurostat. This survey aims to collect comparable microdata on all aspects of Europeans' living conditions, including (among other things) on income, material deprivation, labour market, demographic and educational characteristics, childcare and housing costs. The data is collected by national statistical bodies, following a common framework. However, a degree of flexibility is allowed: the information can be either extracted from registers or collected from interviews, using five possible modalities - face-to-face interview (PAPI or CAPI), telephone interview (CATI), self-administered by respondent or proxy interview. Generally, Eurostat gives priority to face-to-face personal interviews (GESIS, 2022). Most of the analysis focuses on data relative to 2019 as it is the last wave that allows a comparative analysis of most European countries. 4 Data for Iceland and the UK is not provided, and so we use data relative to 2018 for these countries. The EU-SILC has several advantages for the purposes of our research: (i) it allows a comparative analysis across Europe, with evidence for 32 countries; (ii) it provides very detailed information on the socio-economic background of children, as it includes data on household income, parental characteristics (such as labour market attachment), household structure, material deprivation, etc.; and (iii) it allows us to track changes over time, as it covers a relatively long period - most countries have participated since 2004.
The information relating to digital deprivation is contained in two variables. HS090 collects, at the household level, the answers to the question 'Does your household have a computer?' Household respondents can answer 'yes' or 'no'. If the answer is negative, the question continues as follows: 'If you do not have a computer: (a) Would you like to have it but cannot afford it or, (b) Do you not have one for other reasons, e.g. you do not want or need it?' 5 PD080 collects, at the individual level, the answers to the question 'Do you have an internet connection for personal use when needed?' In this case, all adult members in the household can answer 'yes' or 'no'. And, again, if the answer is negative, they are asked whether it is because of unaffordability or for some other reasons. The data documentation clarifies that such internet access can be via smartphone, other wireless handheld device (e.g. a tablet), video games console, laptop, desktop computer or TV. 6 We define as 'digitally deprived' those children that either live in a household that cannot afford to have a computer and/or live with adults who cannot afford an internet connection. 7 Importantly, there are other databases that collect a wider array of digital indicators; but in such cases we do not have as detailed information on the socio-economic background of children as the EU-SILC provides. Furthermore, this is the only data set that we know of that records enforced lack; thus, it is clearly stated that the members of the household would like to have a given item, but cannot afford it (Mack & Lansley, 1985; Marlier et al., 2007). 8
5 According to the data documentation, 'possessing the item does not necessarily imply ownership: the item may be rented, leased, provided on loan or shared with other households in (e.g.) a complex apartment and not necessarily owned. If the item is shared between households, the answer is YES if there is adequate/easy access (i.e. household can use the durable whenever it wants) and NO otherwise … A computer includes a portable computer or a desktop computer, but does not include machines dedicated to video games that do not have any broader functionality. If a computer is provided ONLY for work purposes, this does not count as possessing the item' (Eurostat, 2020, p. 196).
6 In the data documentation, more detail is provided: 'Example of internet activities for personal use: social networking, sending/receiving emails, using services related to travel and accommodation, creating web pages, blogs, internet banking, reading or downloading online music, video, news, etc., looking for information, telephoning or making video calls, buying/selling goods or services, taking part in online consultations or voting on civic or political issues, etc. The household member is considered to have internet connection for personal use at home only if all the needs for personal use he/she are fully fulfilled by this connection.' (Eurostat, 2020, p. 345).
7 Guio et al. (2012) considered that a household was deprived only if it lacked both a computer and an internet connection. Guio et al. (2017) only accounted for the lack of an internet connection under the argument that many individuals can now access the internet using other devices such as smartphones or tablets. While this is true, in this study, we want to consider both indicators given the large number of European children that now receive part or all their teaching online.
8 Such information allows us to disregard families that, because of their life style, do not want to have an internet connection and/or a computer.
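The deprivation flag defined above can be sketched in a few lines of pandas. This is a minimal, illustrative implementation: the column names (other than the EU-SILC variables HS090 and PD080 mentioned above) and the rule that all cohabiting adults must report enforced lack of an internet connection are assumptions for the sketch, not the authors' exact coding.

```python
import pandas as pd

def flag_digitally_deprived(children: pd.DataFrame,
                            households: pd.DataFrame,
                            adults: pd.DataFrame) -> pd.Series:
    """Flag children as digitally deprived: the household cannot afford a
    computer AND/OR the cohabiting adults cannot afford an internet connection.

    Assumed (illustrative) columns:
      households: hh_id, cannot_afford_computer  (bool, from the HS090 follow-up)
      adults:     hh_id, cannot_afford_internet  (bool, from the PD080 follow-up)
      children:   child_id, hh_id
    """
    # Household-level enforced lack of a computer
    hh_computer = households.set_index("hh_id")["cannot_afford_computer"]

    # Modelling choice for the sketch: the child's household lacks internet
    # only if every adult in it reports enforced lack.
    hh_internet = adults.groupby("hh_id")["cannot_afford_internet"].all()

    merged = children.join(hh_computer, on="hh_id").join(hh_internet, on="hh_id")
    deprived = (merged["cannot_afford_computer"].fillna(False).astype(bool)
                | merged["cannot_afford_internet"].fillna(False).astype(bool))
    return deprived.rename("digitally_deprived")
```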
Our sample considers children from above the age of 5 and below the age of 17, thus covering the period of compulsory education in the vast majority of countries analysed. We consider children who live with at least one parent. 9 As Table 1 shows, the average age of the children was 11 years, and the parents were on average aged 42. Also, 48% of the sample were girls. Mean household size was 4.22 members, and 19% of children lived in a single-parent household. 10 Furthermore, 20% of children in the sample were poor. Following the European Commission guidelines for the measurement of poverty in Europe, a household is defined as being in poverty if the equivalent household income is below 60% of the median of the same distribution. The modified OECD equivalence scale that gives a weight of 1 to the first adult, 0.5 to any other adults in the household and 0.3 to children below the age of 14 is used. Also, 6% of children live in 'severe material deprivation', according to the definition of the European Commission. That is, out of nine possible items, they lack at least four. 11 Finally, 15.2% of the children have at least one parent of non-European immigrant origin; 13.1% cohabit with parents who had not acquired education above the level of primary school or compulsory lower secondary school (ISCED 2011, level 0-2); and 24.2% live in a large family, with at least three children under the age of 18 in the household. 12,13
12 In single-parent households, we only consider the education of the mother or the father cohabiting with the child.
13 We do not distinguish between urban and rural areas, because the information is missing for 10.01% of the sample.
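A worked sketch of the poverty indicator described above, using the modified OECD equivalence scale as stated in the text. Column names are illustrative, and the person-level survey weights used in official EU-SILC estimates are omitted for brevity.

```python
import pandas as pd

def modified_oecd_scale(n_adults: int, n_children_under_14: int) -> float:
    """Weight of 1 for the first adult, 0.5 for each additional adult,
    0.3 for each child below the age of 14."""
    if n_adults < 1:
        raise ValueError("household must contain at least one adult")
    return 1.0 + 0.5 * (n_adults - 1) + 0.3 * n_children_under_14

def poverty_flags(households: pd.DataFrame) -> pd.Series:
    """Flag households whose equivalised income falls below 60% of the
    median equivalised income. Assumed (illustrative) columns:
    disposable_income, n_adults, n_children_under_14."""
    scale = households.apply(
        lambda r: modified_oecd_scale(r["n_adults"], r["n_children_under_14"]),
        axis=1)
    equivalised = households["disposable_income"] / scale
    threshold = 0.6 * equivalised.median()   # unweighted median, for the sketch
    return (equivalised < threshold).rename("at_risk_of_poverty")
```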
Digital Deprivation in Europe
This section presents the main results of our study. First, we discuss our findings regarding the prevalence of child digital deprivation across Europe and how it has changed over time. Then we present the results regarding the socio-economic characteristics that are associated with child digital deprivation both at the European level and by country cluster.
Children's Digital Deprivation across Countries and Over Time
As mentioned above, 5.4% of school-aged children in Europe are digitally deprived, according to the latest data available -albeit the differences across countries are very large. Figure 1 shows the percentage of children who live in a household that cannot afford to have a computer and/or cohabit with adults who cannot afford to have an internet connection. The choropleth map shows two country clusters with a certain North-South divide. On the one hand, in Northern and Continental Europe, as well as in the Baltic countries and the UK, the percentages of digitally deprived children are very low -as low as 0.4% in Iceland, 0.7% in Estonia and 1.1% in Norway. None of the countries in this cluster have percentages above 3%. On the other hand, the prevalence of the phenomenon is much higher in the Mediterranean countries, and particularly in Eastern Europe (as indicated by darker colours on the map). In Romania, 23.1% of children are digitally deprived, and in Bulgaria 20.8%. The percentages are not as high in Hungary or Serbia, but more than 1 child in 10 is faced with the problem. Among the Mediterranean countries, it is in Spain that the percentage is highest (8.8%). 14 Figure 2a and b show the relative importance of the two indicators used: Fig. 2a refers to computer unaffordability and Fig. 2b refers to internet connection unaffordability. The maps indicate that, of the two items, it is the inability to have a computer at home which mostly drives the overall results. In this case, it should be noted that again a certain North-South divide emerges, with a greater prevalence in Mediterranean and Eastern European countries. On the positive side, there are several countries where no household with children reports being unable to afford an internet connection at home -see, among others, Finland, Norway, Iceland and Austria.
Despite the high figures for digital deprivation among school-aged children in Europe, an analysis of the trend over the past five years indicates that the great majority of countries - and particularly those most affected by the problem - have moved in the right direction. Figure 3 shows the percentage of children who were digitally deprived in 2015 and in 2019, with the arrow indicating the change in direction. For example, Romania has reduced the number of children affected by digital deprivation from 36.5% to 23.1% in this five-year period. In the case of Bulgaria, the change has been from 27.9% to 20.8%. Important advances have also taken place in Portugal, Greece and Italy, and to a smaller extent also in Serbia, Hungary and Spain. For the countries at the bottom of the figure, the change is not significant - largely because the problem was negligible in 2015.
14 Figure 6 in the Appendix shows the percentage of digitally deprived children in Europe in 2020 for those countries with available data. The change between 2019 and 2020 is not statistically significant in most countries, except for Belgium, Portugal, Spain and Sweden (where we can document a slight decrease in the percentage of digitally deprived children) and Slovenia (where a small increase is to be found).
Who are the Digitally Deprived Children in Europe?
The previous section showed the great heterogeneity of digital deprivation prevalence among school-aged children in Europe, and how it has changed in the past five years. In this section, we aim to identify the socio-economic and demographic characteristics that define a digitally deprived child in Europe. Is it parental education? The number of children in the household? Or the fact that the family receives income below the poverty line? In our analysis, we consider six household characteristics that could potentially be associated with digital deprivation: the child lives (1) in a single-parent household; (2) in a poor family; (3) in a severely materially deprived household; (4) with at least one parent of non-European origin; (5) with parents that have at most lower secondary education; and (6) with at least two other siblings under the age of 18. In order to be able to work with a larger sample, and given that some characteristics relate to minorities, the results refer to the last five waves of data - that is, the period between 2015 and 2019. 15 Furthermore, and on account of the small number of observations that define the digitally deprived population in some countries, as well as presenting the results at the European level, we show them by country cluster. 16 Our systematic exploration of the demographic and socio-economic characteristics associated with digital deprivation is based on logistic regressions, which we present in Table 2 and in Figs. 4 and 5 in the form of odds ratios. 17 Our dependent variable takes value 1 if the child is digitally deprived and 0 otherwise. Our parameters of interest are those associated with the six at-risk groups mentioned above. Control variables include individual characteristics of the child (gender, age and age squared), household size, parents' average age (and its square), year dummies (to control for changes over time) and country dummies (to control for time-invariant country characteristics). 18 Standard errors are robust and clustered at the country level.
Fig. 1 Percentage of digitally deprived school-aged children (6-16), Europe, 2019. Note: Data for the UK and Iceland refers to 2018. In Austria, Denmark, Estonia, Finland, Germany, Iceland, Luxembourg, Malta, the Netherlands, Norway and Slovenia, fewer than 30 observations define the digitally deprived population, and therefore the results should be interpreted with caution. Source: Authors' computation, using data from EU-SILC, 2019 (released November 2021)
15 Table 4 in the Appendix presents summary statistics for this five-year sample.
16 We consider six country clusters: Southern Europe (Spain, Greece, Italy, Portugal, Cyprus and Malta), Northern Europe (Finland, Sweden, Norway, Iceland and Denmark), Eastern Europe (Hungary, Poland, the Czech Republic, Romania, Serbia, Croatia, Slovenia, Slovakia and Bulgaria), Continental Europe (Belgium, Austria, Switzerland, Germany, France, Luxembourg and the Netherlands), the Anglophone countries (the United Kingdom and Ireland) and the Baltic area (Estonia, Lithuania and Latvia).
17 An odds ratio above the value of 1 implies that a given characteristic is positively associated with digital deprivation, while a value below 1 implies a negative association.
18 In single-parent households, average age refers to the age of the parent present in the household.
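A minimal sketch of the regression setup described above, using statsmodels. Variable names are illustrative, and details such as the handling of weights are assumptions of the sketch rather than the authors' exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def deprivation_odds_ratios(df: pd.DataFrame) -> pd.DataFrame:
    """Logit of the digital-deprivation indicator on the six at-risk
    characteristics plus controls, with country-clustered standard errors,
    reported as odds ratios. Column names are illustrative."""
    formula = (
        "digitally_deprived ~ single_parent + poor + severe_material_deprivation"
        " + non_eu_parent + low_educated_parents + large_family"
        " + female + age + I(age**2) + hh_size"
        " + parents_age + I(parents_age**2) + C(year) + C(country)"
    )
    model = smf.logit(formula, data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["country"]}, disp=False)

    ci = model.conf_int()
    return pd.DataFrame({
        "odds_ratio": np.exp(model.params),  # exponentiate coefficients
        "ci_low": np.exp(ci[0]),
        "ci_high": np.exp(ci[1]),
        "p_value": model.pvalues,
    })
```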
As shown in the first column of Table 2 and in Fig. 4, at the European level one characteristic clearly stands out as being very closely linked to children's digital deprivation: living in severe material deprivation. On average across Europe, that increases the risk of suffering digital deprivation by a factor of 6.9 among school-aged children. Being poor and having low-educated parents are also relevant factors - these variables multiply the risk of being digitally deprived by a factor of 2.9 and 3.3, respectively. All other risk factors considered are positive (albeit at a lower level) and statistically significant at 99%. As for the control variables, we find no statistically significant differences between boys and girls, while the risk of being digitally deprived increases with age and decreases with parental age.
Next, we move to the results by country cluster (see Fig. 5). With very few exceptions (remarked on below), we find that the large bulk of the risk factors considered are positively linked to digital deprivation -though the strength of the association varies by context. In all groups of countries, the characteristic most strongly associated with digital deprivation is living in a household with severe material deprivation. For example, in Eastern Europe it multiplies the probability of being digitally deprived by 7.6; in the Baltic countries by 7.2 and in Continental Europe by 4.4. Cohabiting with low-educated parents is of particular importance in Eastern Europe, the Mediterranean countries and the Baltic area: in all those clusters, the probability of being digitally deprived increases by at least a factor of 3.5. Poverty is also a strong determinant of digital deprivation among school-aged children, though with a similar effect in all the country clusters analysed. With the sole exception of Northern Europe (which shows high risk), living in a large family has a more muted effect, and does not differ statistically from zero in the English-speaking countries. The same is true for living in a single-parent household, with relatively low risk in Southern and Eastern Europe and in the Baltic countries. In this last case, the associated odds ratio is not precisely estimated. Finally, and interestingly, having parents of non-European immigrant origin reduces the likelihood of digital deprivation in Eastern Europe and the Baltic area, while it increases the probability in all other contexts.
For the interested reader, Table 5 in the Appendix presents qualitative results from similar regressions as those in Table 2, but at country level, with the objective of providing a more nuanced picture of the determinants of digital deprivation across Europe. 19 In this case, we only consider countries where the digitally deprived sample of children is above 150 observations for the five-year period. The main takeaway from these results is that severe material deprivation, poverty and low parental education are positively associated with digital deprivation in all the countries analysed, with the results statistically significant at 99% confidence level in all cases. The degree of association with digital deprivation for the rest of the characteristics varies much more, with the results less precisely estimated.
Conclusions
This paper provides a detailed account of the digitally deprived children in Europe. We use the cross-sectional form of the latest wave available of the European Union - Statistics on Income and Living Conditions (EU-SILC), which refers to 2019. To the best of our knowledge, this is the only database that records enforced lack, and it allows us to identify households that are digitally deprived because of unaffordability. We consider lack of access to a computer and lack of access to an internet connection at home. We find that 1 child in 20 in Europe is digitally deprived, with substantial differences from country to country. For example, in Romania, 3 children in 10 live in digital deprivation. In Bulgaria, the figure is 2 in 10. Thus, we document an important problem of access to the tools necessary for education in today's Europe: for many children, having a computer connected to the internet makes all the difference between keeping up with their education and falling badly behind. As a result, inequalities are likely to be exacerbated.
Of the two items considered (access to a computer and internet access), inability to afford a computer is far more prevalent than inability to afford an internet connection. The phenomenon is particularly widespread in Southern and Eastern European countries, and it particularly affects children who live with low-educated parents, in a poor household and/or in severe material deprivation. Nonetheless, the heterogeneity of the characteristics that describe a digitally deprived child across countries is worth noting. For example, in Eastern Europe, having parents of non-European immigrant origin is not associated with a higher probability of being a digitally deprived child, whereas in the remaining country clusters it is.
Computer and internet access can benefit children. Those who have access can be said to have a better opportunity to develop their interests, confidence and skills; as a result, these children and young people can benefit more fully from digital technologies, because they have both a better understanding of them and the opportunity to develop the digital competences required in today's world. Digital exclusion can impose a burden on children in terms of inclusion and participation in the online environment. Access to ICT can also be important in terms of mental health, which has been found to be worse among those children who experience problems of digital exclusion (Metherell et al., 2021).
Our study highlights the fact that access to a computer and to an internet connection are not guaranteed for all European children. It provides new findings on who these children are and where they live. In this way, we aim to contribute to the creation of evidence-based policies that can play a part in reducing the existing digital inequalities in access. Current and future policy efforts should target and support children who share the socio-economic characteristics associated with digital deprivation. If we want to achieve equality of opportunity in education, we should begin by providing equal access to education, which now implies having access to a computer and internet connection. Furthermore, schools, as a part of communities, carry an element of continuity; both in time of crisis and in the future, there exists the challenge of how to secure educational activities to support learning and continuity in children's lives. Therefore, it is crucial for children to have access not only to the internet, but also to the digital tools that are essential for their education and that can further the development of their digital skills. Digital deprivation limits not only children's access to information, but also their opportunity to develop the digital skills they need in the 21st century -technical, information, communication, collaborative, creative, critical-thinking and problem-solving (van Laar et al., 2017). Furthermore, van Laar et al. (2017) remind us that 'the dynamic changes in the types of jobs demanded by the knowledge society pose serious challenges to educational systems, as they are currently asked to prepare young people for jobs that may not yet exist' (p. 584). Certainly, ensuring that children have access to digital technology in order to start developing these crucial digital skills is an important first step.
Finally, our study is limited by the information included in the EU-SILC database regarding new technologies. To improve the monitoring of the situation of children, both during and after the COVID-19 pandemic, the database should incorporate more variables, including (but not limited to) whether the household has more than one computer available; whether children own or can use any other devices; internet speed; and the cost of internet access. It is of the utmost importance to learn the extent to which devices in the household are truly available to children; how frequently those devices are used and for how long; the rules governing sharing of the devices; and even children's own satisfaction with the use of technology. Certainly, the increasing prominence of digital technology, which is even more evident following the COVID-19 pandemic, means that it will be ever more important to identify the extent and depth of digital deprivation across Europe. This knowledge will allow governments to target areas of need and to develop appropriate legislation to combat deepening societal inequalities.
Table 5 Results of the logistic regressions (odds ratios) for the probability of being digitally deprived at country level, Europe, 2015-2019. Note: ***, ** and * represent statistical significance at 1%, 5% and 10%, respectively. Data for the UK and Iceland refers to the period 2015 to 2018. We do not consider countries with fewer than 150 observations for the sample of children in digital deprivation. Source: Authors' computation, using data from EU-SILC, 2015-2019.
Modeling mangrove responses to multi-decadal climate change and anthropogenic impacts using a long-term time series of satellite imagery
Determining the effects of simultaneously operating climate change and anthropogenic impacts on mangroves remains a challenge for the Persian Gulf and the Gulf of Oman. The objective of this study was to quantify the spatial extents and biomass of dwarf and tall mangroves in response to drought intensities (Standardized Precipitation Index, SPI), sea-level rise (EC), land use/land cover change (LULC), and surface runoff and freshwater inflow to upstream mangrove catchments along the coasts of southern Iran over a 35-year period (1986–2020). The study drew on a long-term time series of 105 Landsat satellite images, maritime data, rainfall data, field surveys, and models to quantify independent variables (i.e., the WetSpass-M model to quantify surface runoff), and on mixed-model analysis to determine the important drivers of change. Although mixed-model analysis indicated that sea-level rise was the main climate change driver of the development of spatial extents and biomass of both tall and dwarf mangroves, its influence was embedded in the larger temporal climate context of the region. Our findings show that spatial extents and biomass development are neither tightly coupled nor developed temporally or directionally synchronously in dwarf and tall mangroves. This study also shows a precipitous decline in rainfall amounts, SPI, and freshwater runoff volumes starting in 1998. The ensuing long-term drought, which is still ongoing but has decreased slightly in intensity in the last few years, shaped the overall spatiotemporal development pattern of the mangrove structures over the entire study period. The correlation between biomass and the independent variables was similar for dwarf and tall mangroves and was positive with SPI and runoff amounts and negative with EC in all three study sites.
Introduction
Climate change and land use/land cover changes (LULC) can disturb hydrological and sedimentary regimes that largely control mangrove structure and biomass (Jia et al., 2018). Climate change, reflected in increased air temperatures and intensities of storm events as well as altered patterns in rainfall, drought occurrences, and ocean circulation, has led to a decline in the area, structure, production potential, health, survival, and functioning of mangrove ecosystems around the world (Gilman et al., 2008; Mafi-Gholami et al., 2020a, b). Rising sea levels associated with climate change pose a particularly grave hazard to the long-term survival of mangroves if they cannot expand landward (Abdollahi et al., 2017; Mafi-Gholami et al., 2019, 2020a). LULC changes can affect the timing and balance of monthly and seasonal surface and subsurface flow regimes and volumes of runoff entering coastal areas, amplifying climate change effects on mangroves (Eslami-Andargoli et al., 2010). Synergies among simultaneously occurring multiple stressors and disturbances (Gilman et al., 2008; Lewis et al., 2011) can severely decrease the viability of mangrove seedlings and the structure, growth, and net production of mature mangroves, resulting in diminished spatial extents and ecological functions (Eslami-Andargoli et al., 2010; Zhang et al., 2012; Hutchison et al., 2014; Asbridge et al., 2016; Godoy et al., 2018; Mafi-Gholami et al., 2020b).
Quantifying the effects of reduced runoff volumes on the structure and production of mangroves requires detailed long-term regional records of river discharge flows into coastal environments, which often do not exist in much of the developing and underdeveloped world (Eslami-Andargoli et al., 2010;Asbridge et al., 2016;Hu et al., 2018;Gao et al., 2019). To derive a time series of surface runoff from long-term rainfall data and LULC changes in upstream mangrove catchments, remote sensing may be a promising approach, offering comprehensive regional coverage and high temporal resolution for monitoring both mangroves and climate change over a long time (Hu et al., 2018;Pirasteh et al., 2020;Finger Dennis et al., 2021;Wang et al., 2021;Thomas et al., 2021). Satellite and aerial imagery are more accurate, cost-effective, and efficient than in-situ measurements for detecting changes in the extent and biomass of mangroves and LULC changes that occur at large scales and over long periods of time (Zhang et al., 2017;Bihamta et al., 2020;Lucas et al., 2020;Lymburner et al., 2020;De Jong et al., 2021;Finger Dennis et al., 2021).
In this study, we used long-term rainfall and sea-level data and long-term satellite imagery combined with extensive field surveys to prepare a 35-year time series (1986-2020) of drought intensity, sea-level rise, surface runoff values, and LULC of upstream catchments to model the development of large-scale spatial extents and biomass of some of the remaining dwarf and tall mangroves in the Middle East. Previous studies documented a significant decline in rainfall amounts in 1998, switching from wet to dry conditions in the region, and average annual rates of sea-level rise of 3.4-4.8 mm yr−1 on the Persian Gulf and 10.8 mm yr−1 on the Gulf of Oman, with further increases in sea levels and temperatures and decreases in rainfall forecast over the coming decades (Mafi-Gholami et al., 2017; Mafi-Gholami et al., 2020a, b). It is unclear, however, to what extent recent LULC changes due to upstream development of infrastructures and agricultural lands have further reduced freshwater volume inputs to the coasts, and thus exacerbated adverse climate change effects on the remaining mangroves (Etemadi et al., 2016).
To support adaptation planning for climate change impacts on mangroves worldwide (Ellison, 2015), we present a novel, large-scale, spatially explicit approach for simultaneously quantifying climate change and anthropogenic impacts on a regional scale that can be applied in developed and developing countries. The specific objective of this study was to quantify the relationship of the spatial extents and biomass of dwarf and tall Avicennia marina mangroves to rising sea levels, drought occurrences and surface runoff volumes in the Persian Gulf and the Gulf of Oman region over the past 35 years. We also quantified to what extent rainfall amounts and human-induced LULC changes in upstream catchments have impacted surface runoff volumes. Because pure mangroves form dwarf and tall structures that occupy different locations along the land-sea continuum, we hypothesized that the combined climatic and anthropogenic effects differentially affected the two mangrove structures.
Study area
The study area includes three pure mangrove sites on the northern coasts of the Persian Gulf from 25°34′13″ N to 27°10′54″ N and the Gulf of Oman from 58°34′07″ E to 55°22′06″ E, encompassing a total area of 10025.55 ha (Fig. 1). Long-term climatic data of the study area show an average annual rainfall of 90 mm, an average annual temperature of 27.2 °C with minimum and maximum temperatures of 8.9 °C to 22.4 °C (January) and 30.1 °C to 48 °C (July), and an average annual relative humidity of 35%.
Methodology framework and data sources
This study combined different data sources to relate mangrove responses to climate change and anthropogenic impacts over the most recent 35-year (1986-2020) period (Supplemental Fig. 1). Field data sampled in 2017 provided the basis for estimating mangrove biomass that was regressed against NDVI values derived from Landsat images. Estimating the biomass change point that separates sample plots of dwarf from tall mangroves and applying the corresponding NDVI threshold to the time series of Landsat images produced maps depicting spatial extents and biomass of both mangrove structures. The accuracies of the constructed maps were evaluated and checked against aerial photographs.
Elevation capital (EC), drought intensity and surface runoff in each catchment area
Sea-level rise is related to the EC and can be computed from Eq. (1), where R_sl is the absolute sea-level rise rate (3.3 mm yr−1) and R_sub is the subsidence/uplift rate (−4 mm yr−1).
Annual drought intensity over time was quantified using September rainfall amounts recorded in 31 synoptic rainfall stations distributed throughout the study sites (Iranian Meteorological Organization, IRMO). Drought intensity is typically expressed by the Standardized Precipitation Index (SPI), which reflects long-term rainfall amounts, drought occurrences, and their impacts on natural ecosystems (Wang et al., 2019). An interpolated, spatially explicit SPI raster map was produced for each upstream catchment area by applying the inverse distance weighting (IDW) model in the ArcGIS software (version 10.7.1) to the annual SPI values of all stations.
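The SPI is commonly computed by fitting a gamma distribution to the rainfall record and mapping cumulative probabilities onto standard-normal quantiles. The sketch below follows that common recipe for a single station and is not necessarily the authors' exact implementation.

```python
import numpy as np
from scipy import stats

def spi(rainfall: np.ndarray) -> np.ndarray:
    """Standardized Precipitation Index for a 1-D series of rainfall totals
    (e.g. one value per year) at one station."""
    rainfall = np.asarray(rainfall, dtype=float)
    nonzero = rainfall[rainfall > 0]

    # Probability of a zero-rainfall record (mixed-distribution correction)
    q = float((rainfall == 0).mean())

    # Fit a two-parameter gamma distribution (location fixed at 0)
    shape, _, scale = stats.gamma.fit(nonzero, floc=0)

    # Cumulative probability of each record, accounting for the zero mass
    cdf = np.where(rainfall > 0,
                   q + (1 - q) * stats.gamma.cdf(rainfall, shape, loc=0, scale=scale),
                   q)

    # Transform to standard-normal quantiles; clip to avoid infinities
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))
```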
Due to the limited availability (7 years) of river discharge data into upstream catchments, no direct long-term estimate of freshwater volumes entering the coastal areas could be obtained. Instead, the amount of annual total surface runoff (m³/year) in each upstream catchment was computed with Eq. (2) (Abdollahi et al., 2017) using the WetSpass-M model (Hydrological and Hydraulic Engineering Organization, University of Brussels):

$$S_{\mathrm{annual}} = \sum_{m=1}^{12} S_{rm}, \qquad S_{rm} = C_{sr}\,(P_m - I_m)\,C_h \tag{2}$$

where ∑ is the sum over 12 months, S_rm is monthly surface runoff (mm), C_sr is an actual runoff coefficient, P_m is monthly rainfall (mm), I_m is monthly interception (mm), and C_h is a dimensionless coefficient that represents soil moisture conditions scaled between 0 and 1.
Computing the components of Eq. (2) relied on a time series of monthly rainfall data and on values of various coefficients that were adopted from previous studies (De Groen and Savenije, 2006; Pistocchi et al., 2008; FRWMO, 2017; IWRMC, 2019; IRIMO, 2019; LP DAAC, 2020) and then optimized for the study area during the model calibration stage (see supplement for more details). LULC changes, quantified from satellite images, were incorporated as changes in the runoff coefficient of each upstream catchment.
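A small sketch of the annual runoff aggregation in Eq. (2) as reconstructed above; the non-negativity clipping and the note on unit conversion are assumptions added for illustration.

```python
import numpy as np

def annual_surface_runoff(p_m, i_m, c_sr: float, c_h) -> float:
    """Annual surface runoff (mm) for one catchment:
    S_annual = sum over 12 months of C_sr * (P_m - I_m) * C_h.

    p_m  : monthly rainfall (mm), length 12
    i_m  : monthly interception (mm), length 12
    c_sr : actual runoff coefficient (reflects the catchment's LULC)
    c_h  : monthly soil-moisture coefficient in [0, 1], length 12
    Converting the result to m³/year would additionally require the
    catchment area, which is not handled here.
    """
    p_m, i_m, c_h = (np.asarray(a, dtype=float) for a in (p_m, i_m, c_h))
    # Monthly runoff, never allowed to go negative when interception > rainfall
    s_rm = c_sr * np.clip(p_m - i_m, 0.0, None) * c_h
    return float(s_rm.sum())
```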
The Z-statistic of the Mann–Kendall (MK) test that is embedded in the MAKESENSE 1.0 software (Zhang et al., 2012) was used to determine the change point year in the 35-year time series of SPI and runoff values in each study site. The MK test statistic is calculated from Eq. (3):

S = Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} sgn(x_j − x_i), (3)

where sgn(θ) equals 1, 0 or −1 for θ > 0, θ = 0 and θ < 0, respectively, and n is the sample size.
The statistic S is approximately normally distributed when n ≥ 8, with mean (E) and variance (V) (Eq. (4) and Eq. (5)):

E(S) = 0, (4)
V(S) = [n(n − 1)(2n + 5) − Σ_i t_i(t_i − 1)(2t_i + 5)] / 18, (5)

where t_i is the number of ties of the event i. The standardized statistic (Z) for a one-tailed test is (Eq. (6)):

Z = (S − 1)/√V(S) for S > 0, Z = 0 for S = 0, and Z = (S + 1)/√V(S) for S < 0. (6)

At the 5% significance level, the null hypothesis of no trend is rejected if |Z| > 1.96. Positive values of Z indicate an increasing trend and negative values denote a decreasing trend.
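A minimal Python implementation of the Mann–Kendall statistics of Eqs. (3)–(6) is sketched below for a synthetic series; the study itself used the MAKESENSE 1.0 implementation.

```python
import numpy as np

def mann_kendall_z(x):
    """Mann-Kendall trend test: return S, Var(S) with tie correction, and Z."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S = sum over all pairs of sign(x_j - x_i), j > i  (Eq. 3)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    # Variance with correction for tied groups (Eq. 5)
    _, counts = np.unique(x, return_counts=True)
    ties = counts[counts > 1]
    var_s = (n * (n - 1) * (2 * n + 5) - np.sum(ties * (ties - 1) * (2 * ties + 5))) / 18.0
    # Standardized statistic (Eq. 6)
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    return s, var_s, z

# Synthetic 35-year series with a downward shift after year 12
rng = np.random.default_rng(0)
series = np.r_[rng.normal(0.5, 0.3, 12), rng.normal(-0.8, 0.3, 23)]
s, var_s, z = mann_kendall_z(series)
print(f"S={s:.0f}, Var(S)={var_s:.1f}, Z={z:.2f}, significant={abs(z) > 1.96}")
```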
Field surveys and estimation of amounts of mangrove biomass
Field surveys were done in 2017 in 385 sample plots (190 in Khamir, 110 in Tiab, 85 in Jask) established using the Global Positioning System (GPS) at a distance of 150 m from one another along 10 (Khamir) and 6 (Tiab and Jask) linear transects that extended from the landward margins (location of dwarf mangroves) to the seaward edges (location of tall mangroves) (Supplemental Fig. 2). Plot sizes were 30 × 30 m, corresponding to the 30 m spatial resolution of Landsat images. In each plot, tree diameters were recorded by mangrove structure.
In the absence of known allometric relationships of above-ground (AGB) and below-ground biomass (BGB) to DBH for mangroves in this study area, we used the allometric equations of Comley and McGuinness (2005) (Eqs. (7) and (8)), where AGB is the above-ground biomass, BGB is the below-ground biomass, and DBH is the diameter of each mangrove stem. AGB and BGB values of all stems in each plot were summed to arrive at plot-level biomass estimates.
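The plot-level aggregation can be sketched as follows. Note that the power-law coefficients below are illustrative placeholders only and are not the published Comley and McGuinness (2005) parameters used in the study.

```python
import numpy as np

# Generic power-law allometry AGB = a * DBH**b (and similarly for BGB).
# The coefficients are hypothetical stand-ins, NOT the values of Eqs. (7)-(8).
AGB_COEF = (0.25, 2.4)   # hypothetical (a, b) for above-ground biomass (kg)
BGB_COEF = (0.20, 2.1)   # hypothetical (a, b) for below-ground biomass (kg)

def stem_biomass(dbh_cm, coef):
    a, b = coef
    return a * np.asarray(dbh_cm, dtype=float) ** b

def plot_biomass(dbh_list_cm):
    """Sum AGB and BGB over all stems in a 30 x 30 m plot (kg per plot)."""
    agb = stem_biomass(dbh_list_cm, AGB_COEF).sum()
    bgb = stem_biomass(dbh_list_cm, BGB_COEF).sum()
    return agb, bgb, agb + bgb

# Hypothetical plots: dwarf stand (many thin stems) vs tall stand (fewer thick stems)
dwarf_plot = [2.1, 2.8, 3.0, 2.5, 3.4, 2.2, 2.9]
tall_plot = [9.5, 12.3, 15.1, 11.0]
print("dwarf plot total biomass (kg):", round(plot_biomass(dwarf_plot)[2], 1))
print("tall plot total biomass (kg):", round(plot_biomass(tall_plot)[2], 1))
```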
Separating dwarf and tall mangrove structures
Examining the dynamics of dwarf and tall mangroves required their spatial separation and the determination of exact boundaries between the structures, which is imprecise when using simple classification of vegetation index maps or on-screen separation of medium-resolution Landsat images. We thus applied a two-step process that first determined the change point, i.e., the sample plot with the greatest probability of all points in the biomass frequency distribution (Lubes-Niel et al., 1998), to separate tall from dwarf mangroves. Change points were identified using the non-parametric Pettitt–Mann–Whitney test (α = 0.05) and the cumulative sum (CUSUM) method in the Change Point Analyzer (CPA) software and confirmed by a t-test of the difference between the mean biomass values of tall and dwarf mangroves at the identified change point.
The second step involved finding the threshold NDVI value, derived from Landsat images of the sample plots, that corresponded to the biomass change point. Three cloud-free Landsat images (paths/row # 158/042, 159/041, 160/041) that coincided with the field survey dates in August and September 2017 and corresponded spatially to each sample plot were selected to extract NDVI values, which were then related to the estimated biomass values through linear regression. The regression model was based on 70% of the sample plots of each study area; the remaining 30% were used to validate the model. Following the preparation of NDVI maps and computation of the NDVI threshold value, study sites were classified into dwarf and tall mangrove areas and their spatial extents and average biomass values were computed for the year 2017.
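The two-step workflow, regressing plot biomass on NDVI with a 70/30 split and then translating the biomass change point into an NDVI classification threshold, can be sketched as follows on synthetic data; all values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for the field data: plot biomass (t/ha) and plot-mean NDVI
biomass = np.r_[rng.normal(20, 5, 60), rng.normal(90, 15, 60)]   # dwarf vs tall plots
ndvi = 0.3 + 0.004 * biomass + rng.normal(0, 0.02, biomass.size)

# 70/30 split for fitting and validating the linear NDVI-biomass model
idx = rng.permutation(biomass.size)
fit, val = idx[: int(0.7 * biomass.size)], idx[int(0.7 * biomass.size):]
slope, intercept = np.polyfit(ndvi[fit], biomass[fit], 1)
pred = intercept + slope * ndvi[val]
r2 = 1 - np.sum((biomass[val] - pred) ** 2) / np.sum((biomass[val] - biomass[val].mean()) ** 2)

# Biomass change point separating dwarf from tall plots (hypothetical value),
# mapped through the regression to an NDVI threshold for classifying pixels
biomass_change_point = 55.0                       # t/ha, placeholder
ndvi_threshold = (biomass_change_point - intercept) / slope
labels = np.where(ndvi > ndvi_threshold, "tall", "dwarf")
print(f"validation R^2 = {r2:.2f}, NDVI threshold = {ndvi_threshold:.2f}")
print("plots classified as tall:", int((labels == "tall").sum()))
```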
Constructing the time series of spatial extent and biomass of tall and dwarf mangroves
The 35-year time series was constructed using 105 Landsat images (paths/row # 158/042, 159/041 and 160/041) that were acquired from the USGS Earth Explorer (Earth Explorer, 2019) portal and taken in summer months between 1986 and 2020 to avoid seasonal phenological differences in mangroves. Because dwarf mangroves (<1 m height) are often inundated during high tide, Landsat images taken at low tide were selected with the aid of daily recorded wave height data from 22 tide gauges located along the coasts to determine the spatial extent of dwarf mangroves and the location of the seaward edges of the tall mangroves (Mafi-Gholami et al., 2019;Gao et al., 2019). Following the required geometric and radiometric image corrections and using the NDVI threshold value, annual NDVI maps with boundaries around dwarf and tall mangrove areas as well as maps of LULC of upstream catchments were generated in the ENVI software and the biomass and spatial extents of both mangrove structures were computed for the entire time series for each study site.
Validation of the maps
The accuracy of the constructed maps for the years 2016 and 2017 was assessed by field sampling of 190 plots during the summer of 2017. In addition, 200 plots (i.e., 90 in Khamir, 65 in Tiab, and 45 in Jask) were established in 2020 to assess the accuracy of the 2018-2020 maps. The accuracy of the 1986-2016 maps was validated by checking whether control points indicated the presence of dwarf or tall structures using texture, pattern, and lightness/darkness information derived from aerial photographs and Quickbird images of the years 1985, 1992, 1995, 2001, 2004, 2009, 2012 and 2014 (National Geographical Organization of Iran). Stratified random sampling was used to assess user accuracy, producer accuracy and overall accuracy of the produced maps (Eslami-Andargoli et al., 2010), which determined a consistent accuracy of > 95% of the classified images.
Fig. 2. One-year SPI values (calculated using monthly rainfall data) over the 35-year period (1986-2020) in the upstream catchment area.
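The user, producer and overall accuracy assessment described above can be sketched with a confusion matrix as follows; the class names and counts are hypothetical.

```python
import numpy as np

# Hypothetical confusion matrix for one classified map (rows = mapped class,
# columns = reference class from field plots / aerial photo control points)
classes = ["dwarf", "tall", "non-mangrove"]
cm = np.array([[88,  3, 1],
               [ 2, 61, 1],
               [ 1,  1, 42]])

overall = np.trace(cm) / cm.sum()
users = np.diag(cm) / cm.sum(axis=1)      # correctness of what was mapped
producers = np.diag(cm) / cm.sum(axis=0)  # how well reference classes were captured

for c, u, p in zip(classes, users, producers):
    print(f"{c:>12}: user's acc = {u:.2%}, producer's acc = {p:.2%}")
print(f"overall accuracy = {overall:.2%}")
```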
Relationship of mangroves to the independent variables
Pearson correlation coefficients of EC, SPI, surface runoff, and area and biomass of dwarf and tall mangroves were computed for each study site separately for the wet (1986-1997) and dry (1998-2020) periods. Repeated measurement mixed modelling with a first-order autoregressive temporal correlation structure was applied to examine the statistical significance of the fixed effects of EC, SPI and runoff values and their two- and three-way interactions on the responses by study site for each period. Statistical analyses were performed in SAS 9.4 (SAS Inc., Cary, NC) (https://www.sas.com/en_us/company-information/profile.html).
Drought occurrences in the upstream catchments between 1986 and 2020
Average annual SPI values showed that the region experienced a wet period (positive SPI values) between 1986 and 1997 followed by a dry period (negative SPI values) that started in 1998 and was most severe in 2006. Droughts lessened slightly thereafter in the region, particularly on the Gulf of Oman, but continued into 2020 (Fig. 2).
Computing runoff volumes over time with observed annual rainfall values and LULC values held constant at 1986 baseline values indicated only small contributions of LULC changes to runoff volume decreases (e.g., 1.1% in Khamir, 1.3% in Tiab, and 1.6% in Jask in 2020).
The NDVI threshold value of 0.57 had the highest probability (>0.98 in all transects) of separating dwarf and tall mangroves (Table 1, Supplemental Fig. 4).
Development of spatial extents and biomass of dwarf and tall mangroves between 1986 and 2020
Spatial extents of mangroves developed uniquely over time in each study site and differed by mangrove structure (Fig. 4). The greatest area changes occurred during the wet period for both structures in Khamir and Tiab and for tall mangroves in Jask. Areas of dwarf mangroves increased in Tiab (268 to 830 ha, +46.82 ha yr⁻¹) and decreased in Khamir (5757 to 4287 ha, −122.55 ha yr⁻¹) and Jask (346 to 312 ha, −2.82 ha yr⁻¹), while those of tall mangroves increased strongly in all three study areas (Khamir: 1485 to 3423 ha, +161.6 ha yr⁻¹; Tiab: 88 to 338 ha, +20.85 ha yr⁻¹; Jask: 109 to 257 ha, +12.34 ha yr⁻¹).
Table 1. Changes of mean total biomass for NDVI values higher and lower than 0.57 in the different study habitats along the northern coasts of the Persian Gulf and the Gulf of Oman.
The development of biomass of tall (but not dwarf) mangroves followed different patterns in the three study regions during the wet period but followed a more synchronized decline (both structures) during the dry period (Supplemental Fig. 8).
Correlation structure among EC, SPI, and runoff
EC values increased by 4.8 mm yr⁻¹ (Khamir), 3.4 mm yr⁻¹ (Tiab), and 10.8 mm yr⁻¹ (Jask), with cumulative increases between 1986 and 2020 of 16.4 cm in Khamir (from 135.1 cm to 151.5 cm), 11.2 cm in Tiab (from 104.5 cm to 116 cm), and 36.7 cm in Jask (from 26.5 cm to 63.2 cm). The correlation structure was highly variable before and after the change year, with differences among sites and between the two periods (Supplemental Table 1). Of the few statistically significant correlations, most changed sign/direction before and after the change year. The correlation pattern in both periods was the same in all three sites (with the exception of Khamir in 1986-1997) only for EC and surface runoff, which also had the greatest correlation coefficient by period in the three sites.
Correlation structure between spatial extent and biomass of dwarf and tall mangroves
The correlation structure differed between the pre- and post-change-year periods and was unique to each study site (Supplemental Table 2). During the wet period, both the areas and the biomass of dwarf and tall mangroves were significantly negatively correlated in Khamir and positively correlated in Tiab. In Jask, the areas of dwarf and tall mangroves were significantly negatively correlated, while their biomass was positively correlated. During the dry period, the area and biomass of dwarf mangroves were significantly negatively correlated in Jask, whereas the areas of dwarf and tall mangroves were significantly negatively correlated in Tiab. The biomass of dwarf and tall mangroves was significantly positively correlated in Khamir and Tiab. The area and biomass of tall mangroves were significantly positively correlated in Khamir.
Relationship of EC, SPI, and runoff with spatial extents and biomass of dwarf and tall mangroves
The correlation of the independent and response variables differed among the three study sites and between the pre- and post-change-year periods in each site (Supplemental Table 3). Consistently significant correlations occurred between EC and all four response variables (negative) in Khamir during the dry period and for all three independent variables with each dependent variable in Tiab during the wet period. Runoff was consistently (but not always significantly) positively correlated with the area of tall mangroves during both periods in all three study sites. Overall, greater correlation coefficients (positive or negative) with all response variables except the biomass of tall mangroves were observed for EC and runoff than for SPI.
The best-fitting repeated measures mixed models for areas and biomass of dwarf and tall mangroves differed between the pre- and post-change-year time periods and among the three study sites (Supplemental Table 4). All models indicated a strong linear or curvilinear influence of sea-level rise (EC) on the responses of dwarf and tall mangroves that was only slightly moderated or accentuated by SPI and runoff during both periods. During the wet period, EC was positively related to the area of dwarf mangroves in Tiab, the biomass of dwarf and the areas of tall mangroves in all three sites, and the biomass of tall mangroves in Tiab and Jask; increasing rainfall (SPI) and runoff typically, but not always, led to an increase in area and biomass of both mangrove structures. During the dry period, increases in EC were negatively related to the areas and biomass of both mangrove structures except for the area of dwarf mangroves in Tiab and the biomass of tall mangroves in Jask, which both showed a U-shaped parabolic response. Similar to the wet period, increasing rainfall (SPI) and runoff typically, but not always, led to an increase in area and biomass of both mangrove structures.
Table 2. Least squares regression (LSR) results using 70% of the data: modeling the AGB and BGB of tall and dwarf mangroves in different habitats using NDVI as a predictor.
Discussion
The use of satellite images was instrumental for elucidating the regional pattern of spatial extent and biomass responses of dwarf and tall mangroves to climate change that developed dynamically and asynchronously in the three study sites over the past 35 years. Consistent with global results (Ellison, 2015), sea level rise was the main climate change driver responsible for the declining areas and biomass of both mangrove structures in this study region. In this study, the effect of sea level rise was embedded in the local and regional temporal climate context of a precipitous decline in rainfall amounts, SPI, and freshwater runoff volumes in 1998 that ushered in a still-ongoing long-term drought. Exposure to oil pollution following major oil spills into coastal waters (ICZM, 2017) can further stress mangroves (Lewis et al., 2016), as may local infrastructure projects (Godoy et al., 2018) such as dam construction or the development of a water supply network to farmlands in upstream catchments (e.g., in 2002 in Jask) that can drastically reduce river discharges and sediment volumes (Mafi-Gholami et al., 2017).
Although sea level rise over the past 35 years may not have been linear as assumed in this study, increasing areas of tall mangroves during the wet period despite rising sea levels (Fig. 5), which caused some modest retreat at seaward margins (Mafi-Gholami et al., 2020a), highlight the importance of rainfall for mangrove dynamics. Abundant rainfall during the wet period appears to have lessened local sea level rises and the magnitude of inundation at seaward margins through increased runoff and sediment loads transported from upstream catchments to coastal areas that can raise mangrove beds (Lovelock et al., 2017; Woodroffe et al., 2016). In addition, the conversion of dwarf into tall mangroves at their intersection due to improved conditions for height growth and biomass production, and the migration, establishment, and expansion of dwarf mangroves into upland saltmarshes, appear to have contributed to a net expansion of mangrove area.
Immediate and clearly detectable biomass reductions of both structures with the onset of dry conditions (Fig. 6) indicate that freshwater deficiency rapidly enhances stress, causing immediate growth reductions and mortality of sensitive seedlings and saplings, including a reduction of the competitiveness of dwarf mangroves versus saltmarshes that can halt or even reverse further upland incursions (Lovelock et al., 2017; Gilman et al., 2008; Eslami-Andargoli et al., 2010; Mafi-Gholami et al., 2017; Osland et al., 2017). During droughts, local sea level rise is enhanced due to lower runoff volumes, sediment loads, and sedimentation rates that can increase the potential for erosion (Ellison, 2008), influence temporal and spatial changes of mangrove boundaries/shorelines and reduce mangrove areas and production potentials (Lovelock et al., 2017; Mafi-Gholami et al., 2020a). Although tall mangroves typically have lower sensitivity and vulnerability to environmental stresses and disturbances than dwarf mangroves (Ellison, 2015; Gao et al., 2019), intensified erosion and regression of seaward tall mangrove margins may also be influenced by the large increases in human activities and numbers of fishing ports and vessels in recent years that can lower sedimentation rates (Ellison, 2008; Mafi-Gholami et al., 2020b).
After accounting for the effects of rising sea levels, the effects of both SPI and surface runoff on spatial extents and biomass of both mangrove structures were mostly positive within each period. However, the lack of statistical significance of SPI and runoff during the wet period indicates that area and biomass dynamics were not tightly coupled to rainfall when SPI values were positive. In contrast, the statistical significance of either SPI or runoff during the dry period indicates slightly greater biomass under less droughty conditions. Compared to the effects of rising sea levels (EC), however, the effect sizes of SPI and runoff were small and those of anthropogenic LULC changes on runoff or mangroves were almost negligible.
Conclusions
This study extends previous results (e.g., Rovai et al., 2016; Osland et al., 2017; Gabler et al., 2017; Navarro et al., 2020; Lucas et al., 2020; Lymburner et al., 2020) and documents the benefits of a more detailed investigation of the temporal development of each mangrove structure. Asynchronous spatial dynamics of dwarf and tall mangroves within the region underscore the need for spatially explicit, local climate change adaptation planning that is embedded into the larger regional scale. The loss of biomass of tall mangroves and the loss of area of dwarf mangroves should be seen as an early warning of a threat from drought, coastal erosion, inundation or competition from saltmarshes. The decline of tall mangrove biomass in Khamir and Jask that was out of sync with rainfall and available freshwater runoff during the wet period points to other environmental factors (e.g., geomorphological characteristics, local tidal ranges, the degree of water salinity) or anthropogenic influences (e.g., oil pollution, mangrove exploitation, LULC changes prior to 1986) that may have heightened their vulnerability in this region. Sufficient rainfall that results in positive SPI values and greater surface runoff appears to increase growth and biomass and to encourage migration of mangroves into adjacent upland saltmarshes, which compensates for or even reverses area losses due to inundation by sea water. Because accelerating sea-level rise and predicted increases in temperatures and drought intensities in the region in the coming decades will likely subject mangroves to further stresses, future studies should investigate the differences in sensitivity and vulnerability of dwarf and tall mangroves to the various aspects of climate change and how this may inform climate adaptation strategies. High-resolution hyperspectral optical and radar data could enhance the precision of separating tall and dwarf mangroves for more reliable assessments of the impact of climate change in future studies.
Authors statement
We declare that we have no known competing financial interests or personal relationships that could have influenced the work reported in this paper. This paper has not been submitted or published previously.
Declaration of Competing Interest
The authors declare no conflict of interest. The authors appreciate the GeoAI Smarter Map and LiDAR Lab of the Faculty of Geosciences and Environmental Engineering, Southwest Jiaotong University (SWJTU), for the technical research support. This work is the outcome of one of the joint research studies with international university collaborators and has not been published or submitted previously. This research received publication funds from start-up funds (Project Number: A1920502051907-6) from the Faculty of Geosciences and Environmental Engineering (FGEE), Southwest Jiaotong University (SWJTU), China. Data and code are available upon request.
Comparative Study in Determining Features Extraction for Islanding Detection using Data Mining Technique: Correlation and Coefficient Analysis
A comprehensive comparison study on data mining based approaches for detecting islanding events in a power distribution system with inverter-based distributed generations is presented. The important features for each phase in the islanding detection scheme are investigated in detail. These features are extracted from the time-varying measurements of voltage, frequency and total harmonic distortion (THD) of current and voltage at the point of common coupling. Numerical studies were conducted on the IEEE 34-bus system considering various scenarios of islanding and non-islanding conditions. The features obtained are then used to train several data mining techniques such as decision tree, support vector machine, neural network, bagging and random forest (RF). The simulation results showed that the important feature parameters can be evaluated based on the correlation between the extracted features. From the results, the four important features that give accurate islanding detection are the fundamental voltage THD, fundamental current THD, rate of change of voltage magnitude and voltage deviation. Comparison studies demonstrated the effectiveness of the RF method in achieving high accuracy for islanding detection.
INTRODUCTION
A small localized power source called distributed generation (DG) has become an alternative to bulk electric generation due to yearly demand growth. These DGs can be in the form of wind farms, micro hydro turbines and photovoltaic (PV) generators. Generally, these DGs are in the range of kW up to MW and offer several advantages such as environmental benefits, improved reliability, increased efficiency, improved power quality and reduced transmission and distribution line losses [1][2][3]. However, one of the major drawbacks of DGs is when they are subjected to islanding mode of operation. Islanding refers to the disconnection of the main source, which can occur either intentionally or unintentionally. When disconnection occurs, the active part of the distribution system should sense the disconnection from the main grid and shut down the DGs where island operation is prohibited, or a control action must be activated to stabilize the islanded part of the system [4,5]. Islanding operation has some benefits, but several drawbacks are still observed, especially in unintentional islanding events which may cause problems related to power quality, safety, voltage and frequency stability, and interference [6,7].
Various techniques have been developed to detect islanding. Islanding techniques can generally be classified into remote and local methods. Remote methods are based on communication between the power utility and the DGs. Remote methods are highly reliable, but the practical implementation of these schemes can be inflexible, complex and expensive. For instance, the cost of implementing a remote method can be extremely high, especially when it is implemented in networks that do not initially have any communication infrastructure with the power utility. Therefore, local methods are favourable for detecting islanding conditions. These local methods can be categorized as passive, active and hybrid techniques [8][9][10]. The passive islanding detection technique monitors system parameters such as voltage, current, frequency and harmonic distortion at the point of common coupling (PCC) with the utility grid for detecting events [3,[11][12][13]. In the active islanding detection technique, disturbances are intentionally injected into the network and the island is detected based on the system responses to the disturbances [6,[14][15][16]. Meanwhile, the hybrid technique is a combination of the active and passive techniques, in which the active technique is applied only if islanding is not detected by the passive technique [3,[17][18][19][20]. Data mining is widely used in numerous areas including islanding detection [21][22][23][24]. For instance, an intelligent islanding detection technique was developed in [25] using a decision tree (DT) classifier to identify and classify islanding operations at specific target locations. However, the DT classifier is not capable of capturing all possible islanding events. To improve the accuracy of the DT classifier, a fuzzy rule base incorporated with DT was utilized in detecting the islanding events [26]. In [13], a statistical signal processing algorithm is applied using features from voltage and frequency waveforms. The accuracy of this technique is acceptable, but the delay in statistical processing makes this technique slower than other islanding detection techniques. Realizing the potential of data mining techniques for islanding detection, new techniques have been developed by combining the discrete wavelet transform with various classifiers, namely, DT, probabilistic neural network (PNN) and support vector machines (SVM) [27]. The test results showed that the best accuracy can be achieved by the DT classifier model [27]. In [28] a pattern recognition approach based on the DT classifier was employed for islanding detection. However, DT classifiers have limitations, such as the possibility of spurious relationships, the possibility of duplication of the same sub-tree on different paths, being limited to one output per attribute, and the inability to represent tests that refer to two or more different objects, which requires exploration of other intelligent techniques. On the basis of the comprehensive literature review, data mining using correlation and coefficient analysis has rarely been reported. Therefore, the main objective of this study is to propose a new islanding scheme using correlation and coefficient analysis for feature extraction together with data mining techniques. Initially, features are extracted using the correlation and coefficient analysis, in which seven parameter indices at the target DG location have been identified as important features for identifying islanding events. Then five different data mining techniques, namely, DT, SVM,
neural network (NN), bagging and random forest (RF) have been developed as classifiers for islanding detection. The proposed islanding detection scheme is tested on the IEEE 34-bus system with inverter-based DGs.
BUILDING THE DATA SET
Test System
Fig. 1 shows the single-line diagram of the IEEE 34-bus distribution system model in the MATLAB/SIMULINK software. The DG and the load are connected to the distribution system by a 100-kVA 24.9-kV/480-V transformer. Meanwhile, the PCC is connected to a 100-kW resistive (R) load. The DG is an inverter-based DG with a current-controlled interface using the same control units as in the previous study [29].
Database Generation
Various islanding and non-islanding events should be generated over a wide range of data for training the classifier. The possible situations that may create islanding and non-islanding conditions are given as follows:
i. Load and capacitor switching at different buses,
ii. Several types of faults at different buses, and
iii. Events that can trip breakers and reclosers, and island the DG.
The above situations are simulated under possible variations in operating conditions, which are considered as:
i. Normal DG loading,
ii. Different operating points that cause power mismatch at the local R load connected at bus 848.
Feature Selection
The main idea of feature selection is to choose the most significant input variables by eliminating features with non- or less-predictive information. The use of significant features can greatly improve the classifier model performance and thus increase the prediction accuracy as well as the computational speed. In this paper, a combination of various feature parameters has been chosen from previous islanding detection methods focusing on inverter-based DGs. The extracted features include X_a frequency deviation (∆f), X_b voltage deviation (∆V), X_c rate of change of voltage magnitude (∆V/∆t), X_d fundamental current total harmonic distortion (THD_Cf), X_e current total harmonic distortion (THD_C), X_f fundamental voltage total harmonic distortion (THD_Vf) and X_g voltage total harmonic distortion (THD_V).
The features are extracted on a per-phase basis in order to identify the most essential feature parameters for islanding detection. Figs. 2 and 3 show examples of feature signals obtained from an islanding event for phase A at the DG terminal in the distribution system. The signals in Figs. 2a-c and d-f represent the voltage and frequency of phase A during an islanding condition, respectively. The signals in Figs. 2b and c are the voltage deviation (∆V) and rate of change of voltage magnitude (∆V/∆t), respectively, obtained from the voltage signal of Fig. 2a. The frequency signals of Fig. 2d are evaluated to get the frequency deviation (∆f) as illustrated in Fig. 2f. Meanwhile, the THD information for voltage and current is selected as shown in Figs. 3a and b. The entire feature information is then utilized as the input for the classifier. The features are then rearranged and expressed as in Eq. (1), where X_a refers to the frequency deviation (∆f), X_b refers to the voltage deviation (∆V), X_c refers to the rate of change of voltage magnitude (∆V/∆t), X_d refers to the current total harmonic distortion (THD_C), X_e refers to the fundamental current total harmonic distortion (THD_Cf), X_f refers to the voltage total harmonic distortion (THD_V), X_g refers to the fundamental voltage total harmonic distortion (THD_Vf) and y refers to the number of points taken after the disturbance is detected.
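A simplified sketch of extracting some of these per-window features (voltage deviation, a crude rate-of-change estimate, and voltage THD) from a sampled PCC waveform is shown below. The sampling rate, nominal values and synthetic signal are assumptions; the study's signals come from the MATLAB/SIMULINK model.

```python
import numpy as np

FS = 10_000            # sampling rate (Hz), hypothetical
F_NOM, V_NOM = 60.0, 1.0

def thd(signal, fs, f0, n_harmonics=10):
    """Total harmonic distortion from an FFT of the windowed signal."""
    spec = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1 / fs)
    def mag(f):  # magnitude of the bin closest to frequency f
        return spec[np.argmin(np.abs(freqs - f))]
    fund = mag(f0)
    harm = np.sqrt(sum(mag(k * f0) ** 2 for k in range(2, n_harmonics + 1)))
    return harm / fund

def window_features(v, fs=FS, f0=F_NOM):
    """Per-window features: voltage deviation, crude dV/dt estimate, voltage THD."""
    v_peak = np.sqrt(np.mean(v ** 2)) * np.sqrt(2)   # peak estimate from RMS
    dv = v_peak - V_NOM
    dv_dt = dv / (len(v) / fs)                       # deviation over window duration
    return dv, dv_dt, thd(v, fs, f0)

# Synthetic phase-A voltage: fundamental plus 5th-harmonic distortion
t = np.arange(0, 0.2, 1 / FS)
v = 0.95 * np.sin(2 * np.pi * F_NOM * t) + 0.05 * np.sin(2 * np.pi * 5 * F_NOM * t)
print(window_features(v))
```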
FEATURE EXTRACTION USING CORRELATION AND COEFFICIENT ANALYSIS
The inclusion of irrelevant and redundant features in the classifier model may result in poor classification accuracy and increased computation time. To obtain high classification accuracy, high-quality features need to be extracted to describe the islanding events using the correlation and coefficient analysis. Fig. 4 shows the correlation between the 28 feature variables. The colours and shape elements in the figure are used to show the degree of correlation [30]. Each variable has perfect correlation with itself, shown along the diagonal of the graphic (see Fig. 4). Blue indicates positive values, whereas red indicates negative values, encoding the sign of the correlation. Meanwhile, a circle filled clockwise indicates a positive value, while one filled anti-clockwise indicates a negative value. In this analysis, the Pearson correlation coefficient is utilized to measure the strength of the relationships between the 28 feature variables. Mathematically, the coefficient is expressed as follows:

r = [N Σkl − (Σk)(Σl)] / √{[N Σk² − (Σk)²][N Σl² − (Σl)²]}

where N is the number of pairs of scores, Σkl is the sum of the products of paired scores, Σk is the sum of k scores, Σl is the sum of l scores, Σk² is the sum of squared k scores, and Σl² is the sum of squared l scores. For instance, Fig. 4 shows that the most positively correlated variable is X_g, where most of its relationships with the other variables have positive values. The correlations between X_b1 and X_c1, X_b1 and X_c4, and X_b3 and X_c1 are evaluated as -0.6746369, -0.6300237 and -0.3214842, respectively. Therefore, the circles with red colours in Fig. 4 show the negative correlation between X_b and X_c. This finding shows that X_b is the most negatively correlated among the features, as shown in Fig. 4.
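A compact sketch of this correlation screening on a synthetic feature matrix is shown below; the feature names, sample size and injected correlation are placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic feature matrix: 200 events x 6 per-phase features (placeholder names)
names = ["Xa_df", "Xb_dV", "Xc_dVdt", "Xd_THDc", "Xe_THDcf", "Xg_THDvf"]
X = rng.normal(size=(200, 6))
X[:, 2] = -0.7 * X[:, 1] + 0.3 * rng.normal(size=200)   # make Xc anti-correlate with Xb

# Pearson correlation matrix (equivalent to the pairwise formula in the text)
R = np.corrcoef(X, rowvar=False)

# Report the strongest negative and positive off-diagonal correlations
iu = np.triu_indices_from(R, k=1)
pairs = sorted(zip(R[iu], zip(iu[0], iu[1])), key=lambda t: t[0])
(neg_r, (i, j)), (pos_r, (k, l)) = pairs[0], pairs[-1]
print(f"most negative: {names[i]} vs {names[j]}, r = {neg_r:.2f}")
print(f"most positive: {names[k]} vs {names[l]}, r = {pos_r:.2f}")
```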
The significance of the variables is further highlighted by the importance analysis report from the RF learning, as illustrated in Fig. 5.
At the top of the tree, the value of X_e is first compared with the threshold value 0.632898 and the data are split into two descendent subsets. These subsets are then split further into several nodes ending in leaves, each designated by a class label. There are two class labels in this study, namely, islanding and non-islanding cases. From the figure, all the cases having X_e between 0.63 and 0.65 are predicted as the non-islanding state. However, for cases with X_e less than 0.63, the classification depends on the values of X_b and X_c.
Figure 8. DT generated for phase A considering optimal node of inverter-based DG.
Fig. 9 shows the multidimensional scaling (MDS) plot for islanding and non-islanding events utilizing the RF classifier. MDS is used to discover the underlying structure of the distances measured between objects. MDS assigns the observations to specific locations in a conceptual space (commonly a 2- or 3-dimensional space), such that the distances between points in the space match the given dissimilarities as closely as possible.
TEST RESULTS
The simulation data were obtained using the MATLAB/SIMULINK software and the data were randomly divided into training and testing data sets as summarized in Table 1. The features are extracted from the information given in (1). The open-source software Rattle is used to implement the conventional DT, bagging and RF classifiers. For easy comparison, all the classifiers use the same training and testing data sets, which give two predicted class labels, namely islanding and non-islanding events. Table 2 shows the classification results for the testing data set of phase A with three different classifiers, namely the DT, bagging and RF classifiers. This result reveals that the highest accuracy can be achieved with the RF classifier, with classification percentages of 98.9% and 100% for the non-islanding and islanding events, respectively. Further comparison is then made for islanding detection using the SVM, NN, DT, bagging, and RF classifiers considering all three phases. The accuracy performance of these classifiers is evaluated as shown in Fig. 10 and Table 3. Table 3 shows the accuracy of the five classifiers for islanding detection at each phase, i.e., phases A, B and C. For all the phases, the RF classifier gives the highest accuracy compared to the other classifiers in detecting islanding events, as indicated in bold. This result proves that the best classifier model to predict the islanding condition based on per-phase feature extraction can be obtained using the RF classifier.
CONCLUSION
A new islanding detection scheme for a power distribution system with inverter-based DG has been developed. The proposed scheme implements feature extraction using correlation and coefficient analysis for each phase and data mining techniques for classifying the islanding events. Unlike previous research works which ignore the per-phase analysis, the proposed islanding scheme works progressively at each phase for a comprehensive identification of islanding cases. The simulation results on the IEEE 34-bus system with inverter-based DG showed that the proposed islanding detection classifier using per-phase feature extraction can accurately detect the islanding events. A comparison between various classifiers, namely, SVM, NN, DT, bagging and RF has been made and the results showed that the highest accuracy in detecting islanding events can be achieved with the RF classifier.
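To illustrate the classifier comparison described in the test results above, the following scikit-learn sketch trains DT, SVM, NN, bagging and RF models on a synthetic stand-in for the extracted features. The data, parameters and resulting accuracies are illustrative only; the study used Rattle on MATLAB/SIMULINK data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the extracted per-phase features and islanding labels
X, y = make_classification(n_samples=600, n_features=7, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "DT": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "NN": MLPClassifier(max_iter=2000, random_state=0),
    "Bagging": BaggingClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name:>8}: test accuracy = {acc:.3f}")
```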
Figure 3. Example features extraction for islanding case: (a) voltage total harmonic distortion (THD_V), (b) current total harmonic distortion (THD_C).
Figure 4. Visual summary of correlation between the 28 candidate attributes for phase A.
Figure 5. Top-down importance of variables according to accuracy loss or misclassification rate reduction (Gini) for phase A.
Figure 9. Multidimensional scaling plot of proximity matrix from random forest.
Figure 10. Accuracies of various models.
Table 2. Classification results on testing data sets for phase A.
Table 3. Performance of various model classifiers in terms of accuracies.
Policy Model of Sustainable Infrastructure Development (Case Study : Bandarlampung City, Indonesia)
Infrastructure development does not only affect the economic aspect, but also the social and environmental aspects, which are the main dimensions of sustainable development. The many aspects and actors involved in urban infrastructure development require a comprehensive and integrated policy towards sustainability. Therefore, it is necessary to formulate an infrastructure development policy that considers the various dimensions of sustainable development. The main objective of this research is to formulate a policy for sustainable infrastructure development. In this research, urban infrastructure covers transportation, water systems (drinking water, storm water, wastewater), green open spaces and solid waste. This research was conducted in Bandarlampung City. This study uses comprehensive modeling, namely Multi-Dimensional Scaling (MDS) with the Rapid Appraisal of Infrastructure (Rapinfra), the Analytic Network Process (ANP), and a system dynamics model. The findings of the MDS analysis showed that the status of Bandarlampung City infrastructure sustainability is less sustainable. The ANP analysis produced the 8 main indicators that are most influential in the development of sustainable infrastructure. The system dynamics model offered 4 scenarios of a sustainable urban infrastructure policy model. The best scenario was implemented through 3 policies consisting of: integrated infrastructure management, population control, and local economy development.
Introduction
High population growth in city areas has implications for increasing community infrastructure needs. The relationship between cities and infrastructure is now emerging as a key city policy issue [14]. Many relevant aspects and actors are involved in city infrastructure development and planning, and a comprehensive and integrated policy is required for it to be sustainable [7,14,17,27]. A variety of strategies, policies, plans and programs of action for the development of integrated and sustainable infrastructure in urban areas have been prepared, but the development of urban infrastructure still faces unresolved issues [16,28]. Infrastructure development does not only affect the economic aspects, but also the social and environmental aspects, which are the main dimensions of sustainable development. Therefore, it is important to determine a measuring instrument to identify the ability to build sustainable infrastructure.
Previous studies on sustainable infrastructure reflect the need to design and manage engineering systems with environmental, social and economic considerations. These studies include: municipal water system sustainability criteria [6,23], sustainable transportation [5,9,10,12,23,30], drinking water systems [6,23,25], wastewater systems [6,23,26], storm water systems [2,4,19,29], and green open spaces.
Sustainability index categories: … Less (less sustainable); 50.01-75.00 Fair (fairly sustainable); 75.01-100.00 Good (sustainable).
The Analytic Network Process (ANP) was used to determine the influential indicators of sustainable infrastructure development. The steps of selecting influential indicators were as follows: 1) determination of criteria and indicators based on expert consultation on the results of the previous analysis, which was based on a literature study, stakeholders and public opinion; 2) determination of the relationships between indicators, obtained through questionnaires; 3) construction of an alternative network model based on the results of step 1 …; 5) testing the consistency of pairwise comparison matrices, which must meet an inconsistency ratio ≤ 10%. The next step is to calculate the weights of criteria and the synthesis of alternative indicators of sustainable infrastructure development using the Super Decisions software. System dynamics is a comprehensive and integrated way of thinking that is able to simplify complex issues without losing the important things of concern [18]. A system dynamics model can not only arrange and describe the complicated connections among elements at different levels, but can also deal with dynamic processes with feedback in a system. It can also predict how the complex system changes under different scenarios, which is very useful in examining and recommending policy decisions in city management.
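The consistency check in step 5 (inconsistency ratio ≤ 10%) can be sketched as follows for a single pairwise comparison matrix. The matrix values below are hypothetical, and the study performed this step in the Super Decisions software.

```python
import numpy as np

# Random consistency index (Saaty) for matrix sizes 1..10
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def priority_and_cr(A):
    """Principal-eigenvector weights and consistency ratio of a pairwise matrix."""
    A = np.asarray(A, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w = w / w.sum()                              # priority weights
    lam_max = eigvals[k].real
    n = A.shape[0]
    ci = (lam_max - n) / (n - 1)                 # consistency index
    cr = ci / RI[n]                              # consistency ratio
    return w, cr

# Hypothetical pairwise comparison of 4 criteria (environment, social, economy, governance)
A = [[1,   3,   2,   4],
     [1/3, 1,   1/2, 2],
     [1/2, 2,   1,   3],
     [1/4, 1/2, 1/3, 1]]
w, cr = priority_and_cr(A)
print("weights:", np.round(w, 3), "CR:", round(cr, 3), "acceptable:", cr <= 0.10)
```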
The level of infrastructure sustainability
The findings on sustainability criteria and indicators for infrastructure development from various studies were summarized in Table 2. The criteria and indicators resulting from the literature review in Table 1 were further consulted with experts through focus group discussions (FGD). From the FGD, the number of criteria was fixed at 5, while the number of indicators was reduced to 47. The results of MDS using Rapinfra showed that the sustainability index value of the environmental criteria was 42.88%, as shown in Figure 1. It was classified as less sustainable, due to 2 attributes with bad scores, which were the rate of conservation area damage and the level of traffic congestion. Seven (7) attributes had moderate scores, which were land carrying capacity, the growth rate of built-up area, slum area growth, air quality, water quality, land quality and water resources. The less sustainable status was influenced by 4 key indicators resulting from the leverage analysis, which can be seen in the root mean square (RMS) figures. Key indicators were the indicators from the middle to the highest RMS value. The RMS key indicators were the air quality level; the rate of conservation area damage; the level of water quality; and the soil quality level (Figure 2). The other charts, which are the output of the MDS with Rapinfra sustainability analysis for the economic, social, technological and governance criteria, are similar to the output for the environmental dimension.
Sustainability index value for technology criteria was 28.32 %. It was classified as less sustainable due to 5 attributes laid in bad score which were sewage system, drinking water system, bicycle lanes/non-motorcycle vehicle, facilities for pedestrians, public transportation. Four (4) attributes laid in moderat score which were drainage systems, solid waste management, green open space systems and road systems. The less sustainable status was influenced by eight key indicators, the RMS of key indicators were: the level of water services; availability of green open space; availability of roads; availability of pedestrian facilities; waste management; availability of municipal sewage system; the availability of bike lanes/non-motorcycle vehicles and the availability of public transport systems. The sustainability index value of good governance criteria was 44.58 %. It was classified as less sustainable due to 4 attributes laid in bad score which were regulation, inter-sector institution, law enforcement, social political conditions. Five (5) attributes laid in moderate score which were the visionary leadership, spatial planning, budgeting, human resource capacity in goverment, and community participation. Only one attribute laid in good score, it was call center. The less sustainable status was influenced by 5 key indicators, the RMS of key indicators were: law enforcement; call centers; inter-sector institution; leadership, and the local socio-political conditions.
The results of the MDS using Rapinfra show that the multi-criteria infrastructure sustainability index value for Bandarlampung was 38.05%, or less sustainable, as shown in Table 3 and Figure 3. To determine whether the indicators examined in the MDS analysis were sufficiently accurate and scientifically justifiable, the stress value and the coefficient of determination (R²) were examined. These values were obtained in the MDS analysis using the Rapinfra software. The results of the analysis were considered sufficiently accurate and reliable because the stress value was smaller than 0.25 (25%) and the coefficient of determination (R²) values approached 1.0 (100 percent) [11]. The analysis showed that all indicators were assessed as fairly accurate and accountable, as shown by stress values of 14%-15% and a coefficient of determination (R²) of 0.95. The stress value indicates the proportion of variance that was not explained by the model; the lower the stress value, the better the MDS model.
Priority Indicators Analysis on Sustainable Infrastructure Development
The process of obtaining priority indicators of sustainable infrastructure development started from the results of a literature review of 50 indicators, which were then taken to the FGD, where 47 indicators were obtained. These indicators were used for assessment from various sources, which include: stakeholder opinion, communities and planning document performance. The combined results from the three sources yielded 27 influential indicators; these influential indicators were then brought to the FGD, where 20 indicators were selected to be analyzed with the ANP. The ANP analysis produced 8 priority indicators; these stages can be seen in Figure 4.
The community survey indicated 24 indicators of importance according to the 5 criteria. The influential indicators for the environmental criteria were 5, namely: the level of congestion, water quality, availability of raw water sources, air quality and growth of built-up area. There were 5 influential indicators for the social criteria, namely: HDI level, level of security and safety, unemployment growth rate, waste management system by the community and community behavior. There were 4 influential indicators for the economic criteria, which include: city minimum wage level, local economy development, the growth of the infrastructure budget and economic growth (GDP). The influential indicators for the technology criteria were 6, consisting of: the availability of the drinking water system, waste management system, drainage system, green open space system, wastewater system, and public transport system. There were four influential indicators for the governance criteria, namely: visionary leadership, law enforcement, infrastructure planning and infrastructure budget (Figure 4).
Sixth: city infrastructure management that considers the behavior of (cultural) communities, for example the pattern of movement of people in the use of transport (public transport, bicycle or on foot) and open space utilization patterns. Seventh: air quality with the increased use of public transport, periodic emissions testing, vehicle age restrictions, environmentally friendly fuel, green industry and waste management without burning. Eighth: the city land use in accordance with the city spatial plan, that requires the provision of 30% open space, minimizing damage to protected areas (mountains, slopes and hills) and the efficient use of space with vertical building development.
Sustainability index of infrastructure development
The results of the scenario simulations for the city infrastructure sustainability index value are: a lowest index value of 78.79 and a highest index value of 142.08. If the lowest value is assumed to be equivalent to the result of the MDS in the previous section, then the existing condition of the city's infrastructure is also less sustainable. The values reflect the sustainability status of the city's infrastructure: the higher the index value, the higher the sustainability status of the urban infrastructure, or the better the quality of the environment as a result of the city's infrastructure development (Table 5 and Table 6). The moderate scenario of the sustainable urban infrastructure development policy model was chosen as the most appropriate scenario to be implemented in the time frame up to 2026, in consideration of the availability of the resources available today, including human resources in the government, budgetary resources, and land resources. The priority policies for the sustainable development of the city's infrastructure under the moderate scenario are:
a. In the physical and environmental policy field, an integrated infrastructure management program that includes: (1) the management of water resources, in particular an increase in the volume of raw water, clean water service improvement, reduction of loss/leakage of water, integrated solid waste management, and communal wastewater treatment plants (WWTP); (2) the provision of mass transit that is easily accessible, secure, inexpensive and convenient, and the imposition of restrictions on the age of motor vehicles; (3) an increase in the quantity and quality of public green open space.
b. Social policies that include population control through restrictions on the number of people coming into the city. To support this program it is necessary to take measures such as equitable development of infrastructure to the suburbs and judicial operations.
c. Economic policy, namely managing the local economy through an increase in the number of medium, small and micro enterprises (MSMEs) by providing access for MSMEs to the urban infrastructure they need, such as: cheap transport, accessible water supply and integrated waste management.
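Purely as an illustration of how scenario parameters can feed a coupled stock model, the toy sketch below integrates three simplified sub-model stocks under four scenario parameter sets. The stocks, rates and values are invented placeholders and do not reproduce the study's calibrated system dynamics model or its index values.

```python
import numpy as np

def simulate(years=10, pop_growth=0.02, econ_growth=0.05, infra_invest=0.8):
    """Toy annual-step simulation of three coupled stocks (illustrative only)."""
    pop, econ, index = 1.0, 1.0, 100.0          # normalized population, economy, index
    history = []
    for _ in range(years):
        pop *= 1 + pop_growth                   # social sub-model: population pressure
        econ *= 1 + econ_growth                 # economic sub-model: local economy
        # physical/environmental sub-model: investment raises the index,
        # population pressure erodes it
        index += 5.0 * infra_invest * econ - 8.0 * (pop - 1.0)
        history.append(index)
    return np.array(history)

scenarios = {
    "no intervention": dict(pop_growth=0.030, econ_growth=0.03, infra_invest=0.4),
    "pessimistic":     dict(pop_growth=0.025, econ_growth=0.03, infra_invest=0.5),
    "moderate":        dict(pop_growth=0.015, econ_growth=0.05, infra_invest=0.8),
    "optimistic":      dict(pop_growth=0.010, econ_growth=0.07, infra_invest=1.0),
}
for name, params in scenarios.items():
    print(f"{name:>16}: final index = {simulate(**params)[-1]:.1f}")
```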
Conclusion
The benchmarks for sustainable infrastructure development generated in this study included 5 criteria, namely environmental, social, economic, technological and governance, and 47 indicators. The status of Bandarlampung infrastructure sustainability was considered less sustainable, with a score of 38.05%, which means that the availability of the infrastructure was still in good condition. However, it needs to be improved to achieve sustainable infrastructure development.
The ANP analysis recommended that policy directions in the development of sustainable infrastructure ought to consider 8 key indicators, as follows: local economic growth, infrastructure planning, infrastructure budgets, availability of clean water systems, community participation, people's behavior, air quality and growth of built-up area. There were eight (8) recommended policies for sustainable infrastructure development. Firstly: local economic growth that addresses the needs of micro-economic infrastructure, such as the provision of space for small enterprises and street vendors in the city. Secondly: integrated infrastructure planning between spatial and sectoral development plans that considers the indicators of sustainable infrastructure development through The Medium Term of Infrastructure Investment Program Plan (RPI2JM). Thirdly: an increase in the infrastructure budget for more efficiency and effectiveness. Fourthly: the availability of a clean water system widely distributed throughout the city by increasing the amount of raw water sources and water management with the 5 R (restore, reduce, reuse, recycle, rechargeable). Fifthly: increased community participation in the management of city infrastructure, building consensus between the government and the residents of the city, as well as transparency of information. Sixthly: city infrastructure management that considers community behavior, for example the pattern of movement of people in the use of transport (public transport, bicycle or on foot) and open space utilization patterns. Seventhly: improving air quality through increased use of public transport, periodic emission testing, vehicle age restrictions, environmentally friendly fuel, green industry and waste management without burning. Eighthly: city land use in accordance with the city's spatial plan, which requires the provision of 30% open space, minimizing damage to protected areas (mountains, slopes and hills) and the efficient use of space with vertical building development. The system dynamics model of sustainable infrastructure development in the study area, comprised of three sub-models (the social sub-model, the physical-environment sub-model, and the economic sub-model), resulted in the formulation of the city infrastructure sustainability index value. The index value can be increased in line with the simulation scenarios. The results of the no-intervention and pessimistic scenarios showed an unsustainable trend, with the sustainability level still relatively low. The results of the moderate and optimistic scenarios already showed a better level of sustainability (sustainable and highly sustainable, respectively). The selected scenario is the moderate scenario, which can most feasibly be implemented because of the limitations of existing resources, including budgetary resources, human resources in the bureaucracy and land resources. Under the moderate scenario, infrastructure development in Bandarlampung is estimated to remain sustainable in 2026, but external influences on the city that can lower the level of sustainability should be anticipated.
In order to increase the city's infrastructure index value and sustainability status so as to realize the chosen scenario, it must be supported by appropriate policies. The findings on priority policies can be used as a basis for intervention in upgrading the sustainability of urban infrastructure in the future. So that the policies can be implemented, they are equipped with programs and change actions as follows:
a. The priority policy for sustainable infrastructure development is the policy in the physical and environmental fields, through an integrated infrastructure management program that includes: (1) the management of water resources in an integrated manner, in particular an increase in the volume of raw water, clean water service improvement, reduction of water leakage and waste management (liquid and solid); (2) the provision of mass transit that is easily accessible, secure, inexpensive and comfortable, supported by the provision of facilities for pedestrians such as sidewalks, pedestrian bridges and public transportation stops, and restrictions on the age of motor vehicles; (3) economic management policy, namely developing the local economy through an increase in the number of MSMEs by providing MSMEs with access to adequate basic urban infrastructure.
Recommendation
In order to further improve the sustainability of infrastructure development in Bandarlampung City, the priority indicators identified here should be taken into consideration by the local authorities when determining the city's infrastructure development policy. The model of sustainable infrastructure development policy is a policy concept design that can be adopted in the planning of city infrastructure. When preparing policy and planning for sustainable infrastructure development in urban areas, planners are advised to apply the criteria and indicators generated in this study.
|
2019-06-13T13:13:56.061Z
|
2018-03-01T00:00:00.000
|
{
"year": 2018,
"sha1": "e84a83c9d121fc0f15967e6e4348a068187ed1ff",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1755-1315/124/1/012008",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "97341045699f0469c2b321d75e29ce96d0b31ee0",
"s2fieldsofstudy": [
"Environmental Science",
"Political Science",
"Engineering",
"Economics"
],
"extfieldsofstudy": [
"Business",
"Physics"
]
}
|
252408750
|
pes2o/s2orc
|
v3-fos-license
|
Replica method for eigenvalues of real Wishart product matrices
We show how the replica method can be used to compute the asymptotic eigenvalue spectrum of a real Wishart product matrix. For unstructured factors, this provides a compact, elementary derivation of a polynomial condition on the Stieltjes transform first proved by M\"uller [IEEE Trans. Inf. Theory. 48, 2086-2091 (2002)]. We then show how this computation can be extended to ensembles where the factors are drawn from matrix Gaussian distributions with general correlation structure. For both unstructured and structured ensembles, we derive polynomial conditions on the average values of the minimum and maximum eigenvalues, which in the unstructured case match the results obtained by Akemann, Ipsen, and Kieburg [Phys. Rev. E 88, 052118 (2013)] for the complex Wishart product ensemble.
Introduction
In this note, we describe how the replica method from the statistical mechanics of disordered systems may be used to obtain the asymptotic density of eigenvalues for a Wishart product matrix K = (n_L ⋯ n_1)^{-1} X_1^⊤ ⋯ X_L^⊤ X_L ⋯ X_1, where the factors X_ℓ ∈ ℝ^{n_ℓ × n_{ℓ−1}} are independent Gaussian random matrices. In the simplest case, the factors are real Ginibre random matrices, i.e., they have independent and identically distributed standard real Gaussian elements (X_ℓ)_{ij} ∼ N(0, 1), though the complex Gaussian case is also often studied [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15]. We will also consider cases in which the elements of each factor are correlated. Not all of our final results are novel. Rather, our overarching objective in reporting these replica-theoretic derivations is to note their simplicity, as the replica method has to the best of our knowledge not seen broad application to the study of product random matrices [9], despite its common usage in other areas of random matrix theory [16][17][18][19][20][21][22]. For a discussion of the application of the cavity method to Wishart product matrices, we direct the reader to the work of Dupic and Pérez Castillo [9], or to recent work by Cui, Rocks, and Mehta [23].
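As a concrete reference point, the following sketch samples a real Wishart product matrix of this form and computes its eigenvalues numerically; the widths and random seed are arbitrary choices, not values used in the paper.

```python
import numpy as np

def wishart_product(widths, rng):
    """Sample K = X_1^T ... X_L^T X_L ... X_1 / (n_1 * ... * n_L) with real Ginibre
    factors X_l of shape (widths[l], widths[l-1]); widths = [n_0, n_1, ..., n_L]."""
    P = np.eye(widths[0])                       # running product X_l ... X_1
    for l in range(1, len(widths)):
        P = rng.standard_normal((widths[l], widths[l - 1])) @ P
    return (P.T @ P) / np.prod(widths[1:])

rng = np.random.default_rng(0)
K = wishart_product([400, 400, 400], rng)       # L = 2, arbitrary square widths
eigs = np.linalg.eigvalsh(K)
print(f"n_0 = {K.shape[0]}, smallest eigenvalue = {eigs[0]:.4f}, largest = {eigs[-1]:.4f}")
```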
Applications of Wishart product matrices in science and technology
The spectral statistics of Wishart product matrices are of interest in many areas of physics and applied mathematics [7,8]. For example, they describe the covariance statistics of Gaussian data propagated through noisy linear vector channels [1]-in other words, the covariance statistics of certain linear latent variable models [24]-and transport in simple models for chaotic systems [11,25]. Both real and complex Wishart product matrices are of particular interest in mathematical physics because certain features are amenable to exact study [2][3][4][5][6][7][8][9][10][11][12][13]15].
Most commonly, Wishart product matrices are studied either at finite size or in one of three asymptotic limits. Adopting the nomenclature that the factor dimensions n_ℓ are the "widths" and the number of factors L is the "depth" of the product, these limiting regimes are as follows:
• The thermodynamic limit, in which the widths are taken to infinity proportionally, i.e., n_0, …, n_L → ∞ with n_ℓ/n_0 → α_ℓ ∈ (0, ∞), for fixed depth L [1-4, 8, 26, 27]. This is the regime on which we focus.
Properties of the thermodynamic limit of real Wishart product matrices have recently attracted attention in the machine learning community, as they appear as the Neural Network Gaussian Process Kernel Gram matrix of a deep linear neural network with Gaussian inputs [23,27,[30][31][32][33][34][34][35][36]. In this case, n 0 represents the number of datapoints on which the kernel is evaluated, n 1 is the input dimensionality, and n 2 , . . . , n L are the widths of the hidden layers. The spectrum of this kernel matrix determines the generalization properties of a network in the limit of infinite hidden layer width [31][32][33]. The present note is based on our recent work on deep linear networks in Ref. [31]; we direct the interested reader to that work and references therein for more background on generalization in deep linear neural networks.
Roadmap
Our paper is organized as follows: • In §2.1, we briefly introduce the Edwards-Jones [18] approach to computing the resolvent of a random matrix using the replica method.
• In §2.2, we apply the Edwards-Jones method to compute the limiting spectral statistics of Wishart product matrices with uncorrelated factors. The details of this computation are deferred to Appendix A. This recovers a polynomial condition on the resolvent first proved by Müller [1].
• In §2.3, we extend this approach to structured Wishart product matrices where the factors have correlated rows and columns, deferring the details of the computation to Appendix B. We obtain a condition on the resolvent in terms of the spectral generating functions of the factor correlations, which to our knowledge has not previously been reported for L > 1 [37].
• In §3.1, we introduce the spherical spin glass method for computing the averages of the minimum and maximum eigenvalues of a random matrix using the replica trick [38][39][40][41].
• In §3.2, we apply this method to Wishart product matrices with uncorrelated factors, with the details of the computation given in Appendix C. The resulting polynomial conditions on the minimum and maximum eigenvalues match the results obtained by Akemann, Ipsen, and Kieburg [5] for the complex Wishart product ensemble.
• In §3.3, we extend this approach to ensembles with row-structured factors, deferring the details of the calculation to Appendix D. As in our analysis of the resolvent for structured ensembles, this result has to our knowledge not been previously reported for L > 1.
• In §4, we conclude by discussing the outlook for the application of the replica method to product matrix ensembles.
2 Replica approach to computing the resolvent
Before summarizing our results, let us briefly record our notational conventions. We denote vectors and matrices by bold lowercase and uppercase Roman letters, respectively, e.g., x and X. For an integer m, I_m denotes the m × m identity matrix, while 1_m denotes the m-dimensional vector with all elements equal to 1. We use ∝ to denote equality up to irrelevant constants of proportionality. Finally, we warn the reader that we will often leave implicit the domains of integrals.
The Edwards-Jones method for computing the resolvent
In the thermodynamic limit n_0, …, n_L → ∞, n_ℓ/n_0 → α_ℓ ∈ (0, ∞), the eigenvalue density ρ(λ) of K is self-averaging, and can be conveniently described in terms of its Stieltjes transform, from which the limiting density can be recovered. To compute the Stieltjes transform using the replica method from the statistical physics of disordered systems [16,42,43], we follow a standard approach, introduced by Edwards and Jones [18]. This method proceeds by writing g(z) in terms of the logarithm of a partition function Z. In the thermodynamic limit, we expect g(z) to be self-averaging, i.e., to concentrate around its expectation over the random factors X_ℓ. The expectation of log Z can be evaluated using the identity E[log Z] = lim_{m→0} m^{−1} log E[Z^m] and a standard non-rigorous interchange of limits. As usual, we evaluate the moments E[Z^m] for non-negative integer m, and assume that they can be safely analytically continued to m → 0 [42,43]. Here, as in other applications of the replica trick to the Stieltjes transform, the annealed average is exact, in the sense that the replica-symmetric saddle point is replica-diagonal [16][17][18] (see Appendices A and B).
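A numerical illustration of this self-averaging is sketched below. It uses the common convention g(z) = n_0^{-1} Tr (K − z I)^{-1}, which may differ from the paper's convention by an overall sign, and the widths are arbitrary.

```python
import numpy as np

def sample_resolvent(widths, z, rng):
    """Single-sample estimate of g(z) = (1/n_0) Tr (K - z I)^(-1); the paper's sign
    convention for the resolvent may differ by an overall sign."""
    P = np.eye(widths[0])
    for l in range(1, len(widths)):
        P = rng.standard_normal((widths[l], widths[l - 1])) @ P
    eigs = np.linalg.eigvalsh(P.T @ P / np.prod(widths[1:]))
    return np.mean(1.0 / (eigs - z))

rng = np.random.default_rng(1)
z = 1.5 + 1e-2j
# self-averaging: independent draws agree more closely as the widths grow
for n in (200, 800):
    g1 = sample_resolvent([n, n, n], z, rng)
    g2 = sample_resolvent([n, n, n], z, rng)
    print(f"n = {n}: g = {g1} vs {g2}")
# the limiting density can then be read off as rho(lambda) ~ Im g(lambda + i*eps) / pi
```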
Spectral moments for unstructured factors
In Ref. [1], Müller proved that the Stieltjes transform of a Wishart product matrix with unstructured factors (i.e., (X_ℓ)_{ij} ∼ i.i.d. N(0, 1)) satisfies the polynomial equation (10); see also Refs. [3-5, 9, 10]. As noted by Burda et al. [3], the condition (10) can be expressed more compactly in terms of the moment generating function M(z) of K, where we assume that the formal series defining M(z) converges. Our first result is a derivation, presented in Appendix A, of (10) using the Edwards-Jones method outlined in §2.1.
In the case L = 1, the equation for the Stieltjes transform reduces to a quadratic, which can be re-written as the familiar result for a Wishart matrix. In the equal-width case α_1 = ⋯ = α_L = α, the condition simplifies. In the context of deep linear neural networks, this special case has a natural interpretation as a network with hidden layer widths equal to the input dimension. If L = 2, this is a cubic equation, which can be solved in radicals, though the result is not particularly illuminating [9,23]. In the square case α_1 = ⋯ = α_L = 1, we have a further simplification; as shown in previous works, this can be solved to obtain an exact expression for the eigenvalue density [10]. More generally, the equation (10) must be solved numerically. We show examples for L = 1 and L = 2 in Figure 1, demonstrating excellent agreement with numerical experiment. We direct the reader to previous work by Burda et al. [3] and by Dupic and Pérez Castillo [9] for further examples.
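Since the L = 1 case reduces to an ordinary Wishart matrix, it can be checked directly against the classical Marchenko-Pastur density; the sketch below does this for an arbitrary choice of widths (the bin count is also arbitrary).

```python
import numpy as np

rng = np.random.default_rng(3)
n0, n1 = 1000, 2000                          # arbitrary widths; alpha_1 = n1 / n0 = 2
X = rng.standard_normal((n1, n0))
eigs = np.linalg.eigvalsh(X.T @ X / n1)      # L = 1 product matrix

q = n0 / n1                                  # Marchenko-Pastur aspect ratio
a, b = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
hist, edges = np.histogram(eigs, bins=60, range=(a, b), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
rho_mp = np.sqrt((b - centers) * (centers - a)) / (2 * np.pi * q * centers)
print(f"support [{a:.3f}, {b:.3f}], max |histogram - MP density| = "
      f"{np.max(np.abs(hist - rho_mp)):.3f}")
```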
Spectral moments for structured factors
Importantly, the replica approach is not limited to the study of ensembles where the factors have independent and identically distributed entries. It also allows one to tackle with relative ease the more general setting where the factors are independent matrix Gaussian random variables, i.e., for row-wise covariance matrices and column-wise covariance matrices For the thermodynamic limit to be well-defined, we have in mind an ensemble defined by sequences of covariance matrices Σ (n ), Γ (n −1 ) such that the bulk spectral statistics of these matrices tend to deterministic limits (see Appendix B for a more precise statement of our assumptions on these matrices).
We can equivalently define this ensemble by for Z an unstructured Ginibre matrix with standard Gaussian elements (Z ) i j ∼ N (0, 1). This re-writing makes it clear that we may take the columns of all factors except X 1 to be uncorrelated without loss of generality, as the ensemble with is identically distributed, where we write Γ L+1 = I n L for brevity. As a result, we henceforth set henceΣ = Σ for all = 1, . . . , L. Moreover, we may take the covariance matrices Σ to be diagonal without loss of generality, as the random Gaussian factors are rotation-invariant. In the case where the columns of the first factor are uncorrelated, i.e., Γ 1 = I n 0 , then the ensemble is rotation-invariant, and one can consider only row structure without loss of generality. This ensemble describes the kernel of a deep linear neural network with independent input examples, or more generally the covariance of a linear latent variable model [24]. For matrices from this correlated ensemble, we show in Appendix B that the moment generating function M (z) of K satisfies the self-consistent equation Here, the functions are the moment generating functions of the matrices Σ , and the inverse functions We can re-write this as This condition can of course be equivalently written in terms of the resolvent G(z). Moreover, the inverses of the spectral generating functions can be equivalently expressed in terms of the S-transform from free probability theory [26].
In the case in which the first factor has uncorrelated columns, i.e., Γ 1 = I n 0 , we have To gain intuition for how the structured case differs from the unstructured setting, we consider a simple example. With the application of neural network kernels in mind, we include structured correlations only in X 1 , corresponding to the case in which the dataset is composed of independent samples drawn from a Gaussian distribution with correlated dimensions. We keep the remaining factors unstructured-i.e., Σ = I n for = 2, . . . , L-corresponding to a setting in which the weights of the network are drawn independently. This is the standard setting for deep linear neural networks, where the weights at initialization are assumed to be independent and identically distributed [27,30,32,33]. As a toy model for structured data, we consider a gapped model in which a fraction γ ∈ [0, 1] of the eigenvalues of Σ 1 are equal to σ > 1, while the remainder are equal to unity. In the case γ = 0, this reduces to the unstructured spectrum considered before. For simplicity, we restrict our attention to equal-width factors α 1 = · · · = α L = α. With this setup, we have and the simplified condition on the generating function We show examples of this model for L = 1 and L = 2 in Figure 2, demonstrating excellent agreement with numerical experiment. As the signal eigenvalue σ increases, we see that the bulk density separates into two components. It will be interesting to investigate this effect, and other effects of structured correlations, in future work.
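The gapped toy model is easy to sample directly; the sketch below draws one realization with illustrative values of γ, σ and the widths (not those used for Figure 2) and inspects the resulting spectrum for a split bulk.

```python
import numpy as np

rng = np.random.default_rng(4)
n0 = n1 = n2 = 600                     # L = 2, equal widths (alpha = 1); arbitrary size
gamma, sigma = 0.3, 8.0                # illustrative signal fraction and strength

d = np.ones(n1)
d[: int(gamma * n1)] = sigma           # spectrum of Sigma_1 (taken diagonal WLOG)
X1 = np.sqrt(d)[:, None] * rng.standard_normal((n1, n0))   # rows correlated via Sigma_1
X2 = rng.standard_normal((n2, n1))                          # unstructured second factor
eigs = np.linalg.eigvalsh((X2 @ X1).T @ (X2 @ X1) / (n1 * n2))

counts, _ = np.histogram(eigs, bins=np.linspace(0, eigs.max(), 40))
print(f"lambda_max = {eigs.max():.2f}")
print(counts)                          # a run of empty bins signals a split bulk
```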
For this simple data model, we can also study the case in which all layers include identical structure, i.e., M Σ 1 (z) = M 2 (z) = · · · = M Σ L (z). In the equal-width case α 1 = · · · = α L = α, this gives the simplified condition noted above in (36). In Figure 3, we compare the results of solving (36) for this model to numerical experiments, showing excellent agreement. Interestingly, in this case the gap in the spectrum that is present for L = 1 (for which this model is identical to that considered above and in Figure 2) is not present at L = 2.
3 Replica approach to computing the extremal eigenvalues
The spherical spin glass method for computing extremal eigenvalues
In the thermodynamic limit, we expect the typical minimum and maximum eigenvalues of K, which define the edges of the bulk spectrum, to be self-averaging. Conditions on these eigenvalues can be obtained from the condition (10) on the Stieltjes transform (see Ref. [5]), but they can also be computed using a direct, physically meaningful method. In this approach, the eigenvalues are interpreted as the ground-state energies of a spherical spin glass, as studied by Kosterlitz, Thouless, and Jones [38], and in subsequent random matrix theory works [39][40][41]. Our starting point is the min-max characterization of the minimum and maximum eigenvalues as Rayleigh quotients: We first consider the computation of the minimimum eigenvalue. We introduce a Gibbs distribution at inverse temperature β > 0 over vectors in the sphere n 0 −1 ( n 0 ) of radius n 0 in n 0 dimensions, with density with respect to the Lebesgue measure on the sphere. Here, is the energy function associated to the minimization problem (39), and the partition function is As β → ∞, the Gibbs distribution (40) will concentrate on the ground state of (41), which is the eigenvector of K corresponding to its minimum eigenvalue. We denote averages with respect to the Gibbs distribution (40) by 〈·〉 β,K . Then, recalling our definition of E in (41) and the Rayleigh quotient (39), we have where we have defined the reduced free energy per site In the thermodynamic limit, we expect log Z to be self-averaging, and it can be computed using the replica method. We can also use this setup to compute the minimum eigenvalue. We can see that this computation is identical up to a sign, and that As the rank of K is at most min{n 0 , . . . , n L }, If min{α 1 , . . . , α L } > 1, then we expect the minimum eigenvalue to be almost surely positive.
Extremal eigenvalues for unstructured factors
As in our study of the Stieltjes transform, we first consider an ensemble with unstructured factors, i.e., (X_ℓ)_{ij} ∼ i.i.d. N(0, 1). Deferring the details of the replica computation to §C, we find that the edges of the spectrum can be written in terms of a quantity A that solves a polynomial equation. This computation is somewhat more tedious than that of the Stieltjes transform, as the replica-symmetric saddle point is not replica-diagonal. These conditions are identical to those obtained by Akemann, Ipsen, and Kieburg [5] for the complex Wishart ensemble. In general, one must determine which of the solutions to these equations give the edges of the spectrum. However, as noted by Akemann, Ipsen, and Kieburg [5], they are exactly solvable in the equal-width case α_1 = ⋯ = α_L = α. With this constraint, A is determined by the quadratic equation LA² + (L − 1)A − α = 0, which yields the maximum eigenvalue; considering the minimum eigenvalue, we obtain a similar quadratic equation and its corresponding solution. If L = 1, this recovers the familiar results for Wishart matrices. In the square case α = 1, we have a further simplification, while λ_min = 0, as noted previously by Dupic and Pérez Castillo [9]. In Figure 4, we show that these results display excellent agreement with numerical eigendecompositions.
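The equal-width quadratic LA² + (L − 1)A − α = 0 is straightforward to solve numerically, and the spectral edges themselves can be checked by direct eigendecomposition. The sketch below does both for the square case, comparing the empirical maximum eigenvalue with the Fuss-Catalan bulk edge (L + 1)^{L+1}/L^L that is commonly quoted for products of square factors (equal to 4 when L = 1); the matrix sizes are arbitrary, and this is a sanity check rather than a reproduction of Figure 4.

```python
import numpy as np

rng = np.random.default_rng(5)

def square_product_extremes(n, L, rng):
    """Empirical extreme eigenvalues of an L-factor square (alpha = 1) product."""
    P = np.eye(n)
    for _ in range(L):
        P = rng.standard_normal((n, n)) @ P
    eigs = np.linalg.eigvalsh(P.T @ P / float(n) ** L)
    return eigs[0], eigs[-1]

for L in (1, 2, 3):
    lo, hi = square_product_extremes(1000, L, rng)
    fc_edge = (L + 1) ** (L + 1) / L ** L   # Fuss-Catalan right edge, = 4 for L = 1
    print(f"L = {L}: lambda_min = {lo:.3f}, lambda_max = {hi:.3f}, bulk edge = {fc_edge:.3f}")

# The equal-width quadratic from the text is easy to solve for A:
L_, alpha = 2, 1.0
print("roots of L*A^2 + (L-1)*A - alpha:", np.roots([L_, L_ - 1, -alpha]))
```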
Extremal eigenvalues for factors with correlated rows
As in our analysis of the resolvent in §2.3, we can extend the computation of the extremal eigenvalues to ensembles with correlated factors. For the sake of simplicity, we focus on ensembles with only row-wise structure, i.e., As discussed in §2.3, this restriction can be made without loss of generality so long as Γ 1 = I n 0 . We provide further discussion of why this restriction simplifies the computation in §D; briefly, it is compatible with the spherical constraint. Then, deferring the details of the computation to §D, we find that the edges of the spectrum are determined by where A is a solution of for Here, (M −1 Σ ) denotes the first derivative of M −1 Σ with respect to its argument; µ is therefore proportional to the multiplicative inverse of the logarithmic derivative of M −1 Σ . As in the unstructured case, one must determine which of the solutions to (57) give the edges of the spectrum [5].
In the unstructured case, we have for = 1, . . . , L, hence we recover the result of §3.2.
To demonstrate the effects of structure, we revisit the equal-width model with structure in the first layer and no structure elsewhere, as introduced in §2.3. In this case, we have µ_ℓ(z) = 1 + z/α for ℓ = 2, …, L, hence we obtain a simplified equation for the spectral edges, where A is a solution of a correspondingly simplified condition. In Figure 5, we show that this result agrees with numerical eigendecompositions for matrices with varying fraction of signal eigenvalues γ.
Conclusion
We have shown that the replica method affords a useful approach to the study of product random matrices. These derivations are straightforward, but they are of course not mathematically rigorous [42,43]. We conclude by briefly discussing the utility of these results vis-à-vis open questions in the study of product random matrices. The most notable utility of statistical physics methods, including the replica trick, in random matrix theory is that they allow for the study of non-invariant ensembles. Dating back to the seminal work of Bray and Rogers [17,47], sparse ensembles have been of particular interest [9,17,41]. We hope that the methods described in this work will enable further investigation of products of sparse random matrices and of other non-invariant product ensembles. It will also be interesting to investigate Gaussian ensembles with general correlations between the factor matrices [9,37,45,46]. We remark that the approaches used in this work are particularly simple due to the independence of different factors, i.e., E[(X_ℓ)_{ij}(X_{ℓ′})_{kl}] = 0 if ℓ ≠ ℓ′, hence studying ensembles with correlated factors would require a somewhat different replica-theoretic setup.
In the context of neural networks, the structured ensemble with row-wise correlations studied in this work has a natural interpretation as the neural network Gaussian process kernel of a deep linear network where the features and input dimensions are correlated but the datapoints are independent samples. It will be interesting to study the spectra of such kernel matrices in greater detail in future work. To enable future studies of generalization in deep nonlinear random feature models and wide neural networks [31,48], it will be important to extend these approaches to the nonlinear setting [34][35][36][49]. Finally, it will be interesting to investigate the spectra resulting from the non-Gaussian factor distributions that arise in trained Bayesian neural networks [32,33].
Acknowledgements
We are indebted to Gernot Akemann for his helpful comments, and for drawing our attention to recent work on the double-scaling regime. We thank Boris Hanin for inspiring discussions. We also thank Blake Bordelon for comments on an early version of this manuscript. Finally, we thank the referees for their useful suggestions.
Author contributions JAZ-V conceived the project, performed all research, and wrote the paper. CP supervised the project and contributed to review and editing.
Funding information JAZ-V and CP were supported by a Google Faculty Research Award and NSF DMS-2134157.
A Computing the Stieltjes transform for unstructured factors
In this appendix, we derive the result (10) for the Stieltjes transform of a Wishart product matrix with unstructured factors. Our starting point is the partition function (8) in the Edwards-Jones [18] approach. We divide the details of the derivation into two parts. In §A.1, we evaluate the moments of the partition function. Then, in §A.2, we derive the replica-symmetric saddle point equations and use them to obtain the desired condition on G(z) and M (z).
A.1 Step I: Evaluating the moments of the partition function
Introducing replicas indexed by a = 1, . . . , m, the moments of the partition function (8) expand as Using the fact that the rows of X L are independent and identically distributed standard Gaussian random vectors in n L−1 , we have where in the last line we have applied the Weinstein-Aronszajn identity to express the determinant in terms of the Wick-rotated overlap matrix We enforce the definition of these order parameters using Fourier representations of the δ-distribution with corresponding Lagrange multipliersĈ ab L , writing Here, the integrals over C L are taken over m×m imaginary symmetric matrices, while the integrals overĈ L are taken over imaginary symmetric matrices. This yields We can easily see that X L−1 may be integrated out using a similar procedure, and that this may be iterated backwards by introducing order parameters yielding where we have definedĈ L+1 ≡ I m for brevity. We then can evaluate the remaining Gaussian integral over w a : where we discard an irrelevant constant of proportionality. Therefore, we have for where we recall the definitionĈ L+1 ≡ I m . In the thermodynamic limit n 0 , n 1 , . . . , n L → ∞, this integral can be evaluated using the method of steepest descent, yielding where the notation extr means that S should be evaluated at the saddle point
A.2 Step II: The replica-symmetric saddle point equations
As is standard in the replica method (see e.g. Ref. [42]), we will consider replica-symmetric (RS) saddle points, where the order parameters take the form Under this Ansatz, we will now simplify S in the limit m → 0 using standard identities (see Ref. [31] and Refs. [42,43]). We have = q q + q ĉ + c q .
Using the matrix determinant lemma, we have and, similarly, This gives lim m→0 S = log(q 1 − z) +ĉ with the boundary conditionq L+1 = 1,ĉ L+1 = 0. We also have where the order parameters are to be evaluated at their saddle point values.
Simplifying, we find that the replica-nonuniform components are determined by the system while the uniform components are determined bŷ Recalling the boundary conditionq L+1 = 1,ĉ L+1 = 0, it is easy to see that we should have c =ĉ = 0 for all = 1, . . . , L. Then, the Stieltjes transform is given by G(z) = −α 1 q 1 , where q 1 is determined by the system These equations can be simplified with a bit of algebra, as in our prior work [31]. From the equationq we have for = 1, . . . , L. Then, for = 2, . . . , L, the equation If we define A by such thatq we haveq It is then easy to see that for = 1, . . . , L. This yields the backward recurrencê for = 1, . . . , L, which can be solved using the endpoint conditionq L+1 = 1, yieldinĝ Then, using the fact that q 1 andq 1 are related by the equation we haveq Therefore, we have the equation which, substituting in G(z) = −α 1 q 1 , yields the condition on the Stieltjes transform. This is the result claimed in (10).
B Computing the Stieltjes transform for structured factors
In this appendix, we derive the result (27) for the Stieltjes transform of a Wishart product matrix with correlated factors. This computation parallels our analysis of the unstructured case in §A. Again, our starting point is the Edwards-Jones [18] partition function, and we once again first evaluate its moments in §B.1 and then derive and simplify the replica-symmetric saddle point equations in §B.2. (w a ) X 1 · · · X L X L · · · X 1 w a .
B.1 Step I: Evaluating the moments of the partition function
We will first integrate out X L . For brevity, define the matrix A L ∈ n L−1 ×m by such that the required expectation is Let Z L ∈ n L ×n L−1 be a real Ginibre random matrix, with independent and identically distributed elements Z i j ∼ N (0, 1), such that in distribution. Then, we can easily evaluate the expectation using column-major vectorization: where vec(·) denotes the column-major vectorization of a matrix and ⊗ denotes the Kronecker product [50]. Using the mixed-product property of the Kronecker product and the Weinstein-Aronszajn identity [50,51], we have We now introduce the Wick-rotated order parameters which differs from the order parameters used in our previous computation due to the inclusion of the column correlation matrix Γ . We enforce the definition of these order parameters using Fourier representations of the δdistribution with corresponding Lagrange multipliersĈ ab L , which gives We now integrate out X L−1 . Define the n L−2 × m matrix such that where we write for a standard n L−1 × n L−2 Ginibre random matrix Z L−1 . Defining the matrix we can see that we can evaluate and simplify the expectation over Z L−1 in the same way as we did the expectation over Z L , yielding This can in turn be written in terms of the order parameters Then, as in the unstructured case, we can see that we can iterate this procedure backward by introducing order parameters yielding where for the sake of brevity we have defined
SciPost Physics Lecture Notes
Submission for = 1, . . . , L − 1 andΣ andĈ L+1 ≡ I m . The integral over w a is now once again a matrix Gaussian, and yields up to an irrelevant constant of proportionality. Therefore, we obtain for where we recall the definitionĈ L+1 ≡ I m .
In the thermodynamic limit, we expect that provided that the spectrum of Σ is sufficiently generic. It clearly holds in the unstructured case Σ = σ I n , in which we have Under the assumption that this scaling is valid, we can evaluate the required integrals using the method of steepest descent.
B.2 Step II: The replica-symmetric saddle point equations
We now make an RS Ansatz The fist set of new terms relative to our calculation in the unstructured case are More generally, we have Assuming that I n − q q +1Σ is invertible, we may use the multiplicative property of the determinant and the mixed-product property of the Kronecker product to expand this as Then, by the Weinstein-Aronszajn identity, we have This yields Here, we write σ for expectation with respect to the limiting empirical distribution of eigenvalues of the matrixΣ . Assuming no issues arise in interchanging limits in m and n , we can then use the series expansion of the log-determinant near the identity [33] to obtain Therefore, we have 1 mn By an identical argument, we have Combining these results, we obtain
SciPost Physics Lecture Notes
Submission with the boundary conditionq L+1 = 1,ĉ L+1 = 0. Moreover, we have where the order parameters are to be evaluated at their saddle point values.
From the equations ∂ S/∂ q = 0, we have for = 1 and for = 1, . . . , L. Finally, from ∂ S/∂ĉ = 0, we have for = 1 and As in the unstructured case, we can decouple the replica-uniform components from the replicauniform components. This yields the system q =q +1 σ σ 1 − q q +1σ ( = 1, . . . , L) (173) for the non-uniform components. Given a solution to that system, the replica-uniform components are determined by the linear system Recalling the boundary conditionq L+1 = 1,ĉ L+1 = 0, it is easy to see that we should have c =ĉ = 0 for all = 1, . . . , L. Thus, as in the unstructured case, the annealed average is exact.
Our task is therefore to solve the system of equations for the replica non-uniform components of the order parameters, subject to the boundary conditionq L+1 = 1, in terms of which the resolvent is given as As a sanity check, we can see immediately that this reduces to our earlier result in the unstructured case Σ = I n , Γ = I n −1 . We start by writing these equations in terms of standard objects in random matrix theory. We have for MΣ (z) the moment generating function ofΣ . Similarly, we have Then, we have Moreover, we observe that the equation for G(z) implies that Thus, for = 2, . . . , L, we have This relation can easily be iterated backward to give for all = 1, . . . , L, where the = 1 case is of course a tautology. But, we have α 1 q 1q1 = M (z), hence we obtain for = 1, . . . , L. Assuming the invertibility of MΣ , we therefore have for = 1, . . . , L. Using the boundary conditionq L+1 = 1, we have For = 1, . . . , L − 1, we can multiply through by q +1q +1 to obtain This gives Using the relation α q q = M (z), we havê We can now finally use the equation hence we obtain the closed equation This is the result claimed in (27).
C Computing the extremal eigenvalues for unstructured factors
In this appendix, we use the method outlined in §3.1 to obtain the conditions reported in §3.2 on the maximum and minimum eigenvalues of Wishart product matrices with unstructured factors. As in our derivation of the Stieljes transform in §A, we divide the replica computation of the minimum and maximum eigenvalues into two parts. We first compute the moments of the partition function in §C.1, and then simplify the replica-symmetric saddle point equations in §C.2.
C.1 Step I: Evaluating the moments of the partition function
Again, we introduce replicas indexed by a = 1, . . . , m, which gives the moments of the partition function for the spherical spin glass (42) as where we enforce the spherical constraints with δ-distributions. It is easy to see that the matrices X can be integrated out much as before, except for the fact that the order parameters we introduce should be real, i.e.,
SciPost Physics Lecture Notes
Submission and that the boundary condition is nowĈ L+1 = −βI m . Iterating backwards, this yields where we recall that By the spherical constraint, we have It is therefore useful to instead introduce order parameters via Fourier representations of the δ-distribution, such that F aa = 1 and C 1 = F/α 1 . Integrating over F with F aa = 1, the corresponding Lagrange multipliersF aa automatically enforce the spherical constraint. Then, after evaluating the remaining unconstrained Gaussian integral over w a , we obtain As in our computation of the Stieltjes transform, this integral can be evaluated using the method of steepest descent, yielding g = − extr C 1 ,Ĉ 1 ,...,C L ,Ĉ L
C.2 Step II: The replica-symmetric saddle point equations
We make an RS Ansatz Again, we use standard identities to obtain and where we recall the endpoint conditionq L+1 = −β,ĉ L+1 = 0. For brevity, we define q 1 = α −1 1 (1 − f ) and c 1 = α −1 1 f . Then, by comparison with our previous results, the saddle point equations for = 2, . . . , L arê The saddle point equation ∂ S/∂F = 0 yields while the equation ∂ S/∂f = 0 yields hence we haveF (230) Finally, the equation ∂ S/∂ f = 0 yields Then, we can easily eliminate the Lagrange multipliersF andf . The remaining system can be written compactly aŝ where we have the definitions and the endpoint conditionsq Moreover, we have where the order parameters are to be evaluated at their saddle point values. Our task is therefore to solve the saddle point equations in the zero temperature limit. We first simplify the replica-nonuniform saddle point equations using the same trick as before. To do so, it is useful to define an auxiliary variableq 1 bŷ such that the system of equations is identical to what we encountered in §A.2. Then, letting we have the backward recurrenceq for = 1, . . . , L, which can be solved using the endpoint conditionq L+1 = −β, yieldinĝ This shows that we should haveq ∼ O(β) and q ∼ O(1/β). With these scalings, we have To obtain q L , we use the equation which gives hence The equations for the replica-uniform components can be simplified after a bit of tedious but straightforward algebra. Deferring the details of this computation to Appendix C.3, we obtain an expression for c L in terms of c 1 , along with the conditionĉ where we again have defined A = α 1 q 1q1 . Recalling the definitions we can use the condition onĉ 1 to obtain a closed equation for A, Then, recalling thatq we haveq 2 1 To solve these equations in the limit β → ∞, it is clear that we should haveq 1 ∼ O(β) and Then, A is determined by the limiting equation and the minimum eigenvalue is given by We can re-write the equation for A as and the equation for the minimum eigenvalue as For self-consistency with the fact that we should have q 1 > 0, we expect to have A < 0. Then, letting B = −A, we obtain the result claimed in §3.2. Similarly, considering the maximum eigenvalue, we must take β → −∞ through negative values of β, hence we expect A ∼ O(1) to be positive. Then, we can read off the result reported in §3.2. It is easy to confirm that this condition for the edges of the spectrum is identical to the condition given in equations (70) and (71) of Akemann, Ipsen, and Kieburg [5] for the complex Wishart case, with theirv = α − 1 and u 0 = −(A + 1).
C.3 Simplifying the recurrence for the replica-uniform order parameters
In this appendix, we solve the saddle point equations for the replica-uniform components of the order parameters in our computation of the minimum and maximum eigenvalues. This analysis amounts to solving a recurrence relation, and follows our approach in [31].
We first eliminate the variables c by solving the equation to obtain Then, for = 2, . . . , L, the equation yields a three-term recurrence for = 2, . . . , L, with initial difference condition and endpoint conditionĉ L+1 = 0. Substituting in the formula and using the recurrenceq we have hence we obtain the simplified recurrence and the initial difference conditionq We now further simplify our task by defining new variablesû such that which obey the recurrence for = 2, . . . , L, with the initial difference condition and endpoint conditionû L+1 = 0. If L = 1, we simply haveû 1 = 1/α 1 .
To solve this recurrence for L > 1, we observe that it can be re-written as for = 2, . . . , L. Then, it is easy to see that By the endpoint conditionû L+1 = 0, we then havê Iterating backward, we obtain for = 2, . . . , L. In particular, we havê We now use the initial difference condition to writeû 2 in terms ofû 1 , which gives a closed equation forû 1 : and, for = 2, . . . , L, an expression forû in terms ofû 1 : The equation forû 1 simplifies toû which yieldsû If L = 1, this recovers the expected result thatû 1 = 1/α 1 . From this result, we havê which will allow us to obtain a self-consistent equation given the definition ofĉ 1 in terms of f .
Recalling from §C.2 that Eλ min is given in terms of c L , we use the condition to obtain asĉ L+1 = 0 andĉ = α 1q 2 1 c 1û by definition, and Substituting in the value ofû 1 , we find that These are the results reported in §C.2.
D Computing the extremal eigenvalues for row-structured factors D.1 Step I: Evaluating the moments of the partition function
As in our study of the unstructured case, we consider the moments of the partition function of the spherical spin glass: Again, we can integrate out the matrices X iteratively, introducing the order parameters and the modified boundary conditionĈ L+1 = −βI m . Then, iterating backwards until only the vectors w a remain, we have where we recall that As in the unstructured case, the spherical constraint means that it is useful to introduce order parameters via Fourier representations of the δ-distribution, such that F aa = 1 and C 1 = F/α 1 . It is this step-and, concretely, the spherical constraint-that would be difficult to tackle in the presence of column-wise correlations in the first factor, as one would have C ab 1 = (w a ) Γ 1 w b /n 1 , which is not immediately compatible with the spherical constraint.
Then, after evaluating the remaining unconstrained Gaussian integral over w a , we obtain Again, under the assumption that the spectra of the matrices Σ are sufficiently generic, the action S is O(1), and the integral can be evaluated using the method of steepest descent.
D.2 Step II: The replica-symmetric saddle point equations
As elsewhere, we make an RS Ansatz Combining our analysis of the extremal eigenvalues in the unstructured case with our analysis of the Stieltjes transform in the structured case, we have where we recall the endpoint conditionq L+1 = −β,ĉ L+1 = 0 and, for brevity, we define q 1 = α −1 1 (1− f ) and c 1 = α −1 1 f . Then, by comparison with our previous results, we can read off that, after eliminatingF and f , the saddle point equations can be written aŝ where we have the definitions and the endpoint conditionsq Moreover, we have where the order parameters are to be evaluated at their saddle point values. Our task is therefore to solve the saddle point equations in the zero temperature limit.
To solve for the replica-uniform components, we define an auxiliary variableq 1 bŷ such that we have the same system of equations as in our analysis of the Stieltjes transform. Writing we have for all = 1, . . . , L, and the expression for all = 1, . . . , L in terms of the moment generating functions of the correlation matrices. Then, using the boundary conditionq L+1 = −β, we have For = 1, . . . , L, we multiply through by q +1q +1 to obtain hence we can iterate backward to obtain q L q = q +1 q q +2 q +1 · · · q L q L−1 (330) The equations for replica-uniform components can be simplified after a bit of algebra, which we defer to §D.3. This computation results in the condition and the equation where we define to express in terms of M Σ . When combined with the definitions the equation forĉ 1 gives a closed equation for A = α 1 q 1q1 = (1 − f )q 1 : Using the results of §D.3, we may re-write the expression obtained above for λ min in terms of M Σ L and µ L : and Given the results we have obtained thus far, we expect that q ∼ O(1/β),q ∼ O(β), A ∼ O(1), and as β → ∞. Then, where we use the fact that We thus have found that where A satisfies or, equivalently, for This is the result claimed in §3.3.
D.3 Simplifying the recurrence for the replica-uniform order parameters
In this section, we simplify the recurrence, derived in §D.2 that determines the replica-uniform order parameters in the extremal eigenvalue computation for row-structured matrices. We want This can be iterated backward, yielding . . .
|
2022-09-22T06:42:34.960Z
|
2022-09-21T00:00:00.000
|
{
"year": 2022,
"sha1": "a3f83f3143c8ee2b77020831c4d3f6f26170ef1c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a3f83f3143c8ee2b77020831c4d3f6f26170ef1c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
}
|
268513939
|
pes2o/s2orc
|
v3-fos-license
|
The role of sialidase Neu1 in respiratory diseases
Neu1 is a sialidase enzyme that plays a crucial role in the regulation of glycosylation in a variety of cellular processes, including cellular signaling and inflammation. In recent years, considerable evidence has suggested that human NEU1 is also involved in the pathogenesis of various respiratory diseases, including lung infection, chronic obstructive pulmonary disease (COPD), asthma, and pulmonary fibrosis. This review aims to provide an overview of current research on human NEU1 and respiratory diseases.
Introduction
Respiratory disease refers to a group of disorders that affect the respiratory system, including the lungs, airways, and other structures involved in breathing. These diseases can range from mild conditions such as the common cold to more severe illnesses like pneumonia and chronic obstructive pulmonary disease (COPD). Previously, much attention has been paid to the role of neuraminidases from viruses and bacteria in human respiratory diseases. In recent years, there has been growing interest in the role of human Neuraminidase 1 (Neu1), a sialidase enzyme, in the development and progression of respiratory diseases [1][2][3][4][5][6][7][8][9][10].
Neu1, the most abundant sialidase in mammals, located in lysosomes and on the cell membrane, is an enzyme that plays a crucial role in the removal of sialic acid residues from glycoproteins and glycolipids. Sialic acids are important components of cell surface molecules and are involved in various cellular processes, including cell adhesion, signaling, and immune response. The production, secretion, and activity of sialidases can also be regulated by sialidase-binding proteins, especially protective protein/cathepsin A (PPCA), which binds NEU1. The glycosidase β-Gal, PPCA and NEU1 form a lysosomal multiprotein complex, which regulates the intralysosomal catabolism of sialylated macromolecules [11]. Neu1, PPCA and the elastin binding protein (EBP) together assemble the elastin receptor complex (ERC), which modulates the assembly of elastic fibers [12]. Dysregulation of Neu1 activity has been implicated in several diseases, including cancer, metabolic disorders, neurodegenerative disorders, and respiratory diseases. NEU1 has been shown to be expressed in human airway smooth muscle cells, epithelial and microvascular endothelial cells, fibroblasts, and in the lungs of patients with idiopathic pulmonary fibrosis (IPF) [1][2][3][4][5][6]. Neu1 was upregulated in lung tissues of patients with COPD and asthma, suggesting that it may play a role in the development of these diseases [1,2]. It has been reported that Neu1 was upregulated in airway smooth muscle cells from patients with asthma, and that inhibition of Neu1 reduced airway hyperresponsiveness in mice with asthma [3,4]. Consistent with this, Neu1 expression was increased in lung tissue samples from patients with interstitial lung disease (ILD) and correlated with increased inflammation and mucus production in the airways [5,6]. These results indicate that abnormally high expression of Neu1 is correlated with a heightened immune response in human respiratory diseases.
Neu1 has also been implicated in the pathogenesis of pulmonary fibrosis, a progressive and often fatal lung disease. Previous studies demonstrated that Neu1 was upregulated in lung tissues of patients with pulmonary fibrosis, and that inhibition of Neu1 reduced lung fibrosis in a mouse model of the disease [5][6][7]. While the exact mechanisms by which Neu1 contributes to lung disease are not yet fully understood, it is believed that Neu1 plays a role in the regulation of cell signaling pathways, inflammation, and extracellular matrix remodeling. Thus, targeting Neu1 may represent a promising therapeutic approach for the treatment of diverse respiratory diseases.
In this review, we used keywords such as "Neu1", "sialidase 1", "respiratory disease", "lung infection", "COPD", "asthma", and "pulmonary fibrosis" to identify relevant articles on PubMed. Here, we summarize the current understanding of Neu1 and its relevance for pulmonary health and disease (Fig. 1), illustrating its potential clinical applications.
NEU1 and lung injury
Airway epithelial cells (ECs) express sialylated receptors that recognize bacterial pathogens and mediate their interaction with host cells. P. aeruginosa (Pa), a Gram-negative, flagellated, opportunistic human pathogen, adheres to ECs through the interaction of its flagellin with the cell surface sialoprotein transmembrane mucin 1 (MUC1). Simeon and colleagues found that flagellin from Pa could promote bacterial adhesion to and invasion of ECs by Neuraminidase 1 (NEU1)-dependent MUC1 ectodomain desialylation in vitro [8]. Subsequently, they also verified that Neu1 provoked shedding of the MUC1 ectodomain (MUC1-ED) from the airway, which suppressed Pa lung infection in BALB/c mice [9]. More recently, the same team reported that the concentration of desialylated MUC1-ED and flagellin expression in Pa were dramatically increased in bronchoalveolar lavage fluid (BALF) harvested from Pa-infected patients [10], indicating that measurement of MUC1-ED levels in BALF might serve as a guide for antibiotic therapy in patients with Pa infections. In summary, several in vivo experiments confirm that NEU1 may modulate the desialylation of human EC receptors to resist pulmonary bacterial infections.
NEU1 is highly expressed in the vascular endothelium of different human tissues. It has been reported that NEU1 overexpression in human lung microvascular ECs (HMVEC-Ls) inhibited their migration into wounds [1]. Subsequent research found that NEU1 mediated inhibition of angiogenesis in human pulmonary microvascular ECs (HPMECs) through desialylation of CD31 [13]. Further work indicated that the NEU1-CD31 interaction may be driven by Src family kinases during angiogenesis in postconfluent HPMECs. According to an in vivo study, neuraminidase promoted desialylation in polymorphonuclear leukocytes and potentiated the inflammatory response and vascular collapse in LPS-induced acute lung injury [14]. These studies indicate that NEU1 has a negative effect on pulmonary repair after acute injury.
NEU1 and pulmonary fibrosis
Typical features of pulmonary fibrosis, including epithelial abnormalities, vascular remodeling, abnormal wound healing and angiogenesis, cause histopathological changes and functional decline of the lungs. Considerable evidence implicates various genes, including cell surface receptors, their ligands, extracellular matrix molecules and downstream signaling pathways, in the pathogenesis of pulmonary fibrosis [15,16]. The imbalance between pro-inflammatory and regulatory immune cell subsets is a cardinal cause of pulmonary fibrosis. The preponderance of evidence demonstrates that elevated NEU1 expression regulates the desialylation and activity of receptors (such as the platelet-derived growth factor receptor and Toll-like receptors) in epithelial and endothelial cells involved in the development of pulmonary fibrosis [17,18]. It has been verified that elevated expression of NEU1 sialidase provokes pulmonary fibrosis in humans by aggravating lymphocytic infiltration and collagen accumulation [3]. Neu1-dependent CD31 desialylation modulated capillary-like tube formation, which was correlated with pulmonary angiogenesis in the lungs of patients with IPF [12,19].
Selective inhibition of Neu1 attenuated bleomycin-induced lung inflammation and fibrosis by mediating desialylation of the Muc1 ectodomain [8]. A recent study showed that sialidase levels were higher in male than in female mice after bleomycin treatment, and that high NEU1 expression closely correlated with CD11b+ macrophages in BALF from the lungs of bleomycin-treated female mice. Both NEU2 and NEU3 levels were associated with some inflammation and fibrosis markers independently of gender [20]. Meanwhile, another study found that abnormally elevated serum sialidase NEU3 levels promoted fibrosis through accelerated serum amyloid P (SAP) desialylation and IL-10 accumulation by PBMCs in idiopathic pulmonary fibrosis (IPF) patients [21]. Consistent with this, pulmonary fibrosis was strongly attenuated in bleomycin-treated Neu3 knockout mice [22]. Recent research confirmed that inhibition of NEU3-mediated TGF-β1 activation might rescue the lung injury caused by bleomycin [9]. These studies suggest that both Neu1 and Neu3 may have pronounced effects on the pulmonary fibrosis process.
NEU1 and asthma
Asthma is an airway disease characterized by increased serum levels of IgE and by inflammation driven by elevated levels of T(H)2-type cytokines. T helper 2 (Th2) cells are abundant in type 2 (T2) asthma, and CD4+ T cells accumulating in the airway are associated with the development of asthma [23]. The cell surface adhesion receptor cluster of differentiation 44 (CD44), a highly glycosylated molecule, interacts with hyaluronic acid (HA) to participate in the airway accumulation of CD4+ T cells in murine models of asthma [3,4]. Th2-mediated airway inflammation was caused by CD44-HA interactions through Neu1-mediated desialylation in the airway of an asthma mouse model. Another study demonstrated that Neu1 sialidase activity in respiratory airway epithelia regulated EGFR- and MUC1-dependent signaling and bacterial adhesion in vitro [2]. It has been reported that NEU1 interacts with Toll-like receptors (TLR2, 3, 4) and activates TLR signaling [24]. TLR3 can recognize human rhinovirus (HRV) RNA and is highly expressed in lung epithelial cells after HRV infection [25]. Martin and colleagues demonstrated that higher cytokine expression in asthmatic airway cells after HRV infection in children is correlated with a decrease in NEU1-mediated TLR3 activation [26]. These reports support the potential of Neu1 for further applications in asthma prevention or treatment.
NEU1 and tumors of lung
NEU1 exhibits pronounced effects on the development of several cancers, including hepatocellular cancer, pancreatic carcinoma and breast cancer, by regulating cancer cell proliferation and migration [27,28]. RNA-seq data of 1093/556/538 patients with lung cancer (LC), lung adenocarcinoma (LA) or lung squamous cell carcinoma (LSCC) were extracted from the TCGA database, and the expression of NEU1 in lung tumors and paracancer tissue samples was analyzed using the R language, as shown in Fig. 2. Based on this bioinformatic analysis, a higher transcriptional level of NEU1 was remarkably correlated with LC and LA in humans (P = 0.05 for LC, P = 2.61 × 10^-16 for LA), but showed no significant change in LSCC (Fig. 2A). NEU2 levels were increased in all lung cancer groups (Fig. 2B). The expression of NEU3 was remarkably raised in LA and LSCC but not in LC (Fig. 2C), suggesting that NEU3 expression may help distinguish these two subtypes from other lung cancers. Although NEU4 expression was upregulated only in LA (Fig. 2D), its constitutive expression in the lung was low, and NEU4 was not detected in the paracancer tissue of most patients with LA, suggesting that NEU4 is not a suitable marker of LA. These results indicate that the different sialidases have specific associations with lung tumors.
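The original comparison was carried out in R on TCGA RNA-seq data; the snippet below is a hypothetical Python analogue of such a tumor-versus-paracancer comparison. The file name, column names and the choice of a paired Wilcoxon test are assumptions for illustration, not the actual analysis pipeline used here.

```python
import numpy as np
from scipy import stats

# Hypothetical sketch only: the original analysis used R on TCGA RNA-seq data.
# "neu1_expression.csv" and its column names are invented; each row holds one
# patient's log2 NEU1 expression in tumor and in adjacent (paracancer) tissue.
data = np.genfromtxt("neu1_expression.csv", delimiter=",", names=True)
tumor, adjacent = data["tumor_log2"], data["adjacent_log2"]

stat, p_value = stats.wilcoxon(tumor, adjacent)   # paired non-parametric comparison
print(f"median log2 fold change = {np.median(tumor - adjacent):.2f}, P = {p_value:.2e}")
```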
Lung cancer, comprising small-cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC), is the most common cancer and the leading cause of cancer-related deaths [29,30]. Several lines of evidence indicate that Neu1 participates in multistage tumorigenesis through binding with matrix metalloproteinase-9 and G protein-coupled receptors tethered to receptor tyrosine kinases (RTKs) and Toll-like receptors (TLRs) [31,32]. Mutations of the p53 tumor suppressor gene occur frequently in lung cancer and are involved in promoting cell migration and tumor metastasis [30]. A recent study found that mutant p53 (p53-R273H) promoted NSCLC cell mobility by accelerating NEU1 transcription via activation of AKT signaling [33]. Previous reports demonstrated that Neu1-mediated HA-CD44 binding plays an important role in asthma [3], while HA-CD44 binding is also correlated with tumorigenesis and metastasis in lung cancer [34,35]. It has been shown that deglycosylation by neuraminidase induces CD44-HA binding in human lung cancer cell lines [36], but whether NEU1 is the key neuraminidase for HA deglycosylation in lung cancer remains unclear. NEU1 has also been correlated with levels of drug resistance in lung cancer [34,35]. The association between abnormally high expression of Neu1 and poor prognosis of lung cancer needs further investigation.
Abnormally high expression of NEU1 interacting with MMP-9 contributed to neutrophil overactivation in COVID-19 patients with severe infections [47]. NEU1 inhibitors (oseltamivir and zanamivir) dampened neutrophil dysfunction and improved infection control as well as host survival in pulmonary infection. SARS-CoV-2 infects human cells through angiotensin-converting enzyme 2 (ACE2) on the host plasma membrane. ACE2 is a sialylated glycoprotein, and its sialic acids are vital for SARS-CoV-2 infection [48,49]. Reduced NEU1 activity might aggravate SARS-CoV-2 infection by promoting excessive lysosomal exocytosis in host cells [50], but this hypothesis needs more clinical and basic evidence for support. In addition, some patients recovered from COVID-19 did not produce detectable neutralizing antibodies [51]; whether this phenotype is associated with immune dysfunction caused by NEU1 deficiency in the cellular response needs to be examined more accurately and deeply. Recent studies revealed that host NEU1, in interplay with MMP-9, caused neutrophil overactivation by shedding sialic acids during severe infections, including sepsis and COVID-19 [47]. The newest report shows that the highly glycosylated N protein of SARS-CoV-2 and HCoV-OC43 is regulated by host NEU1. Neu5Ac2en-OAcOMe, a new selective Neu1 inhibitor targeting intracellular sialidase, markedly reduced HCoV-OC43 and SARS-CoV-2 replication in vitro and rescued mice from HCoV-OC43 infection-induced death [52]. However, it is important to note that the role of NEU1 in COVID-19 is still not fully understood, and further research is needed to elucidate its precise mechanisms and implications in the disease.
Although these inhibitors may provide potential tools for investigating the specific roles of human NEU isoenzymes in biological systems, the activity of these compounds in vivo needs further study.
Traditional Chinese medicine (TCM) has unique advantages in the treatment of human diseases, and screening TCM for drugs effective against targets of human disease is an important approach. Many Chinese herbs and their extracts also exhibit strong sialidase inhibitory activity, including Lonicerae Japonicae Flos, Scutellariae Radix, Olyra latifolia L. leaves, Huanglian Jiedu Decoction and others [65][66][67][68]. Our study found that dipsacoside B ameliorated APAP-induced hepatotoxicity by inhibiting Neu1 [69]. The newest evidence indicates that salvianolic acid B shows a strong ability to inhibit Neu1 activity during the development of renal fibrosis [70]. Interfering peptides (IntPep) targeting transmembrane (TM) domain 2 of human membrane NEU1 have been shown to disrupt NEU1 dimerization and efficiently block sialidase activity at the plasma membrane [71]. In short, Neu1 inhibitors have shown promise as potential therapeutic agents in various diseases and conditions, but further research is needed to fully understand their mechanisms of action and evaluate their efficacy and safety.
Concluding remarks and future challenges
Numerous studies have provided evidence for the involvement of NEU1 in the pathogenesis of respiratory diseases. NEU1 plays a crucial role in the regulation of glycosylation and is involved in the pathogenesis of various respiratory diseases. NEU1-mediated desialylation of the mucin 1 extracellular ectodomain (MUC1-ED) regulates pulmonary collagen deposition, fibrosis, bacterial adhesion, and viral infection. However, the molecular mechanism by which MUC1-ED desialylation mediates the recruitment of NEU1 during Pa infection is still unclear. At the same time, with advances in medical and measurement methods, could MUC1-ED and/or flagellin levels in BALF serve as a rapid diagnostic assay to identify patients with Pa lung infections, independent of bacterial culture or genotyping techniques? NEU1 and NEU3 are abundant sialidases in the lung. Abnormal expression of these sialidases may be a potent marker to distinguish different lung tumors, but the specific roles of these sialidases and the mechanisms involved in the development of lung tumors require stronger supporting evidence.
Although accumulating evidence demonstrates that neuraminidase inhibitors designed against viral or bacterial neuraminidases also inhibit human NEU1 at the cellular and animal levels, clinical studies of the pharmacological effects of NAIs are rare. The direct interaction between NAIs and NEU1 should be explored with more reliable methods, such as surface plasmon resonance (SPR) assays, affinity chromatographic methods and others. Comprehensive studies on the efficacy, safety and toxicity of NAIs in humans are urgently needed. Targeting NEU1 may represent a promising therapeutic approach for the treatment of these diseases.
Overall, human NEU1-mediated MUC1 desialylation and NEU1-modulated immune cell differentiation and activation contribute to the development and progression of various inflammatory, fibrotic, and fibro-inflammatory human pathologies. Further research focusing on the details of human NEU isoenzymes in biological systems is needed to fully understand the mechanisms underlying the involvement of NEU1 in these conditions and to explore its potential as a therapeutic target for the treatment of respiratory diseases. It is promising that NAIs may prove to be effective treatments for various human respiratory diseases.
Fig. 1
Fig. 1 Schematic view of the role of NEU1 in respiratory disease
|
2024-03-19T13:07:26.453Z
|
2024-03-19T00:00:00.000
|
{
"year": 2024,
"sha1": "1a800aa1a1c576a21d3c4beece8227d4c86e8a04",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "1ec8f1f330e2bc1ac810fa099cf76525bef11f2c",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
85867166
|
pes2o/s2orc
|
v3-fos-license
|
Population biology of the hermit crab Petrochirus diogenes (Linnaeus) (Crustacea, Decapoda) in Southern Brazil
The aim of this study was to provide information on the biology of a subtropical population of the hermit crab Petrochirus diogenes, focusing on size structure, sex ratio, reproductive period and morphometric relationships. Monthly samples were taken between January and December 1995 at Armação of Itapocoroy, Penha, southern Brazil, using two otter-trawls in depths from 6.0 to 10.0 m. A total of 126 individuals were collected. Overall sex ratio did not differ from 1:1. When the sex ratio was analyzed for each size class, it was skewed toward females in the smallest size classes while males outnumbered females in the largest ones. The mean size (cephalothoracic length) of P. diogenes was 30.61 ± 12.52 mm and the size structure of this population was skewed to the right. Males were on average larger and heavier than both ovigerous and non-ovigerous females, which, in turn, showed similar sizes and weights. The ovigerous females represented 61% of all females and occurred from January to April and in September and December. The relationships of cephalothoracic length with both cephalothoracic width and crab weight were isometric. Both crab size and weight showed a negative allometry with shell weight, indicating that larger/heavier crabs use proportionally lighter shells than small-sized ones.
The aim of this study was to provide information on the population biology of the giant hermit crab Petrochirus diogenes from a subtropical region. The size structure and the reproductive period of this population are described, as well as the sex ratio and its relationship with crab size. Hermit crab morphometric relationships were estimated based on cephalothoracic length and width and on crab weight. The relationships between crab and shell dimensions were also described.
MATERIAL AND METHODS
This study was conducted between January and December 1995 at Armação of Itapocoroy, Penha, Santa Catarina, Brazil (26°46'S, 48°36'W and 26°47'S, 48°37'W). The bottom of this area is composed of sand in the shallowest parts and of biodetritic sediment in the deepest ones. Monthly samples were taken in three periods (morning, afternoon, and evening) using two otter-trawls with 6 m at the opening, 30.00 mm mesh at the outer part and 20.00 mm mesh in the bag. The sediment was trawled at depths from 6.0 to 10.0 m during 30 minutes at a constant speed of 2 knots. The water temperature was also measured monthly in these three periods.
The individuals of Petrochirus diogenes were removed from their shells and then measured (cephalothoracic length and width, mm) and weighed (g). The shells were also weighed (g). The sex of the crabs as well as the presence of ovigerous females was recorded. Monthly means of density and water temperature were calculated using morning, afternoon and evening samples as replicates. The population sex ratio was compared to 1:1 with the log-likelihood G test (ZAR 1996). The size and weight of males, ovigerous females and non-ovigerous females were compared through the non-parametric Kruskal-Wallis test followed by a non-parametric Tukey-type post-hoc test (ZAR 1996). Power functions (y = ax^b) were fitted to estimate the relationships of cephalothoracic length with cephalothoracic width and crab weight. This function was also fitted for the relationship between shell weight and both cephalothoracic length and crab weight. The Student t test was used to test the null hypothesis of isometry (b = 1 for linear relationships, i.e., length vs. length, or b = 3 for exponential relationships, i.e., length vs. weight) for these relationships. All tests were conducted with the significance level fixed at 0.05. Mean ± standard error is presented throughout the text. Since cephalothoracic length instead of shield length was measured in the present study, a conversion factor was used to compare the size distribution of this population of P. diogenes with previous studies.
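For illustration, the sketch below shows one way the allometric fitting and isometry test described above could be carried out in Python; it is not the authors' original workflow (they cite Zar 1996 and report only the resulting statistics), and the measurement arrays are hypothetical.

import numpy as np
from scipy import stats

def fit_power_and_test_isometry(x, y, b_null):
    # Fit log(y) = log(a) + b*log(x), then t-test the slope b against b_null.
    log_x, log_y = np.log(x), np.log(y)
    res = stats.linregress(log_x, log_y)          # slope = b, intercept = log(a)
    df = len(x) - 2                               # degrees of freedom
    t = (res.slope - b_null) / res.stderr         # t statistic for H0: b = b_null
    p = 2 * stats.t.sf(abs(t), df)                # two-tailed p value
    return res.slope, np.exp(res.intercept), t, df, p

# Hypothetical cephalothoracic lengths (mm) and crab weights (g).
length = np.array([19.0, 22.5, 27.0, 30.5, 35.0, 41.0, 55.0, 67.5])
weight = np.array([4.1, 6.8, 11.5, 16.9, 25.3, 40.2, 98.0, 180.0])

# For a length-vs-weight relationship, isometry corresponds to b = 3.
b, a, t, df, p = fit_power_and_test_isometry(length, weight, b_null=3)
print(f"b = {b:.2f}, a = {a:.5f}, t({df}) = {t:.2f}, p = {p:.3f}")

A non-significant t (p > 0.05) would be consistent with isometry, whereas a slope significantly below the null value would indicate negative allometry, as reported here for the crab size/weight versus shell weight relationships.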
RESULTS
Petrochirus diogenes was collected year round with a higher abundance in the summer months (December to March) and a peak in February (Fig. 1). The number of individuals decreased after this period and stayed low until November. The temperature varied from 15.7°C to 26.5°C during the sampling period, but the highest temperatures (25.0°C to 26.5°C) were recorded from November to February (Fig. 1). Then, the temperature decreased throughout winter and reached its minimum value in September. A total of 126 individuals were collected during the sampling period, with 66 females and 60 males. The mean size of P. diogenes was 30.61 ± 13.52 mm and the size structure of this population was skewed to the right (Fig. 2). The individuals were concentrated in the 20.00 to 30.00 mm size classes, but a slight increase was evident at the 67.50 mm size class. Males, ovigerous females and non-ovigerous females differed in size (cephalothoracic length and width) and weight (Tab. I). Males were on average larger and heavier than both ovigerous and non-ovigerous females, which, in turn, showed similar sizes and weights. The sex ratio did not differ from 1:1 (G = 0.29, df = 1, P < 0.001) when all these individuals were considered. When the sex ratio was analyzed for each size class, it became evident that the sex ratio was skewed toward females in the smallest size classes while males outnumbered females in the largest ones (Fig. 2).
The ovigerous females represented 61% of all females over the whole study period. They occurred irregularly over the year, from January to April and in September and December, with a peak in March (Fig. 3). The smallest ovigerous female was 19.00 mm. The cephalothoracic length and width of the crabs were strongly and isometrically correlated (Student t test, t = -1.31; df = 124; ns) (Fig. 4), as well as cephalothoracic length and crab weight (Student t test, t = 24.34; df = 124; p < 0.001). Negative allometry was recorded in the relationships between cephalothoracic length and shell weight and between crab weight and shell weight (Student t test, t = -9.12; df = 124; P < 0.001 and t = -10.54; df = 124, P < 0.001, respectively) (Fig. 4).
DISCUSSION
Temporal variation in the abundance of individuals of a given species at a given site may be caused by migrations or by mass mortalities associated with harsh environmental conditions in some periods of the year. In the warmer waters of
the tropical and subtropical regions, such seasonality in environmental parameters is not marked (MANTELATTO & FRANSOZO 1999; present study), but the temperature in coastal areas, such as bays, may oscillate between 25 and 28°C during the summer months. In the tropical Ubatuba region (5 to 20 m), Petrochirus diogenes reaches higher densities from June to August (NEGREIROS-FRANSOZO et al. 1997; BERTINI & FRANSOZO 1999b). Another hermit crab species, Dardanus insignis, also shows the same pattern in this region. Thus, seasonal migration between shallower/warmer and deeper/colder waters might be a possible hypothesis to explain temporal variation in these tropical hermit crab populations. In fact, there are records of P. diogenes up to 130 m depth (MELLO 1999). The temperate intertidal hermit crab Diogenes nitidimanus migrates to subtidal areas in the summer months (ASAKURA & KIKUCHI 1984; ASAKURA 1987). The tropical intertidal and shallow subtidal hermit crabs Clibanarius vittatus (FOTHERINGHAM 1975) and Pagurus longicarpus (RESACH 1978, 1981) also undergo seasonal vertical migrations from the shallow and colder to deep and warmer waters during the winter months. The subtropical population of P. diogenes at Armação of Itapocoroy had higher abundance in the sampled areas (6 to 10 m depth) during the warmer months. The water temperature in this area during this period of the year (25 to 26°C) is higher than the temperature in the winter months in the Ubatuba coastal region (19 to 23°C; BERTINI & FRANSOZO 1999a) but lower than that in the summer months. These data may indicate that P. diogenes has a vertical migratory behavior, avoiding shallow warm waters when the temperature is very high (28°C or more), although slightly lower temperatures in these shallow waters may be advantageous. In fact, temperatures around 25-26°C seem to be adequate for reproduction at Armação of Itapocoroy, since the peak of ovigerous females occurred in the same period as the abundance peak. This association between periods of high abundance of individuals in coastal waters and frequency of ovigerous females did not occur in the Ubatuba region (BERTINI & FRANSOZO 1999a, 2000a), although the peak of reproduction was also associated with the periods of highest temperatures (26-28°C) in the shallow waters sampled (10-20 m). These arguments indicate that the reproductive optimum in P. diogenes may occur within a limited range of temperatures, probably between 24 and 26°C, a fact that may be governing the vertical migration of this species and limiting its distribution to Uruguayan waters (COELHO & RAMOS-PORTO 1987). However, a more detailed study focusing on the influence of temperature on reproduction and on the depth distribution of ovigerous females is required to strengthen this assumption.
A conversion from cephalothoracic to shield length was employed to enable size comparisons with other populations. In this way, the shield length of this population was estimated to range from 9.10 to 41.30 mm. The population of P.
diogenes at the Ubatuba region showed a size range slightly skewed to lower values (5.40 to 40.00 mm; BERTINI & FRANSOZO 1999b), indicating the presence of smaller individuals than those collected at Armação of Itapocoroy. This was probably due to the smaller mesh size used by BERTINI & FRANSOZO (12 mm mesh size with 10 mm at the cod end, 1999b) in relation to the present study. In contrast, the mean size of the individuals in the Ubatuba population (17.70 mm) was larger than in the present study (14.70 mm), because P. diogenes at Armação of Itapocoroy showed a more evident skewness in the size frequency distribution. In both populations, the smallest individuals escape the fishing nets and inflate the smallest size classes. The stronger skewness in the size distribution of P. diogenes at Armação of Itapocoroy in comparison to Ubatuba is probably evidence of stronger overfishing in the former area.
Hermit crabs have variable patterns for the relationship between sex ratio and crab size (WENNER 1972). Petrochirus diogenes at Armação of Itapocoroy presents the standard pattern, with females dominating the smallest and males the largest size classes. Another population of this species at a northern site in Ubatuba also presented this pattern (BERTINI & FRANSOZO 2000a). This may indicate that the standard pattern of sex ratio (sensu WENNER 1972) is a species-specific instead of a population-specific characteristic. In fact, this pattern is quite dependent on the population size structure, characterized by males being larger than females. According to WENNER (1972), this pattern may be caused by a faster growth of males in relation to females. Despite this possibility, there is no information on growth rates of males and females for this species to reinforce this assumption. On the other hand, the lack of juveniles (individuals smaller than the smallest ovigerous female, 19.0 mm) and of small-sized individuals in the samples may prevent further discussion on this subject, since the pattern might change considerably if these individuals were collected by the sampling procedure employed in the present study and in the study conducted by BERTINI & FRANSOZO (2000a). Another possibility to explain the lack of small-sized sexually differentiated individuals in these two studies may be habitat partitioning between them and the mature individuals. However, there are no data to support this hypothesis.
According to BERTINI & FRANSOZO (2000a) and TURRA & LEITE (2000), hermit crabs may have continuous or seasonal reproductive patterns. TURRA & LEITE (2000) revealed that continuous reproduction is markedly more common in tropical waters but that seasonal reproduction is an important strategy in tropical and temperate regions. This is reinforced by P. diogenes, which presents a seasonal reproductive period in both tropical (BERTINI & FRANSOZO 2000a) and subtropical (present study) waters. It is important to note that these two populations have coincident peaks of ovigerous females between February and April and an absence or low number in the rest of the year. Since a temperature optimum may be associated with peaks of reproduction in this species, as exposed above, the depth at which reproduction takes place is supposed to vary between populations of this species along a latitudinal gradient.
Isometry was recorded between cephalothoracic length and width in this population of P. diogenes, as well as for the relationships between shield length and width in another population of this species (BERTINI & FRANSOZO 2000b) and in Dardanus insignis (FERNANDES-GOES & FRANSOZO 2000), both in the Ubatuba region. However, a sympatric population of D. insignis at Armação of Itapocoroy showed positive allometry between cephalothoracic length and width (BRANCO et al. 2002). Variation in the allometric patterns was also recorded for the relationship between cephalothoracic length and crab weight. Isometry was recorded in the study population and for D. insignis (shield length vs. shell weight, FERNANDES-GOES & FRANSOZO 2000) in the Ubatuba region, while BRANCO et al. (2002) recorded a negative allometry between these variables for D. insignis at Armação of Itapocoroy. This variability in allometric patterns between species and populations reinforces the plasticity of crab dimensions proposed by BLACKSTONE (1985). The negative allometry between crab size/weight and shell weight indicates that larger/heavier crabs are using proportionally lighter shells than smaller/lighter crabs. Since shell size and weight are generally well correlated (TURRA & LEITE 2001, 2002), one may argue that larger crabs are using proportionally smaller shells. This supports the hypothesis that large hermit crab individuals, and even species, are under stronger shell limitation than smaller ones (SPIGHT 1985).
for a review). In particular, studies on the population structure of hermit crabs are recent (see BERTINI & FRANSOZO 2000a; TURRA & LEITE 2000 for reviews) and showed that hermit crabs may have continuous or seasonal reproduction and may exhibit sexual dimorphism, with males being on average larger than females. Sex ratio is generally skewed toward females, especially in the smallest or intermediate size classes (BERTINI & FRANSOZO 2000a).
Fig. 1.
Fig. 1. Annual variation in mean density (inds/sample) of Petrochirus diogenes and in mean water temperature (°C) at the Armação of Itapocoroy, southern Brazil (data from morning, afternoon, and evening samples were averaged to generate monthly means).
Fig. 2.
Fig. 2. Size frequency distribution of the cephalothoracic length (mm) of males, ovigerous females and non-ovigerous females of Petrochirus diogenes at the Armação of Itapocoroy, southern Brazil. The proportion of males of Petrochirus diogenes through size classes is also shown. Table I. Comparison of the cephalothoracic length (mm), cephalothoracic width (mm) and weight of Petrochirus diogenes among reproductive classes (males, ovigerous females and non-ovigerous females) through the Kruskal-Wallis test (H). The number of observations in each reproductive class is shown within clasps. The superscript labels indicate the result of the Tukey-type non-parametric post-hoc test (ZAR 1996). Maximum and minimum values for each variable are shown within brackets. (X) Mean, (SE) standard error.
Fig. 3.
Fig. 3. Reproductive activity of Petrochirus diogenes indicated by the frequency of ovigerous females through the year at the Armação of Itapocoroy, southern Brazil. The monthly total number of individuals is shown within brackets.
Fig. 4.
Fig. 4. Morphometric relationships of Petrochirus diogenes showing the relationships between cephalothoracic length and width and between cephalothoracic length and crab weight (upper). The relationships between cephalothoracic length and shell weight and between crab and shell weights were also estimated (lower).
|
2019-03-30T13:04:55.298Z
|
2002-12-01T00:00:00.000
|
{
"year": 2002,
"sha1": "d7e0b1fd48c9f31876ed0cfdf40a5d63402bdec5",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/rbzool/a/368sRchHnN6NWSHh5X4cPZK/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d7e0b1fd48c9f31876ed0cfdf40a5d63402bdec5",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
}
|
10640322
|
pes2o/s2orc
|
v3-fos-license
|
Asthma Care in Resource-Poor Settings
Asthma prevalence in low- to middle-income countries is at least as high as in rich countries, and severity is increased. Lack of control in these settings is due to various factors such as low accessibility to effective medications, multiple and uncoordinated weak infrastructures of medical services for the management of chronic diseases such as asthma, poor compliance with prescribed therapy, lack of asthma education, and social and cultural factors. There is an urgent requirement for the implementation of better ways to treat asthma in underserved populations, enhancing access to preventive medications and to educational approaches that use modern technological methods.
INTRODUCTION
Asthma constitutes an important public health concern worldwide, affecting approximately 300 million people. Its prevalence has increased in recent years in many countries. The International Study of Asthma and Allergies in Childhood (ISAAC) demonstrated a world mean annual increase of 0.06% in adolescents and 0.13% in 6- to 7-year-old children between 1996 and 2003. In Latin America the increase was 0.32% in adolescents and 0.07% in children ages 6 to 7. 1 International guidelines have defined goals for asthma control through a correct diagnosis, assessment of severity, adequate therapies, and patient education. 2 They include control of symptoms, prevention of exacerbations, reduction of visits to the emergency department and hospitalizations, minimal utilization of reliever medications, no limitations of physical activity, normalization with absence of diurnal variations of lung function, and lack of adverse effects from asthma medications.
Recent studies carried out in various regions of the world concluded that, despite the guidelines presently available, asthma control is not achieved in most patients. [3][4][5] As an example, the AIRLA Study (Asthma Insights and Reality in Latin America) demonstrated that between 5 and 15% of patients had severe symptoms and between 40 and 77% needed hospital medical assistance. 5 These investigations concluded that there is a poor standard of care in all regions, with resource-poor countries faring no worse than rich countries in failing to achieve the GINA goals; both patients and care providers underestimated asthma severity, and a significant proportion of patients and care providers paid little attention to the use of inhaled corticosteroids. Among recommendations for better outcomes, experts suggested improving the access and affordability of corticosteroids, increasing patient and physician education, and implementing socially, culturally, and politically relevant management plans.
It is clear that asthma is a disease with great impact because of its high prevalence, the compromise of patient's quality of life, and its direct and indirect costs. The main direct costs of asthma are associated with lack of control, with frequent exacerbations and hospitalizations, visits to the emergency services, and nonprogrammed ambulatory physician consultations.
The purpose of this paper is to describe the influences of socioeconomic factors on the clinical expression and management of asthma with special consideration to the status of asthma care in Latin America.
DIFFICULTIES IN ASTHMA CARE
Asthma is a complex and highly prevalent disease. It has been recognized that management of asthma is a difficult task. Some obstacles to good disease control are the lack of education about asthma, the absence of effective prevention, multiple problems with delivery of drugs through the inhalation route, and the unavoidable fact that treatment guidelines are too complicated. Additionally, the majority of asthmatic patients do not have access to effective care. This is even worse because of cultural barriers, the low priority given to asthma by health authorities, and costs of medications. 6 Many preventable risk factors for asthma have been identified, including tobacco smoke, intradomiciliary air contamination, allergens, and occupational agents. Asthma and its risk factors generally receive insufficient attention from the health community, patients, families, and the media. Consequently, there is a lack of recognition, underdiagnosis, undertreatment, and insufficient prevention.
DEPRIVATION AND ASTHMA
The high prevalence of asthma implies various additional problems such as the limitations related to medical attention and availability of basic medications in countries with limited economic resources, the decrease in quality of life of patients and their families, the increased use of public health services, the high costs of health management, the increased school and work absenteeism and attendance, and the increase of mortality rates. 7 It has been estimated that asthma is responsible for 250,000 deaths every year, and the mortality is higher in median-and low-income countries. 8 The difficulties mentioned above are worse in deprived populations. For example, the adherence to asthma medication in poor populations is lower than 50%, a figure observed in studies from different nondeprived European and North American countries, whereas the access to essential drugs for adequate asthma treatment, especially to inhaled corticosteroids, is quite low in many developing countries (Fig. 1). A study carried out in Porto Alegre, Brazil, found that the overall rate of compliance in people with asthma was 51.9%. 9 In addition, the study by Watson and Lewis concluded that many patients with asthma living in developing countries are not receiving adequate therapy because the required drugs are not available in their area or are prohibitively expensive. 10 In a study done in 24 developing countries in Africa and Asia, Edwards observed that oxygen was available in 50% of the clinics, electricity in 25 of 41 centers, peak flow meters only in 3 sites or 26 of 41 doctors, rapid-acting bronchodilators in all, inhaled corticosteroids in 50% (considered too expensive), and no offering of patient education. 11 According to Rona, deprivation is consistently associated with increased severity of asthma but not with a higher prevalence. 12 This has also been shown by Baker et al in a study that demonstrated increased severity in patients who live in rented houses and lower severity in patients seen in private institutions. 13 Furthermore, it has been observed that guidelines adopt a general (not individual) approach to disease management, and that in poor countries the commitment of public health officials is low and there is a large diversity of uncoordinated health systems, variations in availability of medications, and limitations in resources. Additionally, in many primary care settings there is resistance to the implementation of National or International Guidelines for asthma diagnosis and management.
Poverty has other effects on asthma. It contributes to exacerbations, is a determinant of the quality of care that patients receive, and determines the psychosocial behavior which in turn impacts the management and prognosis of the condition. The lack of access to essential basic therapy, mainly inhaled steroids, is especially notorious in resource-poor countries.
Because of cost concerns, in deprived areas inhaled steroids, long-acting beta agonists, and other medications are not available universally, and new drugs, new devices, and formulations are too expensive.
Some investigators have proposed that in such settings the use of older steroids and cheaper oral drugs such as theophylline, oral corticosteroids, and oral bronchodilators would be a plausible alternative to no treatment. 14 Leukotriene receptor antagonists, having better adherence because of their oral administration, would also be useful if costs can be reduced and made accessible to a larger number of persons with asthma living in poor areas, [15][16][17][18][19][20][21] although it is known that leukotriene modifiers are in general less effective than inhaled corticosteroids for the reduction of asthma morbidity and mortality. Increased educational efforts, including nonphysician asthma educators, are required, in addition to providing global access to core medications at affordable prices and encouraging better health care provider and patient education.
CULTURAL AND SOCIOECONOMIC FACTORS
In a study carried out in East London, South Asians showed less confidence in controlling their asthma, were unfamiliar with preventive medications, and often expressed less confidence in their general practitioners. They managed asthma exacerbations with family advocacy, without systematic changes in prophylaxis, and without use of systemic corticosteroids. In general, they attended medical practices with weak strategies for asthma care and consequently showed increased risk of hospital admission. 22 Authors postulated that this could reflect either an intrinsic cultural characteristic or the difficulties of coping with asthma in deprived circumstances where racism is common and health services are often inadequate and inappropriate.
It is generally accepted that good access to primary care is associated with reduced risk of hospitalization. A behavioral intervention for doctors that promotes a partnership style of medical practice would increase patient's confidence and reduce their use of health services.
Another investigation by Moudgil and Honeybourne on differences in asthma management between white European and Indian subcontinent ethnic groups living in socioeconomically deprived areas in Birmingham, UK, concluded that the management of both ethnic groups centered on drug prescription, delivery techniques, and compliance, but was deficient, particularly in the Indian subcontinent group, in patient's understanding of the disease and self-management. 23
ASTHMA IS A PUBLIC HEALTH PROBLEM IN LATIN AMERICA
Asthma constitutes an important health issue not only in Latin American countries but also among Hispanic people living in North America. 24 Cooper et al suggested that the increased prevalence of asthma observed in some countries in Latin America such as Brazil and Costa Rica is associated with underprivileged populations living in cities, whereas the disease remains relatively rare in many rural populations. These authors suggested that causes of asthma in Latin America are likely to be associated with urbanization, migration, and the adoption of a modern "Westernized" lifestyle and the environmental changes that follow these processes. 25 When hospitalizations due to asthma in Latin America are analyzed, it can be seen that one of the factors involved is low utilization of controller medications, especially inhaled steroids. 26 The increased use of inhaled corticosteroids and possibly the reduction of theophylline therapy were suggested as the most relevant factors accompanying the decrease of asthma mortality in Argentina. 27 We will discuss the present situation of asthma in Venezuela as an example of the difficulties for asthma control in many Latin American settings. Health statistics confirm that asthma constitutes the second cause, after viral syndromes, for consultation in the outpatient clinics of the Ministry of Health and Social Development of Venezuela, with 865,738 visits in the year 2000, most likely an underestimation. It is the first among respiratory diseases, above tonsillitis, rhinopharyngitis, acute bronchitis, and pharyngitis.
Asthma treatment is focused mainly on exacerbations, and therefore asthma is a major burden in emergency department visits and hospitalizations in this country. Eighty-six percent of all medications sold for the treatment of asthma are bronchodilators and other rescue drugs.
In addition to this, in a study performed by Proyecto Venezuela (Fundacredesa) looking into growth and development data across the country, an increased frequency of asthma complaints drew the investigators' attention. When the socioeconomic levels of the patients were analyzed using a questionnaire similar to that of the ISAAC study, it was observed that there was a significantly increased prevalence of asthma in individuals from the lower socioeconomic classes (levels IV and V) as compared with the higher classes (levels I to III) (Table 1). The authors concluded that in this population asthma affects mainly individuals from low socioeconomic levels. 28 These investigations have suggested that asthma in Venezuela is a disease of young, urban, and poor people. It is likely that the same is occurring in many other large cities in Latin America.
The last revision of the National Asthma Program, based on GINA guidelines, was done in 1998. According to Recent studies have shown a reduction in the number of hospitalizations caused by asthma in various countries when effective preventive and controller measures are implemented. 30 Educational approaches for physicians and patients are essential to improve asthma control.
Brazil is a country that has recently taken the leadership in Latin America in regard to programs for asthma education and management. Lasmar et al reported on the Wheezy Child Program, the experience of the Belo Horizonte Pediatric Asthma Management Program. Using educational strategies, they observed a reduction of hospitalization rates for asthma and pneumonia in children, from 40.5% before admission to 8.6% after being included in the program. 31 Favorable results were also obtained in patients with severe asthma in Salvador, Bahia, as reported by Souza-Machado et al. 32
INTERVENTIONS TO IMPROVE ASTHMA CARE IN DEPRIVED POPULATIONS
Experts recognize that educational interventions are crucial to improve asthma care in underserved populations. De Oliveira et al compared asthma outcomes in patients receiving or lacking education for asthma. Although no differences in lung function, as determined by pre-and postbronchodilator peak flow rates, were present, significant improvements in the educational group for emergency room visits, nocturnal symptoms, symptom frequency, and quality of life were observed. Patients in the treated group showed adequate use of metered dose inhalers, better knowledge of rescue and preventive medications, and improved environmental control. In regard to medications, the educational group showed an increased frequency of inhaled corticosteroid use after implementation of the program. 33 Key educational issues for the patient include the understanding of the role of inflammation in asthma, how preventer/controller and reliever medications work and when they should be used, what to do in an emergency situation (with self-management instructions), and adequate education on the proper technique for the use of inhaled medications.
CONCLUSIONS
Asthma prevalence in deprived regions is high and shows increased severity. Reasons for inadequate asthma control in poor populations include low accessibility to effective controller medications, weak infrastructure of health services for the management of chronic diseases, poor adherence to the therapy, lack of educational approaches, and social, cultural, and language barriers. There is a need for the implementation of improved ways to treat asthma in these populations, enhancing the access to preventive medications and to educational interventions which include modern technological tools.
|
2017-06-19T11:29:00.475Z
|
2011-04-01T00:00:00.000
|
{
"year": 2011,
"sha1": "fba95110c50ae39e367470cbda0af35cbca03ec6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1097/wox.0b013e318213598d",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "fba95110c50ae39e367470cbda0af35cbca03ec6",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
255172239
|
pes2o/s2orc
|
v3-fos-license
|
Oral higher dose prednisolone to prevent stenosis after endoscopic submucosal dissection for early esophageal cancer
BACKGROUND Esophageal stenosis is one of the main complications of endoscopic submucosal dissection (ESD) for the treatment of large-area superficial esophageal squamous cell carcinoma and precancerous lesions (≥ 3/4 of the lumen). Oral prednisone is useful to prevent esophageal stenosis, but the curative effect remains controversial. AIM To share our experience of the precautions against esophageal stenosis after ESD to remove large superficial esophageal lesions. METHODS Between June 2019 and March 2022, we enrolled patients with large superficial esophageal squamous cell carcinoma or high-grade intraepithelial neoplasia who underwent ESD. Prednisone (50 mg/d) was administered orally from the second morning after ESD for 1 mo, and then tapered gradually (by 5 mg/wk) until 13 wk. RESULTS In total, 14 patients met the inclusion criteria. All patients underwent ESD without operation-related bleeding or perforation. There were 11 patients with mucosal defects involving ≥ 3/4 and < 7/8 of the lumen, 1 patient with a defect involving ≥ 7/8 of the lumen, and 2 patients with entirely circumferential mucosal defects due to ESD. The longitudinal extension of the esophageal mucosal defect was < 50 mm in 3 patients and ≥ 50 mm in 11 patients. The esophageal stenosis rate after ESD was 0% (0/14). One patient developed esophageal candida infection on the 30th d after ESD, and completely recovered after 7 d of oral fluconazole 100 mg/d. No other adverse events of oral steroids were found. CONCLUSION Oral prednisone (50 mg/d) with a prolonged tapering course may effectively prevent esophageal stricture after ESD without increasing the incidence of glucocorticoid-related adverse events. However, further investigation with larger samples is required to establish feasibility and safety.
INTRODUCTION
Endoscopic submucosal dissection (ESD) is one of the main treatment measures for early esophageal cancer and esophageal high-grade intraepithelial neoplasia [1]. It is minimally invasive and permits en bloc resection of large esophageal lesions without esophagectomy. However, esophageal stenosis frequently occurs after ESD resection of esophageal lesions, especially for lesions ≥ 3/4 of the lumen. Multivariate analysis has shown that an esophageal mucosal defect involving more than 3/4 of the lumen is an important predictor of stenosis formation. Without prophylactic treatment, the occurrence rate of esophageal strictures can reach 83.3%-94.1%; when the lesion affects the whole circumference of the esophagus, the rate of esophageal strictures is even higher [2,3]. This often requires repeated endoscopic balloon dilatation (EBD) to alleviate symptoms; however, the benefit is limited [4].
Recently, it has been reported that steroids, as a preventive treatment, can reduce the occurrence of stricture after esophageal ESD [5,6]. Yamaguchi et al [5] first reported that oral prednisone 30 mg/d can effectively prevent esophageal stenosis after ESD, reducing the postoperative stenosis rate to 5.3% (1/19). A study also showed that oral prednisone 30 mg/d is not effective in preventing esophageal stenosis in patients with an entirely circumferential or ≥ 7/8 circumferential mucosal defect [7]. Meanwhile, one case report described a patient with superficial esophageal squamous cell carcinoma who received high-dose dexamethasone therapy (40 mg for 4 d) for multiple myeloma starting on the 9th d after ESD. After three courses of treatment, no esophageal stenosis was found on follow-up gastroscopy, and histopathological evaluation showed that the submucosa became thinner and the degree of fibrosis of the post-ESD wound scar was minimal [8]. In a prospective study by Nakamura, 11 patients with superficial esophageal squamous cell carcinoma with lesions ≥ 3/4 of the circumference were treated with steroid pulse therapy (intravenous infusion of methylprednisolone sodium succinate 500 mg/d for 3 consecutive days, starting the day after ESD) [9]. Although steroid pulse therapy was safe, it had no preventive effect on esophageal stenosis after ESD. Therefore, oral steroid therapy is an effective method to prevent esophageal stenosis after esophageal ESD, but the dose, duration of use, effectiveness, and safety need to be studied further.
Patients
Between June 2019 and March 2022, 74 patients with superficial esophageal squamous cell carcinoma or precancerous lesions of the esophagus underwent en bloc resection by ESD at the Digestive Endoscopy Center of Shenzhen People's Hospital (Guangdong province, China). Of these, 18 patients underwent mucosal resection via ESD in which the mucosal defect involved 3/4 or more of the circumference of the esophageal lumen. However, 1 patient who had received additional chemoradiotherapy (CRT) and 3 patients who had undergone additional surgery were removed from our study. Ultimately, 14 patients were included in this study. The indication criteria were as follows: (1) Before esophageal ESD, the intrapapillary capillary loop pattern of the lesion mucosa observed by narrow-band imaging magnifying endoscopy was type B1; (2) The lesions were histologically confirmed before ESD as superficial esophageal squamous cell carcinoma or esophageal high-grade intraepithelial neoplasia; (3) Thoracoabdominal enhanced computed tomography (CT) showed no lymph node or distant metastasis; (4) Written informed consent was provided; (5) There was no achalasia; and (6) There was no corrosive injury of the esophagus. The exclusion criteria were as follows: (1) Patients who could not be followed up for 6 mo or longer; (2) patients whose stenosis formed before esophageal ESD; (3) patients who had prior esophageal cancer treated with CRT; and (4) patients with additional CRT or additional esophagectomy after non-curative ESD.
ESD procedure
ESD was performed in an operating room. The patients were endotracheally intubated and kept in the left lateral position. An endoscope (GIF-Q260J; Olympus Co., Tokyo, Japan) with a forward water-jet function was used with carbon dioxide insufflation. Postoperative-related bleeding was defined as bleeding requiring blood transfusion or surgical intervention, or bleeding that resulted in a 2 g/dL decline in hemoglobin levels. Postoperative-related perforation was diagnosed by endoscopy or chest CT [7]. All patients were treated with the proton pump inhibitor esomeprazole at a dose of 20 mg twice a day for 28 d after ESD [5,9]. After ESD, each patient received two calcium carbonate and vitamin D3 chewable tablets (each tablet contains 300 mg calcium and 60 IU vitamin D3) per day until the prednisone was stopped, to prevent glucocorticoid-induced osteoporosis [10].
Management for esophageal stenosis prevention
Prednisolone was taken orally at a dose of 50 mg/d from the next morning after ESD for 1 mo, and then tapered gradually (45, 40, 35, 30, 25, 20, 15, 10, and 5 mg/d for 7 d each), for a total course of 13 wk.
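As a rough check on this regimen, the short Python sketch below (a hypothetical helper, not part of the study protocol, assuming the 1-mo full-dose phase is counted as 30 days) lays out the schedule and sums the exposure; the total agrees with the cumulative dose of 3075 mg cited in the Discussion.

taper_schedule = [(50, 30)] + [(dose, 7) for dose in (45, 40, 35, 30, 25, 20, 15, 10, 5)]

total_days = sum(days for _, days in taper_schedule)               # 93 d (~13.3 wk)
total_dose_mg = sum(dose * days for dose, days in taper_schedule)  # 3075 mg

print(f"Duration: {total_days} d (~{total_days / 7:.1f} wk); cumulative dose: {total_dose_mg} mg")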
Follow-up
Regular endoscopy was performed at 1, 3, and 6 mo after the ESD operation, and then annually thereafter. In addition, endoscopic examination was performed whenever the patient had dysphagia to determine whether there was esophageal stenosis. EBD was performed subsequently if an esophageal stricture was identified. Any abnormal mucosa was biopsied for pathological evaluation of whether there was local tumor recurrence. Meanwhile, regular physical and blood examinations were carried out to evaluate the side effects of the steroid. Bone mineral density testing was performed before ESD treatment and 6 mo after ESD treatment.
Outcomes
The main outcome measure was the incidence of esophageal stenosis. Esophageal stenosis was defined as the inability of a 9.9 mm diameter gastroscope (GIF-Q260J; Olympus) to pass through the stenotic segment. As secondary observation indicators, glucocorticoid-related adverse events were assessed at 1, 3, and 6 mo after ESD, including newly diagnosed diabetes or aggravation of diabetes, peptic ulcer, adrenocortical insufficiency, aggravation of osteoporosis or fracture, and corticosteroid-related mental disorders. End point: follow-up was terminated if tumor recurrence, serious adverse events of glucocorticoids, or procedure-related complications (procedure-related bleeding and procedure-related perforation) occurred.
Statistical analyses
Continuous variables are presented as the mean ± standard deviation or median (interquartile range, 25%-75%). Categorical variables are expressed as proportions. Data analyses were conducted using SPSS 23.0 software (version 23.0 for Mac).
Background characteristics of patients
After ESD surgery, there were 18 patients with mucosal defects involving more than 3/4 of the esophageal circumference. One patient received additional CRT treatment and 3 patients received additional surgery; these were removed from this study. Eventually, a total of 14 patients met the criteria. Patient and lesion characteristics are shown in Table 1 and Table 2. Male patients accounted for 64%, with a mean age of 62.1 years (ranging from 45 to 75 years). According to the Paris endoscopic classification, 13 cases of endoscopic tumor morphology were classified as type 0-IIa and 1 case as type 0-IIc. The lesions were mainly located in the middle and lower esophagus, and 1 case was located in the upper esophagus. Each patient successfully received esophageal ESD treatment, and postoperative pathology confirmed that the lesion was completely removed. No patient had procedure-related bleeding or procedure-related perforation after esophageal ESD. The mean resection size was 55.5 mm in diameter (ranging from 47.5 mm to 65.0 mm). According to the extent of the esophageal mucosal defect, 11 cases involved ≥ 3/4 and < 7/8 of the circumference, 1 case involved ≥ 7/8 of the circumference, and 2 cases involved the entire circumference. The longitudinal extension of the mucosal defect was < 50 mm in 3 patients and ≥ 50 mm in 11 patients. In 12 cases, the depth of invasion of pathological tissue was limited to the epithelium and lamina propria mucosae, whereas 2 lesions were limited to the muscularis mucosae without lymphovascular infiltration. The shortest follow-up time of all cases was 6 mo, the longest follow-up time was 28 mo, and the median follow-up time was 13 mo. During this period, all patients were followed up by endoscopy regularly without dysphagia. The incidence of esophageal stenosis was 0% (0/14) (Table 3). Representative cases are shown in Figures 1 and 2.
Only 1 patient developed esophageal Candida infection, on the 30th d after ESD, and recovered completely after 7 d of treatment with oral fluconazole 100 mg/d. Patients were monitored for glucocorticoid-related adverse events such as newly diagnosed diabetes or aggravation of diabetes, peptic ulcer, adrenocortical insufficiency, aggravation of osteoporosis or fracture, and corticosteroid-related mental disorders; none were observed.
DISCUSSION
In the current study, increasing the dose of oral steroid (prednisolone 50 mg/d) and prolonging the treatment time (13 wk) were effective in preventing esophageal stenosis in patients with mucosal defects ≥ 3/4 of the circumference after ESD. Studies have shown that the occurrence of esophageal stricture after ESD is related to the infiltration of postoperative inflammatory cells and vascular proliferation [11,12]. At the same time, epithelial cells proliferate and migrate from the edge of the wound after ESD, fibroblasts proliferate continuously, and finally a fibrous scar is formed. This process is divided into three stages: acute inflammation, proliferation, and remodeling; however, the duration of this process is unknown. In a dog model study, Honda found that within about 1 mo after esophageal EMR, the mucosal defect healed and was covered by squamous cells. Although the muscularis propria was not damaged, muscle fiber atrophy still occurred in the 1st wk after the operation, and fibrosis finally formed [13]. Some clinical studies have also shown that esophageal stenosis mostly occurs within 2 to 4 wk after surgery [3,14], but this was confined to endoscopic observation. Glucocorticoids have a strong anti-inflammatory effect, which not only inhibits the synthesis of collagen but also promotes the decomposition of collagen to inhibit the formation of stenosis. In our study, no case developed esophageal stenosis after ESD. We believe that increasing the dose of prednisolone can enhance the anti-inflammatory effect in the acute inflammatory period, especially in the critical period of the 1st mo after ESD. Meanwhile, we speculate that the process of esophageal stenosis may last longer than expected, and prolonging the use of prednisolone may steadily inhibit the proliferation of fibroblasts to prevent esophageal remodeling and the formation of esophageal stenosis. There are several reports on the application of steroids to prevent stenosis after ESD for large-area superficial esophageal squamous cell carcinoma and precancerous lesions. Yamaguchi et al [5] reported the therapeutic effect of oral prednisolone after esophageal ESD for the first time. In their report, prednisone was started on the 3rd d after ESD, with an initial dose of 30 mg/d, and then decreased gradually (30, 30, 25, 25, 20, 15, 10, and 5 mg for 1 wk each). The incidence of stenosis after semi-circumferential ESD resection and entirely circumferential ESD resection was 6.3% (1/16) and 0% (0/3), respectively [5]. However, for cases with a circumferential esophageal mucosal defect after ESD, Sato et al [15] found that oral prednisone 30 mg/d could not reduce the incidence of postoperative esophageal stenosis, but could decrease the total number of EBD dilations required. In Kadota's study [7], the stenosis rate of patients with less than entirely circumferential ESD resection who received oral prednisone 30 mg/d was similar to the results of Yamaguchi et al [5], whereas patients with entirely circumferential ESD resection showed a higher stenosis rate (10/14) even with additional local submucosal steroid injections [7]. Meanwhile, two studies of submucosal injection of triamcinolone acetonide within the mucosal defect combined with oral prednisone for the prevention of esophageal stenosis post-ESD for lesions involving more than 3/4 of the circumference obtained completely opposite results.
Chu et al [16] reported that after treatment with submucosal injection of triamcinolone acetonide within the mucosal defect combined with oral prednisone 30 mg/d, the incidence of esophageal stenosis was only 18.2% (2/11), including lesions with total circumferential resection. Surprisingly, in the study by Hanaoka et al [17] of 12 cases with a whole circumferential defect, the same steroid submucosal injection combined with oral prednisone 5 mg/d was used for post-ESD treatment; nevertheless, 11 patients failed to avoid postoperative stenosis. This discrepancy may be caused by the different doses of orally taken prednisolone in these studies. A study of short-term use of oral prednisolone (30, 20, and 10 mg/d for 1 wk each) for mucosal defects ≥ 3/4 of the circumference, including 3 patients with total circumferential resection, showed a stenosis rate of 18% (3/17), and 1 of the 3 patients with total circumferential resection withstood stenosis [18]. Accordingly, we speculate that the prevention of esophageal stenosis after esophageal ESD by oral prednisone is correlated with the dose and the duration of use. In our study, we increased the dose of prednisone to 50 mg/d and prolonged the treatment time to 13 wk. During our follow-up, none of the 14 patients reported dysphagia symptoms, and no esophageal stenosis was observed on endoscopic examination. In particular, in the 2 patients with entirely circumferential mucosal defects, although the esophageal wounds were fibrotic, the 9.9 mm diameter gastroscope (GIF-Q260J; Olympus) could pass, and we did not add EBD treatment. In addition, studies have shown that injury of the muscularis propria is one of the risk factors for esophageal stenosis after ESD for early esophageal cancer and precancerous lesions [19,20]. Therefore, we paid more attention to avoiding injury of the muscularis propria as much as possible during the ESD operation, which we think is also helpful for the prevention of postoperative esophageal stenosis. Furthermore, systemic steroids are associated with adverse events, including newly diagnosed diabetes or aggravation of diabetes, peptic ulcer, adrenocortical insufficiency, aggravation of osteoporosis or fracture, and corticosteroid-related mental disorders. Stuck et al [21] showed that when the cumulative dose of oral prednisone exceeded 700 mg, the risk of infectious complications in patients taking prednisone increased with increasing prednisone dosage. One study also found that even short-term steroid use is related to increased risks of adverse events [22]. However, in our protocol, the accumulated dose of oral steroids was 3075 mg, which was higher than that of other studies, and a proton pump inhibitor, oral calcium, and vitamin D3 were taken simultaneously. One patient was found to have esophageal Candida infection on the 30th d after the operation and completely recovered after 7 d of oral fluconazole 100 mg/d therapy, and no patient experienced other adverse events related to orally taken prednisolone. Therefore, we believe that the treatment scheme with an increased dose of prednisone (50 mg/d) is safe, but it still needs long-term follow-up and observation.
This study had several limitations. First, this study was a single-center retrospective analysis, and possible bias could not be avoided. Second, the follow-up time was insufficient to comprehensively evaluate the feasibility and safety of the steroid regimen. Third, the number of subjects was relatively small and a control group was lacking, so statistical analysis of differences could not be conducted. Because of these limitations, prospective randomized controlled studies should be established to validate the efficacy and safety of prophylactic steroid therapy.
CONCLUSION
In conclusion, increasing the dose of oral prednisone (50 mg/d) and prolonging the usage time (total 13 wk) may effectively prevent esophageal stenosis after ESD to remove large-area superficial esophageal squamous cell carcinoma or precancerous lesions of the esophagus, and does not increase the incidence of glucocorticoid-related adverse events.

Figure legend (representative case): A: Endoscopic view of the tumor after Lugol's staining; the tumor spread to about the entire circumference of the esophageal lumen. B: Endoscopic view of the ulcer bed immediately after endoscopic submucosal dissection; the width of the mucosal defect was the entire lumen circumference, and oral steroid was then administered as a prophylactic treatment. C: Endoscopic view 6 mo later; the mucosal defect underwent complete epithelialization, and a 9.9 mm diameter gastroscope (Olympus GIF-Q260J) could pass. D: Endoscopic view after 1 yr; the endoscope could pass without dysphagia.
Research background
Esophageal stenosis is one of the main complications of endoscopic submucosal dissection (ESD) for the treatment of large-area superficial esophageal squamous cell carcinoma and precancerous lesions (≥ 3/4 of the lumen). Oral prednisone is useful to prevent esophageal stenosis, but the curative effect remains controversial.
Research motivation
Explore more effective methods to prevent esophageal stenosis after ESD for early esophageal cancer and precancerous lesions.
Research objectives
We shared our experience of the precautions against esophageal stenosis after ESD to remove large superficial esophageal lesions.
Research methods
Patients with large superficial esophageal squamous cell carcinoma or high-grade intraepithelial neoplasia who underwent ESD were enrolled. Prednisone (50 mg/d) was administered orally from the 2nd d after ESD for 1 mo, and then tapered gradually (by 5 mg/wk) until 13 wk.
|
2022-12-28T16:01:52.261Z
|
2022-12-26T00:00:00.000
|
{
"year": 2022,
"sha1": "65af0bae5c892f442f878a89d4d8375dc96fbe16",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.12998/wjcc.v10.i36.13264",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "952c2c700c398ca7bb8e4e125105d058f1ffdc6e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
5994819
|
pes2o/s2orc
|
v3-fos-license
|
Techniques for assessing teratogenic effects: epidemiology.
Epidemiologic studies of malformations can aid in the understanding of human teratogenesis. Employing a variety of approaches epidemiology can develop or test hypotheses concerning possible causes or through surveillance provide data useful for a variety of purposes. Drawing heavily upon our experiences at the Center for Disease Control, this paper reviews some concepts and uses of epidemiology in studies of human teratogenesis.
Epidemiology is the study of the distribution and determinants of disease among populations. It is the study of the causes and manifestations of disease and of their associations. This paper focuses on the use of epidemiology to study human structural malformations and their environmental determinants. Epidemiologic studies of malformations may be classified as descriptive or analytic. In descriptive studies, malformation cases are characterized according to frequency, race, sex, or other variables related to their occurrence; in analytic studies their associations with possible environmental determinants are defined and tested. Another part of the epidemiology of malformations is surveillance, the routine collection, analysis, and reporting of data.
Some concepts should be kept in mind when considering epidemiologic studies of malformations and teratogenesis. First, structural malformations are only one of several pregnancy outcomes influenced by environmental determinants. Others include spontaneous abortion, fetal and infant death, low birth weight, mental retardation, deafness, and blindness. Limiting attention to malformations only excludes other effects of the environment on the fetus. For practical reasons, studies are generally restricted to one outcome, but the other disorders should be kept in mind, particularly when the study design allows consideration of them.
*Cancer and Birth Defects Division, Bureau of Epidemiology, Center for Disease Control, Public Health Service, U.S. Department of Health, Education, and Welfare, Atlanta, Georgia 30333.
Second, unless there is evidence for grouping malformations, each should be considered separately, even those involving the same body system. Patients with anencephaly-spina bifida (ASB), for example, have a lower proportion of affected males, greater racial variation, and more familial recurrence than patients with hydrocephalus alone (1)(2)(3)(4). Similarly, patients with cleft lip with or without cleft palate differ epidemiologically and, therefore, probably etiologically from those with cleft palate only (5). In fact, patients with cleft lip and palate plus other malformations differ epidemiologically from those with facial clefts only (6). Despite the frequent need for larger numbers of cases to study, the cases should not be combined without regard for their possible epidemiologic or etiologic differences.
Third, one malformation can have a variety of causes, and one cause can result in a variety of malformations. Rubella infection during pregnancy, for instance, can cause congenital heart defects, cataracts, mental retardation, and deafness in affected infants. Conversely, congenital cataracts, while caused by the rubella virus, can also occur with over 20 other conditions, including Down's syndrome, a chromosomally determined condition, and galactosemia, a genetically determined biochemical abnormality (7).
Last, many factors and teratogenic mechanisms determine the number and type of human malformations caused by exposure in utero to an environmental teratogen. Discussion of these factors and mechanisms is beyond the scope of this presentation, but they have been considered in some detail by Wilson (8). Knowledge of these mechanisms must be brought to bear on epidemiologic studies. For instance, an important determinant of malformations is the gestational age at which the fetus is exposed to the teratogen. The now classic example is thalidomide, which produced a variety of structural abnormalities among infants exposed in utero. Most typical were reduction deformities of the limbs and anotia; other anomalies included duodenal stenosis and atresia, heart anomalies, facial hemangiomata, and anophthalmia or microphthalmia (9,10). Early in the studies of thalidomide, investigators reported that at least 139 pregnant women had taken the drug without its causing fetal malformations. Upon establishing more precisely when the mothers took it, however, they defined the embryo-sensitive period as 34-50 days after the first day of the last menstrual period (11). Embryos exposed during this period were malformed; those exposed outside this sensitive period were unaffected. In this instance, the fact of exposure had to be further qualified by the time of exposure. When this was done, the findings were consistent with knowledge about the timing of normal embryogenesis.
Descriptive Epidemiology
Descriptive epidemiology characterizes patients with malformations according to such variables as geographical differences, seasonal patterns, socioeconomic and racial influences, and maternal age effects. Such studies may identify high or low incidence populations which can suggest areas for further work. The vast majority of epidemiologic studies in humans to date have been descriptive. ASB has been well described epidemiologically because two circumstances have made these defects especially well suited for study: affected infants are easily recognized and reliably reported and cases occur with sufficient frequency that adequate numbers can be assembled for study.
From these descriptive studies has come an epidemiologic picture of ASB which suggests in part an environmental etiology. Pronounced geographic differences exist between countries and even within the same country. In Ireland, for instance, over 1% of newborns are affected with ASB, while in Great Britain a two- to threefold increase in rates occurs in a progression from southeast to northwest (3,12). In the United States, the ASB incidence declines from east to west, with the rate of 1.0 per thousand in the South being twice that in the West (13). Other findings also suggestive of environmental factors include an inverse relationship between ASB incidence and socioeconomic status (14,15) and incidence changes of epidemic proportions over time (16,17).
Descriptive studies of malformations in the United States have generally drawn upon three data sources: vital records (birth, death, and fetal death certificates), available hospital records, and special clinical surveys often employing standard infant examinations. Each source has advantages and disadvantages in terms of reporting completeness, uniformity and reliability of diagnosis, and ease with which the data are obtained. Vital records yield rates of 1% to 1.5% for total malformed infants, hospital records yield rates of 2.5% (18)(19)(20), and intensive clinical surveys yield rates ranging from 5% to 15% (21-23).
These ranges reflect the composite effect of several variables, including the completeness of recognition and recording of malformations, the infant's age at diagnosis, and the definitions used to distinguish malformations, variants, and states of normalcy. The intensive clinical studies provide the highest rates because they insure that every newborn receives a standard examination with special attention to such relatively inconspicuous conditions as ear shape, the configuration of fingers or toes, and the existence of more trivial birth marks. Of the other two sources, vital records are the most deficient (20), yet they are still quite suitable for some uses. For example, studies of seasonality or the effects of maternal age and parity upon the occurrence of some malformations can be done with vital record data.
Another determinant of malformation case rates is the age at which malformations are recognized and still reported (21,24,25). Internal anomalies of the cardiovascular, renal, and gastrointestinal systems in particular may not be evident until some weeks, months, or even years after birth. Therefore, rates will be higher from data sources which provide for ascertainment of cases into later life.
Descriptive studies require sizeable case numbers and knowledge of the characteristics of the general population from which the cases are drawn. For this reason, they often use data available from the Bureau of the Census or from state vital records departments. These populations generally coincide with county, multicounty, or state boundaries and hence are geographically (or population) based.
The use of a geographic base can pose problems of case ascertainment. Identifying and registering cases over a large geographic area requires the cooperation of many hospitals and sometimes other facilities where malformed infants are seen. The involvement of a number and variety of data sources precludes an intensive clinical examination of each infant. Therefore, one must develop some mechanism for obtaining case reports which combines hospital and vital record reporting, realizing that there will be some loss in reporting completeness.
One example of such a community-based surveillance program is the Metropolitan Atlanta Congenital Defects Program (26).
Analytic Epidemiology
From descriptive studies, laboratory investigations, clinical reports, and other sources, then, come leads about possible human teratogens. These leads generally result in hypothesis testing with analytic studies. The data available from the Atlanta program, for example, have been used for analytic studies of drugs and other environmental teratogens. In 1973, on the heels of a reported association between parental use of spray adhesive compounds, chromosomal damage, and congenital malformations, we compared time trends for the rate of malformations with the sales of the suspected teratogen (Fig. 1) (27). In the face of a marked increase in sales, no change occurred in the rates of all malformations, infants with multiple defects, or any single malformation. To determine the extent to which pregnant women might have been exposed to the compounds, 173 postpartum women in five Atlanta hospitals were interviewed about their exposure to spray adhesive compounds: nine (4.6%) reported having used them at some time. While the lack of change in incidence is evidence against substantial teratogenesis, a small effect could have been present and not detected by this method. However, by using these same data, an estimate also can be made of the maximum teratogenic risks which the compounds pose. In this case, the proportion of women at teratogenic risk was the proportion of women exposed to spray adhesives who had malformed babies. The rarer the defect, the smaller the risk that might be detected. From the interview data it is estimated that some 5% (1400) of the 28,000 Atlanta women delivering babies in 1973 were potentially exposed. If these compounds affected, for example, 2% of the 1400 pregnancies, there would be 28 "extra" cases from exposure. This would appear as a twofold increase over the usual malformation rate of 1 per 1000 or less. (Roughly 95% of the 130 different malformations coded in Atlanta occur at this frequency or less.) In the unlikely event that spray adhesive compounds caused an increase in all malformations, a 100/ risk would be detectable. This back-of-envelope estimate is illustrated in the short calculation following this passage.
Two other types of analytic studies are the case-control and cohort studies. The case-control study examines a group of cases with one or more of the same malformation(s) and a group of suitable controls, comparing their frequencies of exposure to an environmental agent or the presence of some other determinant. Greater frequency of exposure among the case group may suggest a causal association with the malformation under study. Since the data on exposure are collected after the malformation has been diagnosed, the case-control study is in the temporal sense retrospective.
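The maximum-risk arithmetic in the spray-adhesive example can be written out explicitly. The short Python sketch below only restates the figures quoted in the text (28,000 deliveries, roughly 5% exposure, a hypothetical 2% risk among the exposed); it is an illustration of the back-of-envelope estimate, not part of the original analysis.

```python
# Illustrative back-of-envelope estimate of the teratogenic risk detectable
# from the Atlanta spray-adhesive data (all figures are those quoted in the text).

total_births = 28_000          # Atlanta deliveries in 1973
exposed_fraction = 0.05        # ~5% of interviewed women reported exposure
baseline_rate = 1 / 1000       # typical rate for a single malformation

exposed = total_births * exposed_fraction       # ~1,400 exposed pregnancies
assumed_risk = 0.02                             # hypothetical 2% risk among the exposed
extra_cases = exposed * assumed_risk            # "extra" malformed infants

baseline_cases = total_births * baseline_rate   # expected cases without any exposure effect
relative_increase = (baseline_cases + extra_cases) / baseline_cases

print(f"Exposed pregnancies: {exposed:.0f}")
print(f"Extra cases from a {assumed_risk:.0%} risk: {extra_cases:.0f}")
print(f"Apparent increase over baseline: {relative_increase:.1f}x")
```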
Another phase of the Metropolitan Atlanta Congenital Defects Program is the regular interview of mothers 3 months after the birth of an infant with any of 12 selected malformations.
Mothers are questioned about their occupation before and during pregnancy, their residence at conception, any illnesses, and any drugs taken during pregnancy. When the interviews began in 1970, Gal et al., also using a case-control study, had reported an association between the use of hormonal tests for pregnancy and the later birth of infants with defects of the central nervous system (28). As part of the Atlanta interview we included a question about the hormonal pregnancy test. Over a 3-yr period, 123 mothers of infants with ASB and 310 mothers of infants with other malformations, who served as controls, were interviewed (Table 1). We chose these mothers as controls because mothers of malformed infants have been shown to recall more events during pregnancy than mothers of normal infants (29). We found no significant differences in frequency of use of the hormonal pregnancy test (30).
The second type of analytic study is the cohort approach. These studies begin with the fact of exposure, for example, a group of pregnant women exposed to an environmental teratogen. These women and a comparison group, preferably similar in all respects except exposure, are then followed prospectively from exposure to outcome. Alternatively, the exposed and unexposed cohorts are analyzed retrospectively after pregnancy has ended. So long as the presence or absence of exposure is defined independently of any knowledge of pregnancy outcome, the "retrospective" nature of such a study need not be reason for skepticism. In fact, this latter approach can be preferable since there need not be provisions for long-term observation of the exposed and non-exposed cohorts.
One notable advantage of cohort studies of teratogenesis is that a variety of poor pregnancy outcomes can be sought among the exposed population. Provided the exposed women are followed from early in pregnancy, the range of effects upon pregnancy, including abortion, stillborn infants, and malformations, can be determined. Continued follow-up of infants exposed in utero might determine further effects such as mental retardation, learning disabilities, deafness, or blindness. The commonly overriding disadvantage of cohort studies is their expense. The low frequency of most of these conditions necessitates the study of large numbers of persons. Acceptable statistical boundaries for such a study might be a 95% certainty of detecting an effect when present and a 5% risk that an observed effect is falsely positive. Within these limits, a cohort of approximately 700 exposed pregnant women would be required to detect a 10-fold increase in a malformation which occurs normally at a rate of 1 per 1000. Within the same boundaries, detection of a twofold increase would require enrollment of approximately 17,000 exposed women.
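The cohort sizes quoted above can be approximated with a one-sample normal-approximation power calculation. The sketch below is only illustrative: the paper does not state which formula produced its figures of roughly 700 and 17,000, and this approximation gives numbers of the same order rather than the exact values.

```python
# Approximate exposed-cohort size needed to detect an increased malformation rate,
# using a one-sample normal approximation (illustrative only; the original authors'
# exact method is not stated in the text).
from statistics import NormalDist

def cohort_size(p0: float, p1: float, alpha: float = 0.05, power: float = 0.95) -> float:
    """Exposed-cohort size needed to detect rate p1 against a background rate p0."""
    z_a = NormalDist().inv_cdf(1 - alpha)   # one-sided significance level
    z_b = NormalDist().inv_cdf(power)       # desired power
    num = (z_a * (p0 * (1 - p0)) ** 0.5 + z_b * (p1 * (1 - p1)) ** 0.5) ** 2
    return num / (p1 - p0) ** 2

baseline = 1 / 1000
print(f"10-fold increase: ~{cohort_size(baseline, 10 * baseline):.0f} exposed women")
print(f"2-fold increase:  ~{cohort_size(baseline, 2 * baseline):.0f} exposed women")
```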
Surveillance
Surveillance is a third component of the epidemiology of congenital malformations. Its purposes include the provision of data for descriptive studies, the development of a registry for case-control studies, and the monitoring of malformation cases for changes suggestive of environmental influences (31). Monitoring and surveillance can be used interchangeably so long as the term conveys a sense of immediacy in the collection and analysis of data. The importance of monitoring has been considered to be the early detection of thalidomide-like epidemics. To date no such episodes have occurred, to our knowledge, so the usefulness of this approach has not yet been tested. Nevertheless, in the face of the countless new substances which current technology contributes to the environment and whose teratogenic effects upon humans are not known, malformation monitoring must be considered prudent for the foreseeable future.
While monitoring of malformation cases can detect changes which might suggest environmental causes, it will not necessarily explain the cause of the change. Vital to the monitoring activity is the systematic comparison of the current incidence of malformations with an expected value based on previous experience. Comparisons should provide for recognition of sudden increases, gradually rising trends, and geographic differences. Because of these many and varied comparisons, computers have come to play an important role in monitoring.
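As a schematic of the kind of observed-versus-expected comparison such monitoring performs, the Python sketch below flags a defect category when the observed count would be this large or larger with probability of 0.01 or less under a Poisson model for the expected count. The counts shown are hypothetical, and the actual statistical procedures used by these programs are not specified in the text.

```python
# Schematic observed-vs-expected check for malformation monitoring.
# A category is flagged when P(X >= observed | X ~ Poisson(expected)) <= 0.01.
# Illustrative only; the monitoring programs' real procedures are not given here.
from math import exp, factorial

def poisson_upper_tail(observed: int, expected: float) -> float:
    """P(X >= observed) for X ~ Poisson(expected)."""
    lower = sum(exp(-expected) * expected ** k / factorial(k) for k in range(observed))
    return 1.0 - lower

# Hypothetical quarterly counts: (defect category, observed, expected from baseline years)
reports = [("anencephaly-spina bifida", 14, 9.2),
           ("cleft lip +/- palate", 31, 18.0),
           ("lung agenesis", 9, 2.4)]

for defect, obs, expected in reports:
    p = poisson_upper_tail(obs, expected)
    flag = "FLAG" if p <= 0.01 else "ok"
    print(f"{defect:28s} observed={obs:3d} expected={expected:5.1f} p={p:.4f} {flag}")
```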
The Metropolitan Atlanta Congenital Defects Program has provided for the local monitoring of malformation cases at monthly intervals since 1970. Although many increases have been noted, most were transient, and to date no environmental cause for any of them has been found. Four other programs, in Nebraska, Washington, Upper New York State, and northern Florida, have also contributed to the monitoring effort in this country. The Nebraska and Florida programs use hospital records, while the New York and Washington programs rely upon vital records (31).
The first nationwide monitoring effort was started at the Center for Disease Control in December 1974 with the implementation of the Birth Defects Monitoring Program (BDMP) (partially funded by Interagency Agreement No. YO-HD-31002) (13). This program was developed jointly by CDC, the National Institute for Child Health and Human Development, the National Foundation-March of Dimes, and the Commission on Professional and Hospital Activities (CPHA), a health-data-processing organization located in Ann Arbor, Michigan. Data used in the BDMP are obtained from hospital discharge records of some 1,200 hospitals located throughout the country. These BDMP hospitals are among those already enrolled in CPHA's Professional Activity Study (PAS), a computerized service for participating hospitals.
The BDMP uses data on newborn discharges already sent to CPHA by the PAS hospitals. There are presently some 1 million births annually in BDMP-PAS participating hospitals. Because of the self-selection of these hospitals, the data are not a random sample of U.S. births nor are they geographically based. Nevertheless, the BDMP represents the largest single source of uniformly collected and coded data presently available on malformations among newborns in the United States. The percentage of live births by census division in the BDMP varies from over 15% in the East and West South Central Divisions to 51% in the East North Central Division (Fig. 2). This variation reflects the self-selection of hospitals which choose first and primarily to participate in the PAS and, second, in the BDMP. For a case with a malformation to be used in the BDMP system, the defect must be apparent at birth or during the newborn's nursery stay, be noted by the attending physician, and be recorded by the medical record department staff on discharge abstracts routinely sent to CPHA.
At CPHA a special data file is maintained for all newborns discharged from BDMP hospitals with any of some 200 different defects. Data already available for 1970 through 1973 were entered into this special file and are used as the baseline rate for calculating the expected numbers of malformation cases. At quarterly intervals, for each defect category the most recently observed number of cases is compared with the expected. Comparisons are made for 3-, 6-, and 12-month time periods and for geographic areas ranging from individual county to the whole United States. Whenever the observed number of cases in any of these exceeds the expected number at the 0.01 level or greater, the defect and the geographic area are listed on an exceptions report by the computer. These lists along with other data are then sent to CDC for analysis and evaluation for possible environmental causes. One of the more striking increases observed early was a three- to fourfold increase in the incidence of lung agenesis, which occurred between 1973 and 1974 rather uniformly throughout the United States (Fig. 3). Upon further inquiry it was noted that a revision of the H-ICDA code (32) instituted on January 1, 1974, had resulted in the assignment of lung hypoplasia, a different but related defect, to the lung agenesis category.
Identification of Disulfide Bonds among the Nine Core 2 N-Acetylglucosaminyltransferase-M Cysteines Conserved in the Mucin β6-N-Acetylglucosaminyltransferase Family*
Bovine core 2 β1,6-N-acetylglucosaminyltransferase-M (bC2GnT-M) catalyzes the formation of all mucin β1,6-N-acetylglucosaminides, including core 2, core 4, and blood group I structures. These structures expand the complexity of mucin carbohydrate structure and thus the functional potential of mucins. The four known mucin β1,6-N-acetylglucosaminyltransferases contain nine conserved cysteines. We determined the disulfide bond assignments of these cysteines in [35S]cysteine-labeled bC2GnT-M isolated from the serum-free conditioned medium of Chinese hamster ovary cells stably transfected with a pSecTag plasmid. This plasmid contains bC2GnT-M cDNA devoid of the 5′ sequence coding the cytoplasmic tail and transmembrane domain. The C18 reversed phase high performance liquid chromatographic profile of the tryptic peptides of reduced-alkylated bC2GnT-M …
There are two types of mucins (1,2): secreted and membrane-bound. MUC2, MUC5AC, and MUC5B are representatives of secreted mucins, whereas MUC1, MUC4, leukosialin, and P-selectin glycoprotein ligand-1 are examples of membrane-bound mucins (3,4). Secreted mucins are produced by epithelial mucus cells and play important roles in the rheological and bacteria-binding properties of the mucus covering the epithelial tissues (5,6). Membrane-bound mucins are found at the cell surface throughout the body (3,4). They can modulate immune functions, such as maturation of B cells and trafficking of leukocytes during inflammatory response (3,4). The biological properties of both secreted and membrane-bound mucins are attributed to the structurally heterogeneous carbohydrates covalently bound to the peptide backbones.
These β1,6-GlcNAc transferase (β6GnT) isozymes differ by their nucleotide and amino acid sequences, tissue distribution, and the carbohydrate structures they are able to form (9). Despite these differences, all β6GnTs contain nine conserved cysteines (9,12). In an effort to elucidate the structural determinants that distinguish the difference in substrate specificity among members of this gene family, we characterized the disulfide linkages formed among these nine conserved cysteines in bC2GnT-M. To facilitate the effort, we generated a secreted form of the recombinant bC2GnT-M by removing the N-terminal region that contains the cytoplasmic tail and transmembrane domain and cloned the cDNA into pSecTag2B, which contains an Ig κ-chain leader sequence at the N terminus and a Myc epitope and polyhistidine tag at the C-terminal end. By microsequencing of the [35S]cysteine-containing tryptic peptides separated by reversed phase high performance liquid chromatography (RP-HPLC), we identified four cystine pairs between the first and sixth, third and seventh, fourth and fifth, and eighth and ninth cysteine residues. The second cysteine was not conjugated. This pattern of disulfide bond distribution is different from that of mouse C2GnT-L recently reported (12). The results indicate that the conservation of nine cysteines does not lead to the formation of the same disulfide bonds in different isozymes, suggesting that other factors such as secondary structures may play a crucial role in determining the formation of disulfide bonds and substrate specificity. Molecular modeling using the distribution of disulfide bonds and a fold recognition/threading method to search the Protein Data Bank showed a match with the crystal structure of aspartate aminotransferase (20,21). This template permits proper spatial arrangement of the cysteines involved in the formation of the four cystine pairs determined for bC2GnT-M. The structure is different from either the glycosyltransferase B-fold structure proposed for mouse C2GnT-L (12,22) or the glycosyltransferase A-fold, the major protein fold proposed for glycosyltransferases (22,23).
The cDNA encoding the truncated bC2GnT-M (17) was amplified by PCR using 5′ and 3′ primers containing EcoRI and KpnI restriction sites, respectively. The PCR product was cloned into a Pichia vector (pPIC6αC) (Invitrogen) first and then transferred via the EcoRI and NotI sites to the pSecTag2B vector, which contains an Ig κ-chain leader at the N terminus and a Myc and a His6 tag at the C terminus. After confirmation by sequencing and then an enzyme activity assay of the recombinant protein following a transient transfection of CHO cells by published methods (24), the pSecTag2B-bC2GnT-M was used to generate stable clones in CHO cells as described next.
After they were cultured in Ham's F-12 medium plus 10% fetal bovine serum to 70% confluence, the CHO cells were transfected under serum-free conditions with pSecTag2B-bC2GnT-M delivered with Lipofectin supplemented with insulin as previously described (24). Two days later, cells were split 1:4 and cultured in Ham's F-12 medium plus 10% fetal bovine serum and 300 μg/ml Zeocin (Invitrogen). After 10 days, 24 clones were picked and characterized for C2GnT activity. The clone that expressed the highest C2GnT activity after four passages was used for the current study.
Assay of C2GnT-L, C4GnT-M, and IGnT Activities of Recombinant bC2GnT-M-The recombinant bC2GnT-M generated by the CHO cells stably transfected with pSecTag bC2GnT-M was assayed for C2GnT-L, C4GnT-M, and IGnT activities in the cells and conditioned medium as previously described (17). The conditioned medium was first concentrated 10-fold at 4°C by centricon filtration with a 30-kDa molecular weight cut-off membrane (Millipore Corp.).
Metabolic Labeling of bC2GnT-M-The [35S]cysteine (or [35S]methionine)-labeled bC2GnT-M was prepared from CHO cells stably transfected with pSecTag2B-bC2GnT-M as follows. The CHO cells that had grown in T-75 flasks to 90% confluence in Medium A (see "Cell Culture") were switched to 12 ml of serum-free Medium A containing 2 mM sodium butyrate and cultured for 6-7 h. Then the cells were exposed for 1 h to Dulbecco's modified Eagle's medium (catalog no. 21013-024; Invitrogen) supplemented with 2 mM L-glutamine, 1 mM sodium pyruvate, 15 μg/ml methionine (for preparing [35S]cysteine-labeled bC2GnT-M) (25) or 24 μg/ml L-cysteine-HCl (for preparing [35S]methionine-labeled bC2GnT-M), and 2 mM butyrate. Following the addition of 63 μl of [35S]cysteine-HCl (11 mCi/ml at 1,075 Ci/mmol) or 22 μl of [35S]methionine (10 mCi/ml at 540 Ci/mmol) (ICN) to each T-75 flask and incubation for 1-2 h, the medium was replaced with serum-free Medium A containing 2 mM butyrate. After the cells were cultured for 24-48 h, the conditioned medium was harvested, centrifuged at 1,000 × g for 5 min to remove cell debris, and used for purification.
Purification of 35S-labeled bC2GnT-M-The 35S-labeled bC2GnT-M was purified from the combined supernatant in a two-step process. First, the medium was concentrated at 4°C from 180 to 10 ml using an Amicon YM 30 centricon (Amicon Bioseparations Centriprep; Millipore) in a centrifuge (Jouan model MR 22i) at 1,500 × g for 30 min. Two ml of nickel-nitrilotriacetic acid metal affinity resin (Qiagen), which has a 5-10 mg protein-binding capacity/ml of resin, was added to the concentrated medium. After gentle shaking at 4°C overnight, the resin was packed in a column. Following successive rinsing of the packed resin with the supernatant twice, 10 ml of 10 mM imidazole (pH 8.0), 10 ml of 20 mM imidazole (pH 8.0), and 10 ml of 20 mM imidazole (pH 6.2), the protein was eluted with 300 mM imidazole buffer (pH 8.0) (26) and collected at 1.5 ml/fraction.
SCHEME 1. Disulfide Bonds of the Nine Conserved Cysteines in C2GnT-M.
Polyacrylamide Gel Electrophoresis and Western Blot Analysis-The purity of the recombinant bC2GnT-M purified by a nickel-nitrilotriacetic acid column was analyzed by SDS-10% PAGE under reduced conditions followed by Coomassie Blue stain or Western blotting using anti-Myc antibody (1:500) (Invitrogen). The anti-Myc antibody-treated membrane was further treated with horseradish peroxidase-conjugated secondary antibody (1:1000) (Invitrogen) and developed with ECL (Amersham Biosciences).
Alkylation and Reduction-Alkylation of Recombinant bC2GnT-M-To prepare bC2GnT-M with free cysteines alkylated, 20-25 μg of recombinant protein in 1.5 ml of elution buffer in a silanized tube was treated with 10 mM iodoacetamide in the dark at 37°C for 30 min (27). To prepare bC2GnT-M with all cysteines alkylated, the same amount of the recombinant bC2GnT-M was treated first with 15 mM dithiothreitol under argon gas at 37°C for 2 h and then with 10 mM iodoacetamide for 30 min.
Trypsin Digestion of bC2GnT-M-Both alkylated and reduced-alkylated bC2GnT-M (90 μg in 1.5 ml of elution buffer adjusted to pH 8.0 with 1 M Tris-HCl buffer in silanized polypropylene tubes) were digested for 16 h with 50 μg of diphenylcarbamyl chloride-treated trypsin in 50-150 mM Tris-HCl (pH 8) containing 5 mM CaCl2 (25,27). Digestion continued for 4 h after the addition of another 50 μg of trypsin (67 μg/ml final concentration), with the pH maintained at 8.0 with 1 M Tris-HCl, pH 8.0. Samples were then centrifuged (1,500 × g), and the supernatant was kept at 4°C prior to HPLC separation of the tryptic peptides.
Tryptic Mapping Strategy-The tryptic mapping strategy consisted of three steps (25)(26)(27). First, the [ 35 S]cysteine-containing bC2GnT-M was fully reduced and alkylated and then digested with trypsin. The tryptic peptides were separated by C18 RP-HPLC. Those HPLC fractions containing cysteine were identified by virtue of their radiolabel and pooled, and the attendant peptides were identified by Edman degradation. In the second step, the [ 35 S]cysteine-labeled bC2GnT-M was digested with trypsin without prior reduction and alkylation. The tryptic digests were subjected to chromatography on C8 RP-HPLC. In the third step, each [ 35 S]cysteine-containing peak from C8 chromatographic profile was reduced, alkylated, and rechromatographed by C18 RP-HPLC. The cysteines involved in cystine pairing were identified by comparing the profile with that obtained in step one. In this study, [ 35 S]methionine labeling was also employed to identify the peptides that contain both cysteine and methionine.
RP-HPLC Separation of Tryptic Peptides from Reduced-Alkylated bC2GnT-M Labeled with [35S]Cysteine or [35S]Methionine-A C18 column (0.46 × 25 cm) (Vydac; 300 Å, 5 μm) was used first for establishing the profile of fully reduced and alkylated tryptic peptides and then for identification of the cysteines involved in cystine pairing. Tryptic peptides prepared from reduced-alkylated bC2GnT-M labeled with [35S]cysteine or [35S]methionine were injected onto a C18 column equilibrated with 0.1% trifluoroacetic acid (buffer A) at 42°C. The column was eluted isocratically at 1 ml/min for 3 min with buffer A followed by an acetonitrile gradient at 0.32%/min for 100 min and 4.2%/min for 15 min, and then re-equilibrated with buffer A. One-minute fractions were collected in silanized polypropylene tubes containing 4.5 μg of myoglobin/tube as carrier (25,27). The fractions were monitored by liquid scintillation counting. The 35S-containing fractions collected and analyzed as described above were concentrated by Speed-Vac, reconstituted in 1.5 ml of 150 mM Tris-HCl (pH 8.4) containing 20 mM dithiothreitol under argon gas, and incubated at 37°C for 3-5 h. Then free thiols were alkylated with 15 mM iodoacetamide under subdued light (26,27). The carboxymethylated [35S]cysteine (or methionine)-containing peptides were then separated by C18 RP-HPLC as described above.
Amino Acid Sequencing-[35S]Cysteine- or [35S]methionine-labeled tryptic peptides were concentrated to less than 50 μl using a Speed-Vac concentrator (Savant). Each sample was loaded onto a Polybrene-coated, trifluoroacetic acid-treated cartridge filter (Applied Biosystems) and sequenced using a pulse liquid protein sequencer (Applied Biosystems model 477A). After each cycle of Edman degradation, the released amino acid derivatives were collected and analyzed by liquid scintillation counting to determine the position(s) of radiolabeled cysteine or methionine in each peptide (28). Amino acid sequencing was performed in the protein sequencing facility at the University of Nebraska Medical Center (Omaha, NE).
Bio-Gel P-4 Column Chromatography-A Bio-Gel P-4 (200-400 mesh) column (1 × 50 cm) was employed to separate the two cysteine-containing peptides, which co-eluted at peak a (see Fig. 2) of the C18 RP-HPLC chromatogram of the tryptic peptides prepared from reduced-alkylated bC2GnT-M. The column was eluted with water at 1 ml/min and collected at 1 ml/fraction. Fractions were analyzed by liquid scintillation counting to localize the [35S]cysteine-containing peptides.
Fold Recognition and Molecular Modeling of bC2GnT-M-Due to the lack of appropriate templates (with sequence similarity greater than 30%) for homology modeling, the "inverse folding" approach (29) was used to determine a set of known three-dimensional protein structures which were compatible with our sequence of interest. The Matchmaker module of the SYBYL 6.8 software package (TRIPOS, Inc., St. Louis, MO) was utilized to find crystal structures from the RCSB Protein Data Bank (available on the World Wide Web at www.pdb.org) with three-dimensional folds that match structural properties of the sequence of bC2GnT-M. Matchmaker examines propensities of amino acid residues from the protein sequence to be in a certain environment (solvent-exposed or buried), finds the optimal alignment (frozen or thawed mode) of the sequence to the "structural fingerprint" describing the three-dimensional environment at each residue position, and estimates pseudoenergy scores for different protein folds. Three sets of gap penalties corresponding to Standard, Restrictive, or Permissive parameters were used to scan the structural data base. Matchmaker and SYBYL graphical interfaces were used to analyze results. Finally, the Biopolymer module of SYBYL was used to build and analyze a structural model of the bC2GnT-M molecule.
RESULTS
Purification and Characterization of the Recombinant bC2GnT-M Secreted into the Medium-We found that the recombinant enzyme secreted into the medium was fully active. However, the relative activity of the recombinant bC2GnT-M toward the three acceptors, core 1, core 3, and blood group i oligosaccharides, was changed from 0.7/1.0/0.4 in the wild-type bC2GnT-M (17) to 6.0/1.0/1.0 in the recombinant bC2GnT-M. Treatment with dithiothreitol (2.5 mM) and β-mercaptoethanol (10 mM) did not affect the enzyme activity. The yield of the recombinant C2GnT-M isolated from the serum-free conditioned medium by nickel-nitrilotriacetic acid affinity column was about 1.5 μg/ml. Coomassie Blue staining of the SDS-PAGE gel of the purified recombinant showed a single band of about 58 kDa (Fig. 1), which was larger than the calculated molecular mass (52,479 Da) of the recombinant protein.
Western blot analysis using an anti-Myc antibody also showed one band. Treatment of the purified enzyme with N-glycanase with or without sialidase A plus O-glycanase decreased the size of the recombinant protein by about 4-5 kDa, suggesting that the recombinant protein was N-glycosylated at one or both of the two potential N-glycosylation sites, N-72 and N-108 (17). The lack of apparent change in size after treatment with sialidase A plus O-glycanase suggests either the absence or presence of a small amount of O-glycan T antigen with or without sialic acid in the recombinant bC2GnT-M.
RP-HPLC Tryptic Map of Recombinant bC2GnT-M Labeled with [35S]Cysteine or [35S]Methionine-The recombinant
bC2GnT-M contains 10 cysteines, of which nine are conserved among all members of the β6GnT family (9). The amino acid sequences of these 10 cysteine-containing tryptic peptides in the recombinant bC2GnT-M are listed in Table I (column 3). Analysis of these peptides by localization of the position of the [35S]cysteine residue in each peptide by microsequencing followed by liquid scintillation counting could identify only nine (peaks a-i) of the 10 expected cysteine-containing peptides (Fig. 2). The radiolabeled peaks that could not be identified may represent incompletely cleaved tryptic peptides. The peptide that contained cysteine at the 17th position was not detected, probably due to inhibition of the Edman degradation reaction by the proline on the amino side of the cysteine. This peptide was also identified as peak I from the C8 RP-HPLC tryptic peptide map of alkylated bC2GnT-M prepared under nonreduced conditions (Fig. 3A). This peak I peptide had the same retention time as that of peak a shown in Fig. 2 after rechromatography on a C18 column (Fig. 3B). The result suggested that peak a contained two cysteine-containing peptides, one of them having a cysteine at the second position and the other having a cysteine at the 17th position. This prediction was verified by column chromatography of the peak a material in Fig. 2 on Bio-Gel P4, which separated the peak a material into peaks a1 and a2 (Fig. 2, inset). The a2 peak, the smaller of the two, was confirmed by microsequencing to be the peptide that had a cysteine at the second position.
To positively verify the identity of the peak a1 peptide, which contained a methionine at position 10, a C18 RP-HPLC tryptic peptide map was generated from the reduced and alkylated bC2GnT-M metabolically labeled with [35S]methionine (Fig. 4A). By amino acid sequencing and then liquid scintillation counting, the peak a1 material (Fig. 4A) was shown to be the tryptic peptide that contains methionine at position 10, indicating that peak a in Fig. 2 was a mixture of two peptides, one having a cysteine at the second position (a2) and one having a cysteine at the 17th position (a1).
FIG. 2. C18 RP-HPLC separation of 35S-labeled tryptic peptides of reduced and alkylated bC2GnT-M metabolically labeled with [35S]cysteine. Nickel-nitrilotriacetic acid affinity column-purified bC2GnT-M labeled with [35S]cysteine was reduced, alkylated, trypsinized, and then separated on a Vydac C18 column with the acetonitrile gradient described under "Experimental Procedures." The 35S-labeled peptide peaks are designated a-i according to their retention times. The identity of each peptide was determined by the position of the 35S label recovered after each Edman degradation cycle and measured by liquid scintillation counting. Peak a contains two 35S-labeled peptides, which were separated into peaks a1 and a2 by Bio-Gel P4 column chromatography and analyzed by liquid scintillation counting; the chromatographic profile is shown in the inset.
FIG. 3. The peaks from the C8 tryptic peptide map (A) were further reduced, alkylated, trypsinized, and run on C18 (B-G). HPLC fractions were collected and monitored by liquid scintillation counting. Peak II corresponds to peak c in Fig. 2; peak III generated peaks e and f, peak IV produced peaks a1 and g, peak V yielded peaks d and h, and peak VI formed peaks b and i.
Identification of Free Cysteine and Disulfide-bonded Cystine-Each 35S-labeled peak was subjected to C18 RP-HPLC after reduction and alkylation to identify the cysteines involved in the formation of each cystine pair. As shown above, the cysteine in peptide a2 was not involved in disulfide bond formation, because rechromatography of the peak I material from the C8 tryptic peptide map (Fig. 3A) generated only a single peak (a2) on the C18 column (Fig. 3B). Also, the cysteine in peptide c (Fig. 2) was not involved in disulfide bond formation, because rechromatography on the C18 column of the peak II material eluted from the C8 column of alkylated bC2GnT-M (Fig. 3A) yielded only a single peak (c) (Fig. 3C). The identity of the peak c material, Cys55-Arg56, was further confirmed by C18 RP-HPLC chromatography of an alkylated Cys-Arg standard (data not shown). The four disulfide bonds were identified by rechromatography of peaks III-VI in Fig. 3A on the C18 column after reduction, alkylation, and trypsinization. Peak III yielded peaks e and f (Fig. 3D), indicating that these two cysteine-containing peptides were S-S-bound. Similarly, peak IV produced peaks a1 and g (Fig. 3E), and peak V generated peaks d and h (Fig. 3F), whereas peak VI formed peaks b and i (Fig. 3G). The disulfide bridge between peptides e and f, which contained methionine (Table I and Fig. 4A), was further confirmed by rechromatography on the C18 column (Fig. 4C) of the peak III material from the C8 column fractions (Fig. 4B). These fractions were produced from trypsinization of alkylated bC2GnT-M labeled with [35S]methionine. Cysteine 55, which is not a conserved cysteine, was not involved in disulfide bridge formation. Among the nine cysteines conserved in every member of the mucin β6GnT family, the second cysteine (Cys113) is the only one not involved in disulfide bridge formation.
The four disulfide bridges were formed between the first (Cys73) and sixth (Cys230), the third (Cys164) and seventh (Cys384), the fourth (Cys185) and fifth (Cys212), and the eighth (Cys393) and ninth (Cys425) cysteine residues. The radiolabeled peaks in Fig. 4 may be incompletely cleaved tryptic peptides.
Molecular Modeling-Initially, we attempted to generate a three-dimensional structure of bC2GnT-M based on the templates of the GT-A and GT-B folds (22,23). However, neither protein fold could produce a structure that would allow the placing of cysteines in close proximity amenable to the formation of the disulfide bonds. We proceeded to search the Protein Data Bank for crystal structures that could accommodate the threading model of bC2GnT-M and the four disulfide bonds determined in our study. After carrying out Matchmaker runs (with Standard, Restrictive, or Permissive parameters and frozen or thawed alignments), we identified 10 crystal structures that showed the best pseudoenergy estimates. Three crystal structures, including 1GOX (glycolate oxidase) (30), 1ELS (enolase) (31), and 2CST.A (aspartate aminotransferase) (20), were found to be among the best-scored proteins for each run. The spatial arrangement of the eight cysteine residues that were predicted to form the four disulfide bridges was taken as a criterion for further selection of the structural template for molecular modeling. By this criterion, the only protein structure with a three-dimensional fold that demonstrated spatial proximity of all cysteines involved in the formation of disulfide bridges was a crystal structure of the chicken cytosolic aspartate aminotransferase (2CST.A). Among the highly scored proteins, another crystal structure of the chicken mitochondrial mutant (K258H) aspartate aminotransferase, 1AKA (21), had an even better positioning of the corresponding cysteines. Therefore, crystal structures of the aspartate aminotransferases 2CST.A and 1AKA were used as templates for molecular modeling of bC2GnT-M. These templates allowed the cysteines of bC2GnT-M to locate at positions amenable to the formation of the four cystine pairs (Fig. 5B). The molecular model of the fragment (aa 48-440) of bC2GnT-M, which was constructed based on the aspartate aminotransferase template and the formation of the four disulfide bridges, is shown in Fig. 6.
DISCUSSION
We have determined the disulfide bonds among the nine bC2GnT-M cysteines conserved in the mucin β6GnT family members (9). We employed the strategy of identifying cysteine-containing peptides by locating the [35S]cysteine in the tryptic peptides of [35S]cysteine-labeled bC2GnT-M by amino acid sequencing. We found that the nine conserved cysteines in bC2GnT-M had a different pattern of disulfide bond distribution compared with those of mouse C2GnT-L recently reported (12). In that study, HPLC followed by mass spectrometry was employed to determine the distribution of disulfide bonds. In mouse C2GnT-L, the sixth conserved cysteine was free, whereas it was the second conserved cysteine in bC2GnT-M that was a free thiol. The conserved cysteines involved in the formation of disulfide bonds were the first and ninth, the second and fourth, the third and fifth, and the seventh and eighth cysteine residues for mouse C2GnT-L, and the first and sixth, the third and seventh, the fourth and fifth, and the eighth and ninth cysteine residues for bC2GnT-M. Therefore, sharing of nine conserved cysteines does not necessarily lead to the same disulfide bridges. A similar observation has also been reported for α(1,3/1,4)-fucosyltransferases III and VII (32,33), in which different disulfide bonds were formed from the four cysteines conserved in these two fucosyltransferases.
The results suggest that other factors, such as secondary structures and protein folds, play a crucial role in directing the formation of disulfide bonds.
The formation of different disulfide bonds in mouse C2GnT-L and bC2GnT-M probably determines the difference in substrate specificity. Mouse C2GnT-L acts only on the core 1 acceptor, whereas bC2GnT-M can act on core 1, core 3, and blood group i acceptors. It will be of interest to know whether the pattern of disulfide bridge distribution of C2GnT-3, which exhibits the same substrate specificity as C2GnT-L, is the same as that of C2GnT-L or different from those of C2GnT-L and C2GnT-M. As we previously pointed out (9), different β6GnT isozymes share 39-52% amino acid sequence identity, whereas the same β6GnT isozyme from different animal species displays 81-86% sequence identity. Characterization of the disulfide bond distribution of C2GnT-L from species other than mouse, and that of C2GnT-3 from different species, should provide important clues for the identification of factors that direct the formation of disulfide bonds among the nine cysteines conserved in the mucin β6GnT family members (9,12).
As recently reviewed by Coutinho et al. (22), over 7,200 glycosyltransferase (GT)-related sequences are in the data banks. There has been no consensus structure among these glycosyltransferases (22,23). To date, each glycosyltransferase is defined by the nucleotide sugar donor, the acceptor, and the product. The recent explosion of new glycosyltransferase sequence data in the postgenomic era has outpaced the speed of classical biochemical characterization of new glycosyltransferases. As a result, the identity of most of the putative glycosyltransferases remains unestablished. In an attempt to group these glycosyltransferases based on sequence similarity, 65 GT families have been identified (22). However, there is an inherent limitation in this approach because the specificity of glycosyltransferases is determined by three-dimensional structure. To date, there are only 11 glycosyltransferases for which crystal structures have been solved (22)(23)(24), indicating that only limited inference may be drawn from the database. Despite this limitation, the structures of these glycosyltransferases have been used to propose two GT superfamilies, GT-A and GT-B (22,23). The GT-A family, which includes eight of these glycosyltransferases, contains two tightly associated and abutting β/α/β domains that form a continuous central sheet of at least eight β-strands. The GT-A fold has also been described as a single-domain fold. On the other hand, the GT-B fold contains two loosely associated Rossmann-like β/α/β domains facing each other, forming an in-between space to accommodate the sugar donor and acceptor (34). These two protein folds have been the model structures to which each new GT structure has been compared. It was observed that the nucleotide sugar binding site was located at the N-terminal domain of the GT-A enzymes but at the C-terminal domain of the GT-B enzymes, whereas the acceptor binding site was on the other domain. It should be noted that the DXD motif, which was considered a signature sequence for GT-A enzymes (35), was found at a similar frequency in GT-A (71%) and GT-B (69%) enzymes (22). Therefore, the DXD motif itself could not be used as a reliable marker for identifying GT-A enzymes. Other factors that can be used as more reliable predictors of glycosyltransferase structure need to be developed.
Since secondary structures alone cannot predict the three-dimensional structure, which is an important determinant of the conformation of a glycosyltransferase, the disulfide bridges coupled with the secondary structures may provide the best basis for predicting the structure of a glycosyltransferase in the absence of a crystal structure. Using this structure prediction strategy, Yen et al. (12) found that mouse C2GnT-L fit the GT-B fold, which could accommodate all the cysteines at the spatial locations enabling the formation of the cystine pairs identified. However, neither the GT-A nor the GT-B fold could accommodate all of the conserved bC2GnT-M cysteines participating in disulfide bond formation at locations that make the formation of disulfide bonds feasible. Instead, the three-dimensional structure of the chicken aspartate aminotransferase (K258H) mutant (21) provides the best fit for bC2GnT-M. This is one example showing that a protein fold other than the GT-A and GT-B folds derived from the crystal structures of 11 glycosyltransferases may exist for other glycosyltransferases. However, the significance of the high degree of proposed three-dimensional structural similarity between the chicken aspartate aminotransferase and bC2GnT-M is not clear at the present time. These two enzymes catalyze reactions by different mechanisms (e.g., a ping-pong mechanism for aspartate aminotransferase (20) and a sequential mechanism for bC2GnT-M (13)). An apparent modification of the aspartate aminotransferase was detected during catalysis, but none was detected for bC2GnT-M. Despite these differences, the x-ray crystallographic structure of the aspartate aminotransferase could help guide further structural characterization of bC2GnT-M. These results plus those of Yen et al. (12) suggest that two C2GnT isoenzymes with nine conserved cysteines may have distinct three-dimensional structures. The difference in three-dimensional structures between these two isozymes may provide an explanation for the difference in acceptor specificity. Confirmation of these proposed structures awaits the determination of x-ray crystal structures of these two β6GnT isozymes.
Prediction and analysis of thermal aging behavior of magnetorheological grease
Magnetorheological grease (MRG) is a new type of field-responsive intelligent material with controllable performance and excellent settlement stability, which makes it a feasible replacement for traditional materials. Heating of magnetorheological (MR) devices is common during operation, and the influence of continuous thermal effects (thermal aging) on the performance of MRG needs to be studied. In this article, the effect of thermal aging on the rheological properties of MRG is investigated. The samples were subjected to accelerated heat treatment, and the shear stress was tested under thermo-magnetic coupling conditions. To reduce the time and cost of studying MR materials, an improved and reliable artificial neural network (ANN) prediction model was developed to characterize and predict the relationship among temperature, aging time, magnetic field strength, and the thermo-rheological properties of MRG. The test results for MRG before and after thermal aging show that thermal aging causes irreversible structural damage and that the performance decreases with increasing aging time. When the ANN predictions were compared with the test results, the correlation coefficient R reached or exceeded 0.95. The results show that the model has excellent prediction accuracy and can provide a theoretical reference for the thermal aging behavior of MRG.
Introduction
MRG is a field-responsive intelligent material. It has gained great attention from researchers because of its salient controllable properties and potential applications in fields such as the automotive industry, aerospace, and the military sector [1]. MRG uses commercial grease as the carrier liquid, which is a structural colloidal dispersion system with special rheological properties during flow [2]. The unique soap fiber structure of lubricating grease can effectively solve the settlement problem of the magnetic particles [3]. MRG can therefore be used as a working medium for some special-purpose MR devices owing to its unique structural system.
In fact, researchers have conducted extensive research on MR devices. Most MR devices dissipate energy during operation, so a temperature rise is inevitable in actual service. MR material is a composite system, and temperature is an important factor affecting the properties of polymer composites [4]. A temperature increase has a large impact on the rheological properties of the MR medium. Rabbani et al [5] confirmed that the shear stress of MR materials decreases significantly with increasing temperature. Studies of magnetorheological fluid (MRF) by Chen et al [6] found that magnetic particle chain formation is strongly influenced by temperature. The rheological properties of MRG are also influenced by temperature. Sahin et al [7] showed that temperature has a significant effect on the yield stress and apparent viscosity of MRG. Wang et al [8] observed that temperature affects the apparent viscosity and that this effect decreases with increasing magnetic field strength. Pan et al [9] found that MRG and grease have similar thermo-rheological characteristics: the yield stress and consistency coefficient decrease with increasing temperature. Yang et al [10] found that the interaction between the grease soap fiber structure and the magnetic chains affects the shear stability of MRG. We expect MRG to maintain excellent stability as an MR medium during long-term service; especially in engineering applications, long-term stable and reliable service is an important indicator for MR devices [11]. Less ideally, Zheng et al [12] found that long-term storage at high temperature changes the viscosity of MRF, and this phenomenon constitutes one of the main failure modes of MRF. After long-term heat treatment, degradation and structural damage of the base carrier liquid of MRG occur [13], which affects the whole MRG system. Taken together, previous results show that temperature is a main factor affecting the performance of MRG, but the effect of long-term heat treatment on MRG performance needs further study. As a composite system, MRG inevitably faces structural deterioration and even thermal aging under prolonged thermal exposure. Aziz et al [14,15] experimentally evaluated the effects of thermal aging on polymer composites under long-term elevated temperature exposure: no significant change in the loss factor occurred at a low CIP concentration, whilst the loss factor increased at a higher CIP concentration [14], and over long-term thermal aging the rubber becomes hard, thus influencing the properties of the materials [15]. However, current research on MRG has mainly focused on the influence of temperature and magnetic field, while the influence of continuous thermal effects (thermal aging) has been ignored. Therefore, it is necessary to investigate the performance changes of MRG before and after thermal aging. This is valuable for assessing the stability and reliability of MRG in long-term service.
Experimental testing is the main research means for MRG, but it takes plenty of time, manpower, and financial resources. An ANN is a computational model with self-adaptability, self-organization, and self-learning properties [16]; its output depends on the connection pattern, the weight values, and the activation function. In applications, Erdil et al [17] used an ANN model to predict meteorological variables and achieved reliable predictions through training and testing of the model. In the field of the grease carrier liquid of MRG considered in this study, Lijesh et al [18] proposed a method to evaluate the hydrophobicity of grease that saves time and materials, quantifying the hydrophobicity by the contact angle of water droplets on the grease surface. Osara et al [19] derived a basic formula for the instantaneous nonlinear response of grease to load from the laws of thermodynamics; the resulting grease model agrees almost completely with nonlinear data measured in uncontrolled experiments. Son et al [20] evaluated the life of ball bearing grease as a function of temperature, speed, and load, and developed and theoretically verified a ball bearing grease life testing machine. Rezasoltani et al [21] applied the theory of irreversible thermodynamics to the mechanical degradation of grease under shear; their verification showed a linear relationship between degradation rate and entropy production, which can be used to estimate the life of mechanically degraded grease. In these studies, the grease models and grease life estimates based on thermodynamic theory have a certain degree of confidence. Zheng et al [12] used an improved CGP algorithm to study the effect of high temperature on the rheological properties of grease-based MRF and predicted its lifetime. Bahiuddin et al [22] proposed a constitutive model of MRG to predict the variation of shear stress and dynamic yield stress with temperature. In follow-up research, they proposed a machine-learning model to characterize and predict the relationship between the rheological properties of MRG and shear rate, magnetic field, and its constituent elements [23]. Saharuddin et al [24] proposed a new constitutive model for the viscoelastic behavior of MRE, using a machine learning method to predict the stiffness and damping characteristics of MRE as they vary with the magnetic field. With regard to the effect of thermal aging time on the rheological properties of MRG, however, there are few published studies, and existing nonlinear regression models with multi-feature input and single-feature output do not fully cover thermal aging time and magnetic field strength.
Given the research summarized above, introducing a suitable prediction model for the rheological properties of MRG is valuable; such a model can save considerable time and cost in MRG-related research. The purpose of this paper is to propose an ANN prediction model for the shear stress of thermally aged MRG based on rotational rheometer test results. The results can serve as a reference for predicting and evaluating MRG performance after thermal aging.
Materials Preparation
The MRG in this study was prepared in the laboratory, using grease as the carrier and CI particles as the magnetic particles. The preparation followed the method proposed by Mohamad et al [25]. The required amount of grease was weighed and placed in a beaker, then stirred with a mechanical stirrer at 80°C (ten minutes, 500 rpm). Subsequently, CI particles at a weight fraction of 30% were added to the grease and stirred with the same mechanical stirrer (800 rpm) until the grease and CI particles were fully and evenly mixed. The MRG for the experiments was obtained after cooling.
A lithium grease was selected as the matrix to suspend the CI particles; it was manufactured by Lubricant Tianjin Company, Sinopec Lubricating Oil Co., Ltd (China). The main components and technical parameters of the lithium grease are listed in table 1. The weight fraction of CI particles is 30%. The CI particles (type MRF15) were purchased from Jiangsu Tianyi Ultrafine Metal Powder Co., Ltd (China); their parameters are given in table 2.
Experimental method
Static thermal aging
For static thermal aging, the MRG was smeared on the inner wall of a 150 ml beaker (thickness 1–3 mm), placed in a heating and drying oven (DHG-9623A, Shanghai Jing Hong Laboratory Instrument Co., Ltd, China) and heated at 120°C for 4 h, 8 h and 24 h. The samples were numbered MRG-0h (unaged), MRG-4h, MRG-8h and MRG-24h.
Rheological property
Rheological measurements of MRG were performed on a controlled-shear and controlled-strain rheometer (Physica MCR 302, Anton Paar, Germany) using a plate–plate geometry (PP20/MRD, 20 mm diameter, 1 mm gap), as shown in figure 1. A magneto-controllable accessory (MRD 180) and a temperature control unit (JULABO F25) were used to control the magnetic field strength and the temperature, respectively, during the rheological experiments. The magnetic field of the MRD 180 accessory was generated by controlling the coil current of the experimental system; currents of 0 A, 1 A, 2 A, 4 A and 5 A correspond to 0 mT, 220 mT, 440 mT, 880 mT and 1100 mT, respectively. The magnetic sweep tests at different temperatures (25°C, 45°C, 65°C, 85°C) were carried out over a current range of 0 A to 5 A at a constant shear rate of 10 s−1 for 300 s. The shear stress of MRG before and after thermal aging was thus tested at different temperatures as a function of magnetic field strength.
Experimental results
The relationship between shear stress and magnetic field strength for MRG before and after thermal aging at different temperatures is shown in figure 2. The shear stress increases with increasing magnetic field strength and levels off once the magnetic field approaches saturation. The operating temperature during service is a key factor affecting MRG performance, and the shear stress decreases significantly as the test temperature rises. At 25°C, the shear stress of MRG is significantly higher than at the other temperatures, which may be due to the high entanglement of the grease soap fibers at low temperature hindering the movement of the magnetic particles. In addition, the magnetic field sweep curve of MRG-24h at 25°C differs from those of the other samples at the same temperature and resembles the curves at 85°C, indicating that 24 h of thermal aging caused irreversible damage to the MRG structure. After the magnetic field strength exceeds 440 mT, the sweep curves flatten, indicating that the magnetic field is close to saturation: the shear stress reaches a maximum value and remains essentially constant. This is mainly because the magnetic particles in MRG form chains along the field direction under the action of the magnetic field, and the number of chains reaches its maximum once the field is saturated. Under saturated magnetization, MRG-0h showed no obvious shear yield, whereas shear yield began to appear in the aged samples owing to the structural damage caused by thermal aging; MRG-24h yielded earliest and most markedly.
Prediction model of thermal aging properties of MRG
To obtain the relationship between temperature, aging time, magnetic field strength and the shear stress of MRG, an ANN prediction model was established. The model can be regarded approximately as a nonlinear continuous function that expresses the variation of the shear stress of MRG with magnetic field strength at different experimental temperatures and aging times, as shown in equation (1):
F = f(t, T, m) (1)
where F is the shear stress, t is the aging time, T is the experimental temperature, and m is the magnetic field strength.
ANN has a strong ability to approximate nonlinear functions, and good accuracy can be obtained after training on data [26]. Therefore, an ANN is used to approximate this function, so as to provide reference data for future MRG research and engineering applications. To explain the working principle and implementation of the ANN prediction model, its implementation flow chart is given in figure 3. First, the raw data are obtained from the rheological experiments on thermally aged MRG. The architecture of the ANN is designed according to the characteristics of the experimental data, and appropriate activation and loss functions are selected [27]. Once the overall network architecture has been constructed, the neural network computes the predicted shear stress from the input data and compares it with the experimental data.
Network architecture
In the network architecture, the number of neurons in the input layer is set to 3, because the input features are temperature, aging time and magnetic field strength. The output feature is the predicted shear stress of MRG, so the number of neurons in the output layer is set to 1. According to the Kolmogorov theorem [28], equation (2) gives a reference value p for the number of nodes in the hidden layer based on the numbers of input and output neurons. In addition, in order to improve the generalization ability of the network and allow it to adapt to complex objects, additional hidden nodes are included beyond this reference value. The resulting architecture of the ANN, shown in figure 4, is 3×7×1.
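As an illustration of this architecture, the following is a minimal sketch of the 3×7×1 network. The paper does not state which software framework was used, so PyTorch is assumed here, and the choice of a tanh activation is likewise an assumption rather than a detail reported in the text.

```python
import torch
import torch.nn as nn

class ShearStressANN(nn.Module):
    """3-7-1 feed-forward network: inputs are aging time, temperature and field strength."""
    def __init__(self, n_in=3, n_hidden=7, n_out=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, n_hidden),   # input layer -> hidden layer (7 nodes)
            nn.Tanh(),                   # activation function (assumed, not stated in the paper)
            nn.Linear(n_hidden, n_out),  # hidden layer -> predicted shear stress
        )

    def forward(self, x):
        # x has shape (batch, 3): [aging time t, temperature T, field strength m]
        return self.net(x)
```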
Training ANN
Part of the data from the magnetic field sweep experiments is fed into the ANN: 70% of the data are randomly selected for training and 30% for validation and testing. As the data are passed through the ANN, the model produces predicted values, compares them with the experimental values, and computes the mean squared error with the loss function shown in equation (3):
MSE = (1/N) Σ (F_pred,i − F_exp,i)² (3)
where F_pred,i is the predicted shear stress, F_exp,i is the measured shear stress and N is the number of samples. If the loss exceeds the given threshold, the error is back-propagated to modify the weights of the network. Training stops once the loss falls below the threshold or the given number of iterations is reached, and the curve of loss versus training epoch is obtained, as shown in figure 5. The figure shows that the mean squared error of the ANN prediction model levels off after about 20 training epochs.
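A hedged sketch of this training procedure is given below: a random 70/30 split, a mean-squared-error loss, and weight updates until the loss falls below a threshold or a maximum number of epochs is reached. The optimizer and its hyper-parameters are assumptions, not values reported in the paper.

```python
import torch

def train_ann(model, X, y, epochs=200, tol=1e-4, lr=1e-2):
    """X: (N, 3) inputs [t, T, m]; y: (N, 1) measured shear stress."""
    n = X.shape[0]
    perm = torch.randperm(n)
    n_train = int(0.7 * n)
    train_idx, test_idx = perm[:n_train], perm[n_train:]    # 70% training, 30% verification/testing

    loss_fn = torch.nn.MSELoss()                             # mean squared error, as in equation (3)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # optimizer choice is an assumption

    for epoch in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(model(X[train_idx]), y[train_idx])    # compare prediction with experiment
        loss.backward()                                      # use the error to adjust the weights
        optimizer.step()
        if loss.item() < tol:                                # stop once the loss is small enough
            break

    with torch.no_grad():
        test_loss = loss_fn(model(X[test_idx]), y[test_idx]).item()
    return test_loss
```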
Prediction results
The ANN prediction model was trained using the experimental temperature, aging time and magnetic field strength as input features and the experimental shear stress as the single output feature. The trained model takes the shear stress as a single prediction that characterizes the performance of thermally aged MRG under different temperatures and magnetic field strengths. Experimental data not used in training serve as the verification set: the corresponding experimental conditions (temperature, aging time and magnetic field strength) are fed into the trained ANN as input features, and the predicted shear stress is output. In general, the shear stress of MRG increases with the magnetic field and reaches a constant value once the field strength exceeds a certain value (440 mT). This behavior is due mainly to the MR effect of MRG: the stronger the field, the greater the number of magnetic chains, until saturation is eventually reached. The predicted values show good agreement with the experimental values, especially below saturation magnetization.
Comparing the predicted values with the experimental values, table 3 and figure 6 show that the ANN model proposed in this work can predict the shear stress with a correlation coefficient R of up to 0.99. The predictions follow the theoretical trend of the magnetic field sweep. The experimental results decrease slowly under saturation magnetization and then recover to the original level; in particular, the curve at 24h–65°C displays a trend different from the other groups. The predictions for MRG at field strengths from 0 to 1100 mT are excellent for the 0 h, 4 h and 8 h samples, but there is a large discrepancy between predicted and experimental values at 24 h, which is discussed below. The structural recovery properties of MR materials depend strongly on magnetic field strength and temperature [29]. The test temperature in figure 6 is 65°C. Once the magnetic field strength reaches saturation, the error between the experimental and predicted shear stress of MRG-24h grows as the field strength increases further. Theoretically, as the field increases beyond saturation the shear stress should remain constant, and the predictions follow exactly this trend. MRG uses grease as the carrier, which shows excellent settling stability owing to its soap fiber structure; the soap fibers create the links between the magnetic chains. As the thermal aging time increases, the entanglement of the soap fibers changes. Shen et al [30] found that the stability of grease structures deteriorates with prolonged aging time and that the fiber network rebuilds slowly after shearing. For MRG, the links between the magnetic chains weaken as the shear stability of the grease structure changes. The shear-thinning trend of the 24h–65°C curve is therefore attributed mainly to the change in the fibrous entanglement of the grease. As the thermal aging time increases, the degree of structural damage to the MRG increases; the aged samples show fracture and shear thinning as the shear time increases, so the experimental results fall below the predicted values.
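For completeness, a small verification helper is sketched below, assuming that the reported R value is the Pearson correlation coefficient between predicted and measured shear stress; the variable names are illustrative only.

```python
import numpy as np

def pearson_r(predicted, measured):
    """Pearson correlation coefficient between predicted and measured shear stress."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return np.corrcoef(predicted, measured)[0, 1]

# Example (hypothetical arrays): r = pearson_r(model_predictions, experimental_shear_stress)
```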
In addition, Wang et al [31] observed that, under the action of a magnetic field, the nonlinear behavior of MRG is related to the shear frequency. In follow-up work, test variables such as shear time and shear rate should therefore be considered as input features and the prediction model further optimized.
Conclusion
After continuous heat treatment of MRG at 120°C for 4 h, 8 h and 24 h, the samples were sheared at a constant rate with the plate test head of a rotational rheometer under different temperatures and magnetic field strengths, and the change in shear stress was observed. An ANN prediction model was established to predict the rheological properties of MRG after thermal aging. The following conclusions can be drawn: 1) The effect of thermal aging at 120°C on MRG is manifested mainly in a reduction in the entanglement of the soap fiber structure of the carrier grease, a reduction in the resistance to shear flow, and a weakening of the structural recovery performance. At relatively low temperature, the entanglement of the carrier grease soap fibers is high, which hinders the movement of the magnetic particles. Under saturated magnetization there is no obvious shear yield in MRG-0h, whereas shear yield appears in the aged samples owing to the structural damage caused by thermal aging, occurring earliest and most markedly in MRG-24h. Thermal aging causes irreversible structural damage to MRG, and the effect of this damage is further aggravated as the shear time increases.
2) An ANN prediction model is introduced to predict the shear stress of MRG during operation. The model is trained with temperature, aging time and magnetic field strength as input features. Comparison between the prediction results and the experimental results at the saturation field (440 mT) shows that the model has excellent accuracy in predicting the shear stress of MRG, and the ANN prediction model is therefore feasible for practical applications.
3) The variation of MRG shear stress with heat-treatment time is the main subject of this paper. The output of the ANN prediction model is consistent with the experimental data, indicating that the model can be used to predict the rheological behavior of MRG and to support the development of related MR devices. However, actual working conditions are more complex and involve frequent shearing, loss of the MR medium, and other factors. The model can be further improved by introducing additional input parameters and by studying data normalization and the choice of activation functions. With continued improvement and optimization of the ANN prediction model, future research is expected to provide more informative prediction data for MRG materials and MR device design.
|
2021-12-17T16:36:11.167Z
|
2021-12-15T00:00:00.000
|
{
"year": 2021,
"sha1": "edb1f4073557792e9891c4c7bdb018c404fc7b3c",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/2053-1591/ac433d/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "cd6e930860c2f63848bc9bc0a76fb23c0033f9bd",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
}
|
149884811
|
pes2o/s2orc
|
v3-fos-license
|
Viewpoint Estimation using Triplet Loss with A Novel Viewpoint-based Input Selection Strategy
Viewpoint estimation is a fundamental procedure in vision-based robot tasks. A good viewpoint of the camera relative to objects can help the visual system perform better in both observation and manipulation. Recently, CNN-based algorithms, which can effectively extract discriminative features from images under challenging conditions, have been used to handle the viewpoint estimation problem. However, most existing algorithms focus on how to leverage the extracted deep features while neglecting the spatial relationship among images captured from various viewpoints. In this paper, we present a deep metric learning method for solving the viewpoint estimation problem. A triplet loss with a novel viewpoint-based input selection strategy is introduced, which learns more powerful features by incorporating the spatial relationship between viewpoints. Combined with the traditional classification loss, the presented loss can further enhance the discriminative power of the features. To evaluate the performance of our method, a dataset containing a large number of images generated from five different texture-less workpieces is built, and the experimental results show the effectiveness of the proposed method.
Introduction
Generally, human beings do not need to calculate the precise pose of a target object relative to their eyes; they roughly estimate the view of the target to determine the action. In vision-based robot tasks, precise pose estimation of the target is also highly challenging because of the requirement for accurate camera and coordinate calibration. Moreover, the fact that objects show different shapes under different views adds difficulty to pose estimation [1]. Hence, some researchers [1]-[3] believe that finding an optimal viewpoint, defined as a point on an object-centered view sphere, is helpful for better manipulation and observation. Viewpoint estimation has therefore become a meaningful research subject. For example, in 3D object retrieval tasks [3]-[4], estimating the viewpoint of objects can enhance the performance of object recognition, and in active vision tasks [2], viewpoint estimation is considered a pre-procedure of pose estimation.
Recently, in order to extract more robust and discriminative features, [5]-[7] have employed convolutional neural networks (CNNs) to handle the viewpoint estimation problem. Viewpoint estimation is solved as a classification problem and excellent performance is obtained. In practical applications, however, the difference between image features and the spatial distance between viewpoints are often incommensurate. In some cases, objects tend to have similar shapes and features when observed from the front and back sides, even though the distance between the two observation positions is large. The authors of [5] proposed a geometric loss to take this spatial relationship into account and improve the performance of viewpoint estimation. According to recent works, metric learning [8] can also be used to learn such spatial relationships and seems well suited to the viewpoint estimation task. In metric learning, the triplet loss [9], proposed by Weinberger and Saul, has been introduced for training CNNs in order to learn a metric or an embedding space in which samples from the same class are closer to each other than those from different classes. The triplet loss has been used to train CNNs for face recognition [9] and person re-identification [10]. Similarly, if the embedding of image features in the learned feature space can be adjusted on the basis of the spatial relationship between known viewpoints, the network can learn more discriminative features that take both the image itself and the viewpoints into account. In this paper, we propose a triplet loss function with a viewpoint-based input selection strategy and apply it to training a CNN so that the network learns discriminative features. To evaluate the proposed method, several workpieces with symmetry and low texture are modeled and used to build a large-scale viewpoint estimation dataset in a CAD environment. The results on the generated dataset demonstrate that our proposed method is feasible and efficient.
The remainder of this paper is organized as follows. The data generation procedure is introduced in Section 2 and the proposed method is described in Section 3. The implementation details and experiment results are given in Section 4. Section 5 concludes the paper.
Dataset Generation
In CNN-based supervised algorithms, a robust deep learning model must be driven by large quantities of labelled images. However, it is very expensive and time-consuming to collect and precisely label images from different viewpoints for a viewpoint estimation task. To reduce the burden of collecting images, CAD models, which are easy to obtain in industrial scenes and highly similar to the real objects, are used [11] to generate large-scale synthetic images instead of capturing images from the real environment.
We follow the previous work [12] and introduce an object-centered view sphere with a constant radius, as shown in figure 1. A point on the sphere is defined as a viewpoint. Note that we consider only the upper view hemisphere, because it is the workspace of most robot arms. To better describe the viewpoints, a spherical coordinate system is built on the hemisphere. In this coordinate system, 360 points are sampled along the longitude direction and 90 points along the latitude direction, so that every viewpoint can be denoted by its coordinate with polar angle θ and azimuthal angle φ. Since a huge computational burden would result if each viewpoint were treated as a separate class, we group neighbouring viewpoints into the same class. Specifically, a viewpoint node is designated every 45 points in the longitude direction and every 30 points in the latitude direction. The viewpoints closest to a node are assigned to that node's class, yielding 17 classes, so that the viewpoint estimation problem is simplified to a classification problem. The viewpoint classes are expressed as in equation (1), where (θ_i, φ_j) denotes the coordinate of a viewpoint node and the indices run over the number of slices in the polar and azimuthal directions, respectively. With the pre-defined view sphere, we program the CAD environment to make a virtual camera move automatically over the hemisphere and render the object as images with precise viewpoint class labels. However, because the rendering is carried out in a synthetic environment, there is a discrepancy between the rendered images and real images. To bridge this gap, we vary the light intensity and position during rendering and embed different background images taken from the real environment into the rendered images. As a consequence, these labelled processed images, denoted (x_i, y_i) where x_i is the image and y_i its label, can be used directly to learn a robust classifier that can be applied in the real scene.
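A minimal sketch of this class-assignment step follows; the 30-point latitude and 45-point longitude node spacing is taken from the text, while the exact handling of nodes near the pole (which yields the 17 classes) is not fully specified in the paper and is therefore an assumption.

```python
def viewpoint_class(theta_deg, phi_deg, lat_step=30, lon_step=45):
    """Snap a sampled viewpoint (polar angle theta, azimuthal angle phi) to its
    nearest pre-defined viewpoint node; the node coordinate serves as the class label."""
    theta_node = min(round(theta_deg / lat_step) * lat_step, 90)
    phi_node = (round(phi_deg / lon_step) * lon_step) % 360
    return (theta_node, phi_node)

# Example: viewpoint_class(17, 190) -> (30, 180)
```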
Method
A large-scale image dataset can be built using the above dataset generation procedure, which is enough for training a CNN-based classifier. When the CNN is trained on the dataset, a classification loss and triplet loss are jointly employed to encourage the network to simultaneously learn the image features and spatial relationships of different viewpoint images. In this section, triplet loss is introduced and a novel viewpoint-based input selection strategy is proposed to make the triplet loss better applied to the viewpoint estimation task.
Review on Triplet loss
Triplet loss, as its name suggests, is calculated on a triplet of samples. The goal of the triplet is to help the network find an embedding space in which samples from the same class are closer than those from different classes. Specifically, the input of the triplet loss consists of a pair of samples from the same class, (x_a, x_p), and a dissimilar sample x_n. Generally, x_a is called the anchor, x_p the positive sample and x_n the negative sample. Given such a group of inputs, the triplet loss can be expressed as in equation (2):
L_t = max(0, d(x_a, x_p) − d(x_a, x_n) + α) (2)
where α denotes the margin between the same class and different classes, and d is calculated by the squared Euclidean distance between the embeddings of two samples. By applying the triplet loss function, the features of similar samples are pulled closer together in the learned embedding space and those of dissimilar samples are pushed apart.
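A minimal PyTorch sketch of this loss (equation (2)) is given below; the margin value is an illustrative default, not a value from the paper.

```python
import torch

def triplet_loss(f_a, f_p, f_n, margin=1.0):
    """f_a, f_p, f_n: embeddings of anchor, positive and negative samples, shape (batch, dim)."""
    d_ap = torch.sum((f_a - f_p) ** 2, dim=1)   # squared Euclidean distance anchor-positive
    d_an = torch.sum((f_a - f_n) ** 2, dim=1)   # squared Euclidean distance anchor-negative
    return torch.clamp(d_ap - d_an + margin, min=0.0).mean()
```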
Viewpoint-based input selection strategy
Recall that the usual criterion for defining positive and negative samples is whether the samples belong to the same class, which pulls similar samples together and pushes dissimilar samples apart in the learned feature space. In the viewpoint estimation problem, however, there are quantitative differences between viewpoints. Therefore, we extend the application of the triplet loss and propose a novel viewpoint-based triplet input selection strategy, which defines the positive and negative samples according to the spatial distance between viewpoints. Specifically, for a single triplet loss calculation, three samples are randomly selected from the dataset. Two of them may come from the same viewpoint class, but all three cannot belong to the same class. On the one hand, if two of the samples belong to the same class, one is taken as the anchor x_a, the other as the positive sample x_p and the remaining one as the negative sample x_n. On the other hand, if the three samples belong to three different classes, the viewpoint distance between each pair of them is calculated, and the pair with the largest distance and the pair with the smallest distance are selected. These two pairs share exactly one common sample, which is taken as the anchor x_a; of the remaining samples, the one with the smaller distance to the anchor is taken as x_p and the one with the larger distance as x_n. To better quantify the difference between viewpoints, the two-dimensional coordinates of the viewpoint classes are uniformly transformed into rectangular coordinates, and the viewpoint Euclidean distance between two samples is then calculated. The selection strategy is illustrated more intuitively in figure 2. The triplet loss function is thus modified from equation (2) into the following form:
L_t = max(0, ||f(x_a) − f(x_p)||² − ||f(x_a) − f(x_n)||² + α) (5)
where f is the mapping of the network and f(x) represents the output vector of the last fully-connected layer.
Equation (5) reveals that the triplet loss is zero if the distance between negative sample and anchor is bigger than the summation of the margin and the distance between the positive sample and anchor. Otherwise, the loss is non-zero, which encourages the network to learn a better embedding space during the training process.
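The selection procedure described above can be sketched as follows; the planar Euclidean treatment of the viewpoint-class coordinates after the rectangular transformation is an assumption, and the helper names are illustrative only.

```python
import itertools
import math

def viewpoint_distance(v1, v2):
    # Euclidean distance between two viewpoint-class coordinates after the
    # transformation to rectangular coordinates (here taken as a simple planar mapping).
    return math.dist(v1, v2)

def select_triplet(samples):
    """samples: list of three (image, viewpoint_class) pairs, not all from the same class.
    Returns (anchor, positive, negative) images."""
    # Case 1: two samples share a class -> they become anchor and positive.
    for a, p, n in itertools.permutations(range(3)):
        if samples[a][1] == samples[p][1] and samples[a][1] != samples[n][1]:
            return samples[a][0], samples[p][0], samples[n][0]
    # Case 2: three different classes -> the sample shared by the closest pair and the
    # farthest pair is the anchor; its closer partner is positive, the farther one negative.
    pairs = sorted(itertools.combinations(range(3), 2),
                   key=lambda ij: viewpoint_distance(samples[ij[0]][1], samples[ij[1]][1]))
    closest, farthest = set(pairs[0]), set(pairs[-1])
    anchor = (closest & farthest).pop()
    positive = (closest - {anchor}).pop()
    negative = (farthest - {anchor}).pop()
    return samples[anchor][0], samples[positive][0], samples[negative][0]
```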
Combined with a cross-entropy loss term for classification, the loss function implemented in our case is expressed as:
L = L_ce + λ L_t
where L_ce is the cross-entropy between the prediction and the ground truth for each sample over the set of viewpoint classes, and λ is the influence factor of the triplet loss term in the whole loss function, chosen as 1 in this paper. The overall framework is shown in figure 3.
Figure 3. The implementation framework in the training process. Details of VGG16 are described in [13].
Minimizing L_ce helps the network learn features from the image itself, and minimizing L_t achieves two properties: 1) enlarging the distance between features from farther viewpoint classes and 2) reducing the distance between features from closer or similar classes. In other words, by minimizing the whole loss function, the network is encouraged to learn features from the image itself and simultaneously learn the spatial relationship between viewpoint classes. In fact, the triplet loss can also be regarded as a regularization term at the spatial level, so that the features learned by the network are more representative.
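A sketch of this combined objective, reusing the triplet term from the earlier sketch, is shown below; the influence factor lam corresponds to the weight set to 1 in the paper, and the function and argument names are illustrative.

```python
import torch.nn.functional as F

def combined_loss(logits, labels, f_a, f_p, f_n, margin=1.0, lam=1.0):
    """logits: class scores from the network; labels: ground-truth viewpoint classes;
    f_a, f_p, f_n: embeddings of the selected triplet."""
    l_ce = F.cross_entropy(logits, labels)            # classification (cross-entropy) term
    l_t = triplet_loss(f_a, f_p, f_n, margin=margin)  # viewpoint-based triplet term (see sketch above)
    return l_ce + lam * l_t
```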
Experiment
In this section, we will present the details of the dataset preparation and the experiment implementation, and then show the experimental results and corresponding analysis.
Data preparation
The viewpoint estimation dataset consists of five different workpieces, whose images are captured from various viewpoints. All the chosen workpieces have the characteristics of low texture, uniform colour and symmetry, which are very common in industrial scenes. We follow the procedure introduced in Section 2 to generate the workpiece image dataset, using 3DsMax and Maxscript to program the movement of the virtual camera and the rendering process. For each viewpoint class of a workpiece, the virtual camera first moves to the pre-defined viewpoint node to render images and then moves to 30 viewpoints randomly selected near this node to render more images. During rendering at each viewpoint, the workpiece rotates around its own central axis to cover more conditions. Every rendered image is automatically annotated with a precise viewpoint class label based on the position of the virtual camera. This process is repeated 17 times, since 17 viewpoint classes are defined in Section 2, so that each workpiece contributes about 15000 images. When the images are rendered, the intensity and position of the lighting are randomly changed, and background images captured from the real environment are embedded into the rendered images to bridge the gap between the synthetic and real domains. In addition, the whole dataset is split into a training dataset for learning and a testing dataset for evaluation at a ratio of 7:3. The dataset for each workpiece is denoted WP1 to WP5. Some images from the generated dataset (WP3) are shown in figure 4.
Comparison
To demonstrate the advantage of our proposed method, the geometric loss implemented in [5] and [12] is chosen for comparison. The geometric loss can be written as:
L_geo = −Σ_v exp(−d(v, v_gt)/σ) log P(v|I)
where P(v|I) is the predicted probability of viewpoint class v for input I, v_gt is the ground-truth viewpoint, d(·,·) is the viewpoint distance and σ is a tuneable parameter. The geometric loss clearly also considers the distance between viewpoints. However, the main difference is that the geometric loss is calculated from the prediction and the ground truth, whereas the triplet loss is calculated from the feature vectors output by the network.
Implementation details
As described in Section 3, we choose a CNN as the model to solve the viewpoint estimation problem, with the 16-layer VGG-Net [13] as the CNN-based model. Before performing the experiments, the CNN-based model is initialized with weights pre-trained on ImageNet. Three different loss functions are implemented for comparison: 1) the simple cross-entropy loss (L_ce); 2) cross-entropy loss plus geometric loss (L_ce + L_geo); 3) cross-entropy loss plus triplet loss (L_ce + L_t). In every training step, a batch of training images, all resized to 300×300 pixels and randomly cropped to 227×227 pixels, is fed into the network. The model is then optimized using the Stochastic Gradient Descent (SGD) algorithm with a batch size of 32 examples, a learning rate of 0.001 and a momentum of 0.9 under the implemented loss function. In a single experiment, all training images are trained for 10 epochs so that the loss converges. All experiments are conducted with the deep learning toolkit PyTorch [14] on our computer with an Intel i7-5820k CPU, accelerated by an Nvidia GTX Titan GPU. We notice that our proposed method outperforms the methods applying the simple classification loss or combining the geometric loss and the classification loss: for the viewpoint estimation task, our method improves accuracy over the other two methods by 8.2% and 2.1%, respectively. The improvement demonstrates that the triplet loss can effectively encourage the network to learn more discriminative features.
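A hedged sketch of this training configuration is given below (PyTorch, as used in the paper); the dataset path and loader are placeholders, and the replacement of the final classifier layer for 17 viewpoint classes is an assumption about how the pre-trained VGG16 was adapted.

```python
import torch
import torchvision
from torchvision import transforms

transform = transforms.Compose([
    transforms.Resize((300, 300)),     # resize to 300x300 pixels
    transforms.RandomCrop(227),        # random crop to 227x227 pixels
    transforms.ToTensor(),
])

model = torchvision.models.vgg16(pretrained=True)   # initialized with ImageNet weights
model.classifier[-1] = torch.nn.Linear(4096, 17)    # 17 viewpoint classes (assumed adaptation)

optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

# train_set = torchvision.datasets.ImageFolder("path/to/WP_dataset", transform=transform)  # placeholder path
# loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
```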
Visualization.
To further demonstrate the effectiveness of our method, the learned embedding space of WP1 is plotted in figure 5 using TensorBoard [15], a toolkit often used in deep learning research to visualize learned feature representations with the aid of dimensionality reduction algorithms so as to explore the distribution of the learned features. Here we choose t-SNE [16] as the dimensionality reduction algorithm to reduce the feature vector output by the last fully-connected layer from 4096 dimensions to 2 dimensions. The three panels of figure 5 correspond to L_ce, L_geo + L_ce and L_t + L_ce.
Figure 5. Visualization of learned features tested on one of the datasets (WP1). The data points are 2D features reduced from high-dimensional features using t-SNE. Features from class (0,0) are marked with red boxes and those from (0,180) with blue boxes. Best viewed in colour.
The three graphs show that minimizing only the classification loss can cluster the features of similar images, whereas minimizing the geometric loss or the triplet loss distinguishes features according to viewpoint distance, making the features of similar-looking images that belong to different classes more discriminative. For example, the images from classes (0,0) and (0,180) show similar image features, while the two viewpoint classes are relatively far apart. Features of the two classes learned only with the classification loss are very close, because that loss focuses only on the images themselves. In contrast, L_geo and L_t effectively push the features farther apart based on the viewpoint distance. Moreover, the feature embedding learned by our method exhibits tighter clustering and fewer misclassified data points, which demonstrates that the method encourages the network to be more representative and enhances performance in the viewpoint estimation task.
Conclusion
In this work, a triplet loss function that allows better handling of the viewpoint estimation problem has been implemented. We focus on extending the application of the triplet loss and propose a novel viewpoint-based input selection strategy for it. In addition, a complete workpiece image dataset for viewpoint estimation is built using different workpieces and a CAD environment. Compared with the classification loss and the geometric loss, our method learns more discriminative and structural features, which demonstrates that the method is effective and feasible for the viewpoint estimation task, and our work may be of importance for vision-based robot tasks in industrial applications.
|
2019-05-12T13:11:08.593Z
|
2019-04-01T00:00:00.000
|
{
"year": 2019,
"sha1": "ff22f0d24164e8fe38b3d0b89f63dd38933e85fe",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1207/1/012009",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "da77fdec438a759ab1a978dd8a2359a458023a6c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Physics"
]
}
|
147375536
|
pes2o/s2orc
|
v3-fos-license
|
Enhancing integrative motivation: The Japanese-American Collaborative Learning Project
The Collaborative Learning Project is a language exchange program in which American and Japanese university students have the opportunity to interact with native speakers over the course of a three-week period. This paper reports the outcomes of the Collaborative Learning Project in terms of its effectiveness in fulfilling student expectations and their integrative motivation, i.e. social and cultural motivation. Using quantitative and qualitative data, this research includes the first project (2012) as a preliminary study, and the second project (2013) as the basis of the formative assessment. In their responses to questionnaires, the majority of the students reported that the program fulfilled their expectations and fostered their integrative motivation. After participating in the project, the participants were motivated to continue learning Japanese and seemed to be more interested in the study abroad program/studying abroad. Subjects: Applied Linguistics; Language Acquisition; Motivation
ABOUT THE AUTHOR Fumie Kato, PhD, is an associate professor in the Department of Languages and Culture Studies, at the University of North Carolina at Charlotte. Her expertise lies in applied linguistics, specifically second language acquisition, student motivation and learning strategies. Providing foreign language learners with an authentic learning environment is vital to increase learners' motivation. However, if the foreign language learners live in a place where there is minimal need for the use of their target language on a daily basis and they have fewer opportunities to meet with native speakers, it is challenging for instructors to address this important issue. This paper illustrates how instructors can introduce the authentic learning situation into their courses to enhance the student motivation level. Similar projects may be considered during different periods or in collaboration with different universities.
PUBLIC INTEREST STATEMENT
Providing foreign language learners with an authentic learning environment is vital to increase learners' foreign language interests and also heighten their motivation levels. As Japanese is categorized as one of the most difficult foreign languages to master, enhancing learners' motivation is a focal point emphasized to foster their continuous learning. In the Collaborative Learning Project, American and Japanese university students took part in a language exchange program over the course of a three-week period. American participants' expectations were fulfilled by this project; they benefitted immensely from interacting with the Japanese university students in their home country, which, in turn, heightened their motivation to learn Japanese. This motivation may have influenced the increase in number of participants in the Japanese study abroad program. While this study reported findings from a program between Japanese and American students, the program model could easily be adapted to fit other languages and other countries.
Introduction
This paper analyzes the Collaborative Learning Project, in which American and Japanese students interact at an American university for three weeks. In this program, study abroad students from Sophia University in Japan visit the University of North Carolina at Charlotte (henceforth, "UNC Charlotte"), giving them the opportunity to be immersed in American society and improve their English language proficiency. Likewise, students at UNC Charlotte practice their Japanese language skills by joining in activities with students from Sophia University, thereby enhancing their motivation to learn Japanese.
Unless specific activities have been arranged beforehand, it is generally difficult for visiting students to have sufficient opportunity to interact with their native counterparts when taking part in a short-term program. Schedules for language learning, both in and out of the classroom, often limit the time visiting students have to find native students with whom they can practice speaking. While students in study abroad programs visit a foreign country in order to immerse themselves in that society, they often spend time only with instructors, caretakers of the program, or other people in the community in planned off-campus social activities.
On the other hand, foreign language students studying in their home country have difficulty finding sufficient opportunities to practice their language of study, especially American students who want to practice with native Japanese speakers. Japanese language students at UNC Charlotte, for example, find few occasions to meet with native Japanese speakers or use their Japanese abilities in their everyday lives. UNC Charlotte is located in Charlotte, North Carolina, an area of the United States where there is minimal need for the use of the Japanese language on a daily basis. In addition, Japanese is a minority among the foreign languages spoken in the southeastern region of the United States. Although students study the Japanese language for a few years at UNC Charlotte, in many cases they have never met a native speaker, and have not had the chance to use the knowledge and language skills they have acquired during their university studies.
Therefore, a short-term language program that brings Japanese students to the United States presents an excellent opportunity for American students to engage with native Japanese students during their visit. It is also an ideal arrangement for Japanese students to be welcomed by American university students when they first arrive in Charlotte. This collaborative learning style is thus considered to be a "win-win" strategy. However, how does this strategy actually affect the participants? While the research includes the experiences of both Japanese and American participants, the emphasis of the research has been placed on the American students due to the unavailability of Japanese student data. The study discussed in this paper was undertaken in order to determine how participants, specifically American students, view this form of language and cultural exchange. This paper reports the outcomes of this exploration of the Collaborative Learning Project conducted in 2012 and 2013 at UNC Charlotte regarding its influence on the enhancement of their motivation to continue their language learning as well as their desire to study abroad in Japan.
Background
Many language researchers have identified motivation as one of the most influential factors in foreign language acquisition (e.g. Kato, 2002;MacIntyre & Gardner, 1994;Oxford, 1996;Tremblay & Gardner, 1995). Kato (2007) investigated student motivation among Japanese language learners at UNC Charlotte in 2003. Her research, based on a model including eight different kinds of motivation (Intrinsic, Integrative, Instrumental, Self-confidence, Anxiety, Motivational strength, Cooperative and Competitive), found that after Intrinsic motivation, i.e. doing things for the enjoyment of it, Integrative motivation was the highest form of motivation among language learners.
Integrative motivation and Instrumental motivation, which Gardner and Lambert introduced in 1972, relate specifically to language learning. While instrumental motivation involves learning a second language for a practical purpose, such as furthering a career, improving social status, or meeting an educational requirement, integrative motivation refers to identification and desire to interact with another ethnographic group. Many researchers (e.g. Dörnyei, 2001;Gardner & Lambert, 1972;Hernández, 2010;Kato, 2010;Masgoret & Gardner, 2003) have demonstrated that language learners with high integrative motivation focus on developing their language proficiency and "seem to seek out more opportunities to interact with native speakers" (Hernández, 2010, p. 652), and eventually seek out the opportunity of studying abroad.
Since the publication of the results of studies on student motivation in 2003, the Japanese Studies Program at UNC Charlotte has employed several teaching strategies to better fit the needs of students driven by integrative motivation (Kato, 2010). The Collaborative Learning Project, one of these teaching strategies, was developed to enhance students' integrative motivation. This paper focuses on the 2012 and 2013 versions of the Collaborative Learning Project, which was first implemented in spring of 2012.
As American students of Japanese generally do not have sufficient access to authentic language use in communicative contexts, this project offered opportunities for them to meet with native Japanese university students while in their home country, i.e. the United States. However, because American students had limited time to meet with the Japanese students due to their regular class schedule in the spring semester, the primary purpose of this project for them was not specifically to enhance linguistic abilities, but to provide learners with the opportunity to interact with native Japanese students. This interaction was anticipated to enhance their Japanese learning attitude and their integrative motivation. Martinsen (2008) mentioned that only "cultural sensitivity predicted students' improvements in language skills" (p. 504), and the study also stated "another important variable in language learning is student motivation" (p. 506).
Focusing on the 2012 and 2013 Collaborative Learning Projects, this paper explains program procedures, analyzes student data, and examines how effectively students were motivated. Furthermore, it explores if heightened integrative motivation influences participation in long-term study abroad programs in Japan. Long-term programs have been proven to impact language learning and "the highest benefits are associated with longer-stays" (Cubillos & Iivento, 2013, p. 50). In addition, this study explores the perspective participants gained by spending time together and how they were motivated in their Japanese learning. Questionnaires and reflection papers were analyzed in order to evaluate if the project fulfilled student expectations.
Two primary research questions are: (1) Can the Collaborative Learning Project fulfill student integrative motivation?
(2) How does the project influence students in their Japanese learning afterwards? Student data from both the 2012 and 2013 projects were analyzed and interpreted via methodological triangulation, utilizing the outcomes obtained through quantitative and qualitative research methods. On the basis of the formative analysis of the 2012 student data, the revised Collaborative Learning Project was carried out in 2013.
Procedures
Participants at UNC Charlotte were learners of Japanese and participants at Sophia University in Japan were students majoring in the Sciences who were interested in improving their English abilities in the United States.
The Japanese Studies Program did not offer a course specifically for the project, and participants in the project attended activities in addition to their regular university schedule. Furthermore, one of the three weeks of the program took place in March during a mid-semester recess, which meant many UNC Charlotte students were out of town. Students at UNC Charlotte had to balance the exchange program with their academic responsibilities, and thus the length of their time with native Japanese students was significantly limited. In order to overcome this challenge and use the available time spent together effectively, email correspondence was incorporated into the program. Program participants were encouraged to communicate as pen pals for one month before the visiting students arrived in Charlotte, thus allowing them to get to know each other smoothly over time rather than meeting without any prior knowledge of their counterparts.
Participants
At the start of the spring semester in January, instructors in intermediate (3rd and 4th semesters) and upper intermediate (5th and 6th semesters) Japanese language courses explained the procedures of the research study conducted in conjunction with the Collaborative Learning Project to all of their students and recruited students who were interested in participating in this project. Consequently, 34 students in intermediate and upper intermediate courses participated in the project. At the beginning of February, an orientation was held to instruct the participants about the project's procedures and requirements, e.g. correspondence with their partners during February, an overview of the Japanese students' visit in Charlotte, extracurricular activities, and a preliminary questionnaire about the project afterwards. All of the participants complied with the terms of the project, and agreed to participate in the research study, allowing researchers to utilize their questionnaire responses. Sophia University advertised this project to all of the students majoring in the Sciences in the previous year, i.e. 2011, and the 29 students who applied to this program visited UNC Charlotte in 2012 (N = 63). Table 1 below shows the participants.
The American students ranged in age from 20 to 25 years old. All American participants, with the exception of three students from China, were native English speakers and all Japanese participants were native Japanese speakers. The students were divided into 12 pairs and 11 groups of 3. Each pair contained one American and one Japanese student and each group was composed of one Japanese and two American students. All of the participants were instructed to exchange emails with their assigned pen pals using their respective target languages during February.
Schedules
The Japanese students arrived in Charlotte on 1 March 2012, to a welcome reception with their pen pals from UNC Charlotte. It was the first time all of the participants met with each other after engaging in the pen pal experience for one month. During their time in Charlotte, the Japanese students visited science and technology classes (e.g. Computer Science, Mechanical Engineering), visited local businesses (e.g. The Charlotte Technical Center, Microsoft), and attended sports events (e.g. basketball and ice-hockey games). UNC Charlotte students could not join their peers for all of the events due to their regular class schedules during the semester, and took part in only a few events. Beyond those planned activities, it was up to the American students to spend time together with the Japanese visiting students while accommodating their own schedules. Three weeks later, on 22 March 2012, all of the participants attended a farewell reception.
Outcomes of the questionnaires (quantitative analysis)
A few days after the project's end, UNC Charlotte students were required to write reflection papers and respond to a questionnaire developed by the program coordinators. The questionnaire included 12 questions in total, including four "yes" or "no" questions and one five-point Likert-type scale question.
Table 1. Participants in 2012 (UNC Charlotte and Sophia University). Notes: "F" stands for female students, and "M" stands for male students.
Three significant questions specifically helpful in evaluating the impact of this project are questions (hereafter "Q(s).") 3, 5, and 12. Table 2 shows the outcomes of the three questions.
Q.3 asked participants if they enjoyed the project. As shown in Table 2, most participants "strongly agreed" with this statement. Q.5 asked if the project met student expectations. Three-quarters of the students agreed. Q.12 asked if students would participate again if offered the opportunity. The majority of the students desired to participate again. The high number of positive responses demonstrates that most of the participants benefited from the first trial of this project. The assumption and analyses leading to this conclusion are found below.
The positive impact of the program is demonstrated most prominently in the participants' focus of study and their enhanced desire to visit Japan. Twenty-six out of thirty-four participants (76%) declared Japanese as their Major or Minor, indicating most of the participant motivation levels in learning Japanese were high. Furthermore, in terms of participant desire to study abroad, as only 3 of the 30 students had been to Japan, over two-thirds reported that they desired to travel to Japan to study in the future.
The Collaborative Learning Project was treated as an additional assignment, and the Japanese Studies Program added one extra point to each participant's final score (perfect score of each course is 100 points). Q.4 asked what motivated the students to participate in the project. Seventy percent of the students responded that their primary motivation was to interact with Japanese students, i.e. integrative motivation, and 60% responded that they wanted to improve their Japanese skills, i.e. instrumental motivation. Approximately a quarter of the students responded that they wanted to receive extra credit. The outcome of this question verified that the students' integrative motivation was strong.
Q.6 asked how many times students corresponded with their partners during February. In the analysis of all responses, the mean was 4.3 times, which indicates that students corresponded approximately once a week. Q.7 through Q.11 asked how actively UNC Charlotte students participated in the activities planned by the university and social events students coordinated independently. Two-thirds of the UNC Charlotte students joined at least one planned activity over the duration of the project. However, two-thirds of American students also spent time with the Japanese students individually; one half of the students met less than three times and the other half met more than four times, indicating that though UNC Charlotte students were unable to attend many planned activities, they spent time together with the Japanese students outside the class hours, indicating once more high levels of integrative motivation among this population. Their social engagements included eating dinner, going shopping, having coffee, ice skating, watching movies, and playing video games.
Outcomes of the reflection papers (qualitative analysis)
In total, 21 of 34 students from UNC Charlotte (response rate: 62%) submitted open-ended reflection papers written in English without any constraints aside from being approximately one page in length. These reflective essays were then collected and a conceptually clustered matrix was employed to group students' comments. Researchers then used these clusters to generate student opinions on the program that could be interpreted clearly. In total, 107 student evaluations, including opinions and comments written in the reflection papers, were extracted and clustered into 14 categories. For the purposes of this study, 14 categories are further classified into two groups: positive (nine categories) and negative (five categories) comments. Each group is named under a category header (see Table 3).
Positive comments
Nine categories are included in the positive comments (see Table 3). In total, 86 positive student reflections on this project were recorded. The majority of the students were excited to spend time with Japanese students outside of the academic setting, indicating once more strong integrative motivation. This outcome coincides with the results collected from the survey above. One student wrote, "We went to Red Robin … [and we talked] about the differences between American and Japanese women's personalities.… the night was very interesting all in all." All of the participants expressed enjoyment in interaction with the Japanese students. Another student wrote, "I invited all of the exchange students to my birthday celebration … including our 3202 [upper intermediate course] classmates and [we played] games." In the second highest category of positive comments, students detailed their excitement about participating in the Collaborative Learning Project. For example one student reported, "I had been crazy excited [about] … an opportunity to make new friends [from] other countries," and another student found the result to be an "extremely enjoyable … an incredibly enriching experience." The first and the second positive categories verify that the students were pleased to fulfill their high integrative motivation.
The same number of students referred to the pen pal correspondence in a positive manner. Some viewed it as "a very good idea," and an "interesting experience." When participants received responses from their partners, they were "very happy," "excited," and they "learned a lot about" their pen pal. Through correspondence, students cooperated with one another; one student stated that "[I] helped him with his English, as he helped [me with my] Japanese." Project participants got in touch even before the Japanese students visited, as demonstrated by the fact that several students "soon became Facebook friends," and the "[correspondence] helped to form [bonds] between American and Japanese students." These comments illustrate that corresponding with Japanese students for the first time was a meaningful experience for UNC Charlotte students. It also helped these students, who had been studying Japanese for three or four semesters, to satisfy their integrative motivation by providing authentic interactions with native speakers. The fourth largest category of positive responses is comprised of comments on language/reactions to language. This is easily understandable because all of the participants are currently learning foreign languages. It was also the first opportunity for UNC Charlotte students to speak with Japanese university students possessing limited English abilities. Because current Japanese study abroad students learning at UNC Charlotte must demonstrate proficiency by achieving a certain score on the TOEFL exam, most UNC Charlotte students have never met and spoken with native Japanese students whose English abilities are less than fluent. One student stated, "We all struggled through communication with each other." Another student recorded, "the conversation got awkward just because neither of us really knew what to say." Gradually the students "managed to understand each other." One participant noted, "I got to work on my Japanese and they got to work on their English," and "this was a great experience to help me to try to explain myself in Japanese." As all of the students lacked proficient language abilities, these reflections indicate that participants tried to do their best in order to communicate with one another. Finally, one American student noted improvement in Japanese students' English by saying, "their English improved a great deal since the beginning of the program." Categories five, six and eight address the positive aspects of the project. Many students commented on the value of participation and social interaction with native Japanese students, i.e. integrative motivation. For example, seven students noted that the program was "a great experience" and they "loved having a pen pal"; eight students would "certainly like to participate [in the program] again," and one student wrote, "this program will be something that I will never forget." Six of the comments, such as, "I believe we made a lasting friendship," and "we still wrote even after she returned home," illustrate the depth of friendships formed in the program.
Category seven mentions the cultural overlap manifested in similarities and differences of students' tastes. Seven students were excited to discover cultural similarities and differences, and mentioned that they "[had] very similar taste" and watched "a lot of the same anime." One student commented that she "enjoyed watching [others'] reactions" to the cultural differences between the United States and Japan. A study conducted by Hernández (2010) shows a positive relationship between motivation, interaction with the culture, and the development of speaking proficiency. The Collaborative Learning Project demonstrates the value of discovering a foreign culture through direct communication, evident in the experiences of the American participants in this program who, by interacting with their Japanese counterparts, learned about the similarities and differences between Japan and America.
As reflected in the ninth category in Table 3, six students' motivation toward study abroad programs in Japan increased. Student reflections include, "my desire to visit Japan grew and grew," and "I liked the experience which made me want to go to Japan even more." The students who wrote these comments later did attend a study abroad program in Japan. One of these students almost gave up the opportunity due to the fact that it was his senior year, but he finally managed to find the time to attend a short-term summer program in Japan. The other student did not initially seem to be interested in the study abroad program in Japan, perhaps due to her Spanish Major. However, surprisingly, she attended the program in Japan for one academic year after participating in this project. Beyond these two students, this project motivated and seemed to influence other participants' attitudes toward going to Japan as a study abroad student. These students were good examples of how strong integrative motivation can eventually encourage students to seek out the opportunity of a study abroad program (Hernández, 2010).
Negative comments
Twenty-one negative comments about the project were extracted in the student reflections. These opinions were clustered into five categories (see Table 3).
As already anticipated, the majority of negative comments involved the difficulty of coordinating opportunities to meet with the Japanese students during the spring program. Representative student opinions include, "I was sadly only able to meet with him once … [as I have] a very busy [schedule]," and "Unfortunately I did not do as many activities with the Japanese students due to work and school schedule." Though negative, these reflections, by their regretful sentiment, confirm that the UNC Charlotte students participating in this program possess strong integrative motivation. Although they were unable to do so, they wanted to spend more time with the Japanese students and seek out interaction with this ethnographic group.
Four students claimed that correspondence with their pen pals was not reciprocal. Students commented, "I emailed him three times and received [an email] back only once," and "I wrote my pen pal trying my best to write entirely in Japanese. He wrote back two sentences." Low correspondence rates of Japanese students were found to be caused by their final examination schedules in Japan.
The following two issues were also identified as problematic: (1) lack of transportation and (2) expense of the events. Two students did not have a means of transportation, namely a car, so it was difficult for them to organize a trip with their pen pals. This is understandable, as it is a challenge to go anywhere without a personal vehicle in the Charlotte region. One student said, "A lot of the events did cost quite a lot of money." This economic issue seemed to prevent him from participating in the events.
Formative evaluation
Responses to three significant questions, i.e. Qs 3, 5 and 12, and the outcomes of the qualitative analysis indicate that the program in 2012 was handled satisfactorily, although there were areas for improvement. The second project was implemented in 2013 and built upon this critical feedback in order to address the lacking areas.
The 2013 program implemented a new procedure of creating small correspondence groups rather than pairs in order to address the outcomes of the quantitative and qualitative analyses, and in consideration of the short amount of time allocated for correspondence between pen pals. Furthermore, when the program organized groups in 2013, two issues were considered. One of them was the issue of student access to a personal vehicle, because a car is required for traveling around the greater Charlotte metropolitan area. At least one student who owns a car was included in each group. The other issue to consider was the integration of students who return to their hometown and students who remain in Charlotte during spring break. The groups were organized so that the number of group members who would be away during spring break was balanced among groups.
Formative evaluation revealed the following suggestions for improving the program in 2013: (1) Create small correspondence groups, rather than student pairs, to exchange emails.
(2) Request each student's availability during spring break.
(3) Ask each student if he or she owns a car.
(4) Integrate spring break availability and transportation options into the formation of groups.
Participants
Through the same methods utilized in 2012, 34 students in intermediate and upper intermediate courses at UNC Charlotte applied for this project and participated in spring 2013. Twenty-three Japanese students majoring in Sciences at Sophia University visited UNC Charlotte. In total, 57 students participated in the Collaborative Learning Project from 1 March through 23 March 2013. American students ranged in age from 20 to 28 years old. All the Japanese participants were native Japanese speakers and all the American participants, with the exception of three students (one each from Vietnam, China, and South Africa), were native English speakers. Table 4 shows the breakdown of 2013 participants.
Eleven groups were formed based on student availability and transportation. All students were required to exchange emails among their group during February 2013.
After the program, all American students responded to the same questionnaires made for this project in 2012. In addition, they were required to submit a reflection paper after completion of the program.
Comparison of the outcomes of the questionnaires (quantitative analysis)
UNC Charlotte student responses from 2012 and 2013 were compared. A list of the student questionnaire responses is shown in Appendix 1. Participants who declared a Japanese Major or Minor in 2013 reflected similar rates to those recorded in 2012. The outcomes of the three vital questions in the evaluation of the project, i.e. Q.3, Q.5 and Q.12, are shown in Table 5.
Outcomes of the student responses to only these three significant questions were statistically compared. Q.3 uses an ordinal scale and all observations from 2012 and 2013 are independent. Based on these conditions, the Wilcoxon-Mann-Whitney test was conducted to compare the outcomes of Q.3 for 2012 and 2013. The resulting p-value from the two-sided test was .2072 (>.05), indicating that the responses to Q.3 did not differ significantly between the two years.
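To make the statistical procedure concrete, the following is a minimal sketch of such a comparison in Python with SciPy. The ordinal response values shown are hypothetical placeholders rather than the actual survey data; only the test itself mirrors the analysis described above.

from scipy.stats import mannwhitneyu

# Hypothetical ordinal responses to Q.3 (e.g., 1 = lowest rating, 5 = highest);
# the real survey data are not reproduced here.
q3_2012 = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]
q3_2013 = [5, 4, 4, 5, 5, 4, 5, 3, 4, 5]

# Two-sided Wilcoxon-Mann-Whitney test on two independent ordinal samples.
stat, p_value = mannwhitneyu(q3_2012, q3_2013, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")  # p > .05 would indicate no significant difference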
Table 4. Participants in 2013
Notes: "F" stands for female students, and "M" stands for male students.

Responses to the rest of the questions are shown in Appendix 1. In terms of correspondence rates (Q.6), participants in 2013 corresponded much more frequently than the students in 2012. Seven participants reported that they exchanged emails "more than fifteen" times and three of them even corresponded "twenty plus" times. One of the strategies implemented in 2013 was to organize participants in small groups rather than in pairs. This strategy seemed to be effective in increasing correspondence rates.
Q.9 through Q.12 asked the participants about which program activities they joined. Comparison of their responses between 2012 and 2013 shows that participants in 2013 attended the events more actively, spent much more time together outside of the events, met the Japanese students more frequently, and took part in other social activities such as bowling, playing video games, and visiting the houses of their pen pals. Increased correspondence rates between participants in 2013 might have stimulated their integrative motivation more than in 2012.
Outcomes of the reflection papers (qualitative analysis)
The participants at UNC Charlotte were asked to write a reflection following the same procedure implemented in 2012. In total, 29 papers (15 intermediate and 14 upper intermediate) were collected out of the possible pool of 34 (response rate: 82%). In total, 157 evaluations and comments were extracted and clustered following the 14 categories of the 2012 analysis. Furthermore, seven more categories, three positive comments and four suggestions, were added in 2013 for a total of 21 categories. Table 6 displays and compares the 2012 and 2013 student comment results.
Responses for each category are dichotomous variables and the observations for both 2012 and 2013 are independent. Chi-square tests were thus conducted on the 14 categories in order to compare the number of student comments between 2012 and 2013. As the seven categories were newly added in 2013 and could not be compared with the results of the year prior, these were excluded from the comparison test. The results indicate that the three categories found to vary significantly between the student reflections of the two years include "Exciting pen pal correspondence," "Made friends" and "Motivation to study abroad" (see Table 6). The results in the remaining 12 categories do not differ between 2012 and 2013.
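As an illustration of how one of these category comparisons can be computed, the sketch below runs a chi-square test on a 2 × 2 contingency table in Python with SciPy; the counts are hypothetical placeholders standing in for the Table 6 data.

from scipy.stats import chi2_contingency

# Hypothetical counts for one comment category: rows are years (2012, 2013),
# columns are (students who mentioned the category, students who did not).
table = [[5, 16],
         [15, 14]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")  # p < .05 would mark a significant difference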
Within the category "Made friends" (p = .0354 < .05), the participants in year 2013 wrote significantly more positive comments about their friendships than those who participated in 2012. Regarding the category "Motivation to study abroad" (p = .0115 < .05), although only one student in 2013 mentioned study abroad per se, 11 students wrote that they intended to visit Japanese students in Japan and that they had concrete arrangements to go, by stating comments such as, "I have made plans to meet up with them when I visit Tokyo," and "XX gave me his address in Tokyo … I need to go and see him … I could stay at his house and we would ride motorcycles together." These results suggested that the project in 2013 enhanced American students' motivation to travel to see their friends in Japan, indicating an increase in integrative motivation (Hernández, 2010).
Since they were successful in forming friendships during this short period, many participants in 2013 were very sad when they attended the farewell party. In one of the new positive comments added in 2013, "Sad to Say 'Goodbye'," seven students expressed their sadness by writing comments such as, "Everyone was sad when they had to leave," and "The last day of the program was surely the saddest. The group of Sophia girls … nearly shed tears as we said our goodbyes." No students in 2013 mentioned the inconvenience of not having their own personal vehicle. This indicates that rather than acting independently, students interacted in groups, which reduced the need to worry about transportation. The program was able to correct the transportation issues from 2012 by including at least one student who had a vehicle in each of the 2013 groups.
Students in 2013 proposed three kinds of suggestions: "Post Schedule Sooner," "Longer Program" and "Not Groups, but Pairs." Nine students stated that they could have attended more events if they had received the program agenda in advance, because they could have arranged their schedules accordingly. Comments such as "[We could have been] able to schedule our jobs," illustrate the students' concern.
Five students requested that the program be longer than three weeks, as the following comments illustrate: "Three weeks seemed like five days," and "The program was very short, and I wished we had a longer time to spend with each other." There were no students in 2012 who commented on the length of the program. These comments indicate how much the participants in 2013 wanted to spend time with the Japanese students and demonstrate strong integrative motivation, manifested in an insistent desire to get to know their peers.
Discussion
The revised project was carried out in 2013 on the basis of the preliminary study from 2012. Both quantitative and qualitative student data analyses reveal that the 2013 project was conducted more satisfactorily; for example, students in 2013 corresponded more frequently, attended the events more actively, formed closer friendships, and fulfilled their expectations more fully than the participants in 2012. In both years, moreover, the majority of the students found that this project offered valuable opportunities for social interaction with native Japanese students. This Collaborative Learning Project found that many students already possessed strong integrative motivation and enhanced it even further by bringing them together with Japanese students for a three-week period in their home country. This outcome is in line with the study of Norris-Holt (2001), who saw students' motivation elevated after having sessions with the target language group.
In terms of research Q.1, whether the project could meet students' integrative motivation, the majority of the participants indicated that the project fulfilled their expectations, i.e. their integrative motivation. Thus, the vast majority would like to participate in the project again if it were to be offered in the future.
Regarding research Q.2, i.e. how the project influenced student motivation levels, the project was found to increase students' motivation to learn Japanese continuously and to strengthen students' integrative motivation, specifically in terms of their motivation toward study abroad in Japan. Six students in 2012 wrote about their future study abroad program, and in 2013, 12 students were looking forward to meeting with Japanese friends again in Japan. Table 7 shows the student participation numbers in the Japanese study abroad program from 2003 to 2014.
Twenty of the participants from the Collaborative Learning Project in 2012 and 2013 are included in the above table. In comparing student participation rates in Japanese study abroad programs before and after 2012, student numbers were found to increase slightly after 2012. It is not legitimate to conclude that mere participation in the Collaborative Learning Project directly caused students to attend a study abroad program in Japan. However, the increase of participation in Japanese study abroad programs after 2012 indicates that it is likely that the Collaborative Learning Project influenced student motivation levels and gave them a more positive attitude regarding the idea of studying abroad in Japan.
In terms of student motivation levels toward learning Japanese afterwards, for example, there was only one student who negatively responded to the question regarding whether or not the program had fulfilled his expectations (Q.5) in 2013. He corresponded with his pen pal four times and participated in three events. The "amount of exposure students have to the target language" (Reynolds-Case, 2013, p. 312) is crucial. The limited amount of interaction from three meeting times was not enough to fulfill his expectations. However, he wrote, "(I made) a few good friends with Sophia students … I hope that I can continue learning more Japanese and more about Japan with them." He responded positively to the question asking if he would like to participate in the project again, indicating his motivation for continuing to learn Japanese was high. Although this student is an example of someone whose expectations were not met, his motivation to learn Japanese afterwards was still enhanced.
Conclusion
Providing students with authentic learning opportunities through directly communicating with native speakers and responding to students' high integrative motivation are quite challenging in the environment at UNC Charlotte. Though only a short-term program, the Collaborative Learning Project provided students with an opportunity to socialize and spend valuable time with native Japanese university students. This study has shown that students can enhance and act on their integrative motivation while in their home country. Furthermore, their heightened integrative motivation can increase their desire to attend Japanese study abroad programs.
A limitation of this study though was the time restraint on interactions between students due to the necessity of conducting the program during the semester. Future projects could revise this issue by offering the program in another period in order to lengthen the amount of time students spend together. Also, while this study was conducted on a program facilitating conversation between American and Japanese students, other programs could easily adapt this model to other languages and other countries.
Underwater Source Counting with Local-Confidence-Level-Enhanced Density Clustering
Source counting is the key procedure of autonomous detection for underwater unmanned platforms. A source counting method with local-confidence-level-enhanced density clustering using a single acoustic vector sensor (AVS) is proposed in this paper. The short-time Fourier transforms (STFT) of the sound pressure and vibration velocity measured by the AVS are first calculated, and a data set is established with the direction of arrivals (DOAs) estimated from all of the time–frequency points. Then, the density clustering algorithm is used to classify the DOAs in the data set, with which the number of the clusters and the cluster centers are obtained as the source number and the DOA estimations, respectively. In particular, the local confidence level is adopted to weigh the density of each DOA data point to highlight samples with the dominant sources and downplay those without, so that the differences in densities for the cluster centers and sidelobes are increased. Therefore, the performance of the density clustering algorithm is improved, leading to an improved source counting accuracy. Experimental results reveal that the enhanced source counting method achieves a better source counting performance than that of basic density clustering.
The target directing methods based on AVSs include the average sound intensity detector, the sound intensity flow DOA histogram, the cross-spectral DOA histogram, and so on [11,12]. AVSs can also be used in multi-channel arrays [13]. Each of these methods has its advantages and disadvantages. The histogram algorithm is widely used in engineering due to its better robustness compared with other algorithms. It can suppress narrowband and strong line spectrum interference and has a certain degree of multi-target resolution ability [14,15]. In [11], a DOA histogram based on sound intensity flow was used to estimate the DOA of targets, and the weighted DOA histogram of the line spectrum was used to realize the resolution and DOA estimation of multiple line spectrum targets. In [16], a DOA histogram was used to realize the resolution and DOA estimation of multiple wideband targets in the time-frequency domain and the instantaneous frequency domain of the Huang transform. In an AVS sea trial experiment based on the Argo buoy platform in [17], it was also found that the DOA histogram could distinguish two wideband targets, the test ship and the engineering ship, whose adjacent interval was about 80 degrees. In references [18,19], the windowed-disjoint orthogonality (WDO) of the target signal was further introduced to explain the mechanism of the wideband multi-target resolution ability of the DOA histogram.
The components of ship-radiated noise are very complex, including line spectrum, stationary ergodic random signals, and transient signals [20]. Therefore, it is considered to be broadband noise with energy distribution at any time and any frequency from the macro point of view. However, from the micro point of view, the energy intensity at different time-frequency points varies, which leads to a significant difference in the energy of different sources at some time-frequency points when multiple sources are synthesized at the sensor receiver, with one target playing a dominant role. This phenomenon is called the window-disjoint orthogonality (WDO) of signals and is widely used in blind signal separation for multi-source resolution and separation [21,22]. The higher the WDO characteristic of the signal at a certain time-frequency point, the greater the energy of the dominant signal is compared to the sum of the other signals, and the direction estimation result of that frequency point will be biased toward the target direction of the dominant signal. If significant numbers of TF points possess the WDO property, the DOA estimates in the DOA histogram will cluster around the actual DOAs of the sources, achieving the multi-target resolution of the histogram.
With multi-target resolution, it is possible to achieve source counting through multiple source detection. Traditional multiple source detection methods such as the Minimum Description Length (MDL) [23], the Akaike Information Criterion (AIC) [24], and Random Matrix Theory (RMT) [25] are overdetermined methods; they work well when the sensor channels outnumber the sources. However, for two-dimensional AVSs, these methods will fail when there are more than two sources, in which case the applicability of the AVS will be greatly limited.
As a result, an underdetermined method based on the density clustering algorithm was formed to solve the source counting problem for single AVSs in [18]. The density clustering algorithm was used to classify the DOAs of all the time-frequency points, where the cluster centers and the number of the clusters were obtained as the DOA estimations and the target number, respectively. However, the WDO is an inherent characteristic of the signal. With the increase in targets, the WDO decreases, which limits the multi-target resolution performance of the DOA histogram, and the performance of the density clustering algorithm deteriorates.
For this reason, a multi-source detection based on multimodal fusion was developed in [10]. The output of the AVS was decomposed into multiple modes by intrinsic time-scale decomposition (ITD), so that the source number in each mode decreased. For each mode, the WDO increased, leading to a better source counting accuracy. Therefore, the fused source counting performance was improved. However, the counting performance varied with the number of modes employed and it was difficult to determine the optimal number of employed modes.
In this paper, the local confidence level is adopted to weigh the density of each DOA data point to highlight the samples with a dominant source and downplay those without, so that the differences in densities for the cluster centers and sidelobes are increased. Therefore, the performance of the density clustering algorithm is enhanced. An analysis of lake trial data is conducted by comparing the proposed local-confidence-level-enhanced density clustering method with the methods in [10] and [18]. The results confirm the effectiveness of the proposed local-confidence-level-enhanced density clustering method in improving source counting performance. The source counting accuracy of the proposed method is better than that of the methods it is compared with. The probability distribution of the source counting result for the local-confidence-level-enhanced method is more concentrated around the true source number. As the SNR decreases, the proposed method undergoes slower degradation.
Model and Cross-Spectral DOA Histogram of Single-Vector Sensor
The output of an AVS contains information on both sound pressure and vibration velocity. The model of two-dimensional AVSs in the horizontal free sound field can be expressed as:

$$\begin{bmatrix} P(t) \\ V_x(t) \\ V_y(t) \end{bmatrix} = \sum_{n=1}^{N} \begin{bmatrix} 1 \\ \cos\alpha_n(t) \\ \sin\alpha_n(t) \end{bmatrix} x_n(t) + \mathbf{e}(t), \qquad (1)$$

where N is the number of sources, P(t) is the sound pressure component of the AVS output, and V_x(t) and V_y(t) are the components of vibration velocity on the two axes, respectively.
x_n(t) represents the signal radiated from the nth source to the AVS, α_n(t) indicates the angle of the nth sound source relative to the vector hydrophone (the x axis is the zero-degree orientation), e(t) = [n_p(t), n_x(t), n_y(t)]^T is the noise vector, and n_p(t), n_x(t), and n_y(t) are noises of the sound pressure, x-axis, and y-axis vibration velocities, respectively. By taking the short-time Fourier transform (STFT) of each channel of the mixture, the AVS model can be expressed in the time-frequency domain as:

$$\begin{bmatrix} P(\omega,m) \\ V_x(\omega,m) \\ V_y(\omega,m) \end{bmatrix} = \sum_{n=1}^{N} \begin{bmatrix} 1 \\ \cos\alpha_n \\ \sin\alpha_n \end{bmatrix} X_n(\omega,m) + \mathbf{E}(\omega,m), \qquad (2)$$

where ω and m are the frequency bin and time frame indices, respectively. P(ω, m), V_x(ω, m), V_y(ω, m), X_n(ω, m), and E(ω, m) are the STFTs of P(t), V_x(t), V_y(t), x_n(t), and e(t), respectively. The DOA estimation at each time-frequency point can be obtained by:

$$\theta(\omega,m) = \angle\left[\operatorname{Re}\{P(\omega,m)V_x^{*}(\omega,m)\} + j\,\operatorname{Re}\{P(\omega,m)V_y^{*}(\omega,m)\}\right], \qquad (3)$$

where ∠ represents the phase angle, and V_x^*(ω, m) and V_y^*(ω, m) represent the complex conjugates of V_x(ω, m) and V_y(ω, m), respectively. The DOA histogram can be obtained by counting the estimation results of all time-frequency points. Supposing that the azimuth interval of the histogram statistics is Δθ, we count the number of DOA estimation results in the interval [θ − Δθ/2, θ + Δθ/2]: when θ − Δθ/2 < θ(ω, m) ≤ θ + Δθ/2, the amplitude of the histogram at the corresponding position is increased by 1:

$$H(\theta) = \sum_{\omega}\sum_{m} \mathbb{1}\big[\theta - \Delta\theta/2 < \theta(\omega,m) \le \theta + \Delta\theta/2\big], \qquad (4)$$

where $\mathbb{1}[\cdot]$ denotes the indicator function.
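As an illustration, a minimal Python sketch of Equations (3) and (4) is given below. It assumes the cross-spectral estimator reconstructed above and hypothetical channel signals p, vx, vy; the function and variable names are illustrative only, not part of the original paper.

import numpy as np
from scipy.signal import stft

def doa_histogram(p, vx, vy, fs, nperseg=8192, bin_deg=1.0):
    # STFTs of the pressure channel and the two vibration velocity channels.
    _, _, P = stft(p, fs, nperseg=nperseg)
    _, _, Vx = stft(vx, fs, nperseg=nperseg)
    _, _, Vy = stft(vy, fs, nperseg=nperseg)

    # DOA estimate at every time-frequency point (Equation (3)).
    doa = np.degrees(np.angle(np.real(P * Vx.conj()) + 1j * np.real(P * Vy.conj())))

    # Count the estimates falling into azimuth bins of width bin_deg (Equation (4)).
    edges = np.arange(-180.0, 180.0 + bin_deg, bin_deg)
    hist, _ = np.histogram(doa.ravel(), bins=edges)
    return doa, hist, edges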
Density-Clustering-Based Source Counting
The density clustering algorithm depends on two important hypotheses: (1) the density for the center of a cluster is higher than that of the neighboring points; (2) the minimum distance between the center of a cluster and a higher-density point is relatively large [26,27]. The data set Θ = [θ_1, θ_2, . . ., θ_K] is established with the DOA estimations obtained by Equation (3) in the time-frequency domain, where K is the number of the DOA estimations. Since the distance between two DOA estimations does not exceed 180°, the distance d_kl between each two samples in Θ is defined as:

$$d_{kl} = \min\big(|\theta_k - \theta_l|,\; 360^{\circ} - |\theta_k - \theta_l|\big), \qquad (5)$$

where θ_k and θ_l are two different samples in the data set Θ. They are angles limited within the interval of [−180°, 180°]. Then, the local density of each sample ρ_k is calculated as:

$$\rho_k = \sum_{l \neq k} \chi(d_{kl} - d_c), \qquad (6)$$

where χ(x) = 1 if x < 0 and χ(x) = 0 otherwise, and d_c > 0 represents the cut-off distance used to keep a region for each sample. ρ_k is the number of the samples within a limited distance of d_c from the sample θ_k, i.e., ρ_k is equal to the number of points that are closer than d_c to the sample θ_k. The more samples there are near the sample θ_k, the larger the local density of the sample θ_k is. d_c can be chosen so that the average number of neighbors is around 1 to 2% of the total number of points in the data set [28]. A sample is assigned to the same cluster as its nearest sample with a higher density only if their distance is smaller than d_c. The minimum distance δ_k between the sample θ_k and any other sample with a higher density is measured as:

$$\delta_k = \min_{l:\,\rho_l > \rho_k} d_{kl}. \qquad (7)$$

For the point with the highest density, we conventionally take δ_k as:

$$\delta_k = \max_{l} d_{kl}. \qquad (8)$$

The product γ_k = ρ_k δ_k, k = 1, 2, . . ., K is used as the feature, and its value for a cluster center sample is obviously larger than that for the other samples. Thus, the differences among the ordered feature sequence are used to find the samples with significantly larger features, whose number corresponds to the number of clusters or targets [29]. In this procedure, the features γ_k, k = 1, 2, . . ., K are sorted in descending order, i.e.,

$$\gamma_1 \ge \gamma_2 \ge \cdots \ge \gamma_K. \qquad (9)$$

Suppose that there are L targets in the detection range of the AVS. Since the features of the cluster center samples are much larger than those of the other samples, the first L features {γ_1, γ_2, . . ., γ_L} are significantly larger than the other K − L features {γ_{L+1}, γ_{L+2}, . . ., γ_K}. Consequently, the difference between γ_L and γ_{L+1} is relatively large. To obtain L, the differences in the ordered features are computed as:

$$\Delta\gamma_i = \gamma_i - \gamma_{i+1}, \quad i = 1, 2, \ldots, K-1. \qquad (10)$$

And the variance of the sequence {Δγ_i}_{i=n}^{K−1} is calculated as:

$$\sigma_n^2 = \operatorname{Var}\big(\{\Delta\gamma_i\}_{i=n}^{K-1}\big), \qquad (11)$$

where n = 1, 2, . . ., K − 2. Further, a second-order statistic S_n of the features is defined from Δγ_n and σ_n^2 (Equation (12)). Finally, the target (or cluster) number is estimated as:

$$\hat{L} = \arg\max_{n} S_n, \qquad (13)$$

and the L DOA samples with the L largest features are obtained as the final multi-target DOA estimation results.
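The counting procedure can be sketched as follows in Python. This is written for clarity rather than efficiency, and the decision rule at the end is a simplified stand-in: the exact second-order statistic S_n of Equation (12) is not reproduced in the extracted text, so each feature gap is merely normalized by the spread of the later gaps as an approximation.

import numpy as np

def density_peak_source_count(doas_deg, d_c=2.0, max_sources=10):
    theta = np.asarray(doas_deg, dtype=float)
    K = len(theta)

    # Circular distance between DOA samples, never exceeding 180 degrees (Equation (5)).
    diff = np.abs(theta[:, None] - theta[None, :])
    d = np.minimum(diff, 360.0 - diff)

    # Local density: number of samples closer than the cut-off distance d_c (Equation (6)).
    rho = (d < d_c).sum(axis=1) - 1  # subtract 1 to exclude the sample itself

    # Minimum distance to any sample of higher density (Equations (7) and (8)).
    delta = np.empty(K)
    for k in range(K):
        higher = rho > rho[k]
        delta[k] = d[k, higher].min() if higher.any() else d[k].max()

    gamma = np.sort(rho * delta)[::-1]    # ordered features (Equation (9))
    dgamma = gamma[:-1] - gamma[1:]       # feature differences (Equation (10))

    # Approximate decision statistic: each gap divided by the spread of the remaining gaps,
    # evaluated only over a plausible range of source numbers.
    S = np.array([dgamma[n] / (np.std(dgamma[n + 1:]) + 1e-12)
                  for n in range(min(max_sources, K - 2))])
    return int(np.argmax(S)) + 1          # estimated number of sources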
Local-Confidence-Level-Enhanced Density Clustering Algorithm
With the increase in sources, the WDO decreases, which limits the multi-target resolution performance of the DOA histogram. DOA estimations of different time-frequency points have different contributions to density clustering. For a certain time-frequency point, the larger the proportion of the dominant signal energy is, the closer the DOA estimation is to the truth value, and the greater the contribution this sample provides to the clustering. A local-confidence-level-based density enhancement algorithm is proposed in which the contribution of the samples with a high proportion of dominant signal energy is exaggerated, so that the accuracy of the clustering and the precision of the target number estimation are improved.
For a certain time-frequency point (ω, m) (red circle) in the STFT domain, define the rectangular area Ω_{ω,m} (dotted box) around it, as illustrated in Figure 1. The width and length of this area are l_m time points and l_ω frequency points, respectively. Therefore, there are l_ω l_m time-frequency points in the rectangular area Ω_{ω,m}. The local confidence Γ(ω, m) of this time-frequency point (ω, m) is estimated by performing a principal component analysis (PCA) [30] on the snapshot vectors Y(Ω_{ω,m}) in the region Ω_{ω,m}, which reflects the strength of the dominant signal at the time-frequency point (ω, m).
For each time-frequency point, a snapshot matrix Y(Ω_{ω,m}) is formed with all snapshots Y(ω, m) in the region Ω_{ω,m}. It is a 3 × l_ω l_m matrix with columns Y(ω, m), (ω, m) ∈ Ω_{ω,m}. A positive semidefinite complex Hermitian matrix is then constructed with Y(Ω_{ω,m}) for the region Ω_{ω,m} as:

$$\mathbf{R}(\omega,m) = \mathbf{Y}(\Omega_{\omega,m})\,\mathbf{Y}^{H}(\Omega_{\omega,m}). \qquad (14)$$

For a two-dimensional AVS, the rank of the matrix R(ω, m) is 3. Performing an eigenvalue decomposition on R(ω, m), we obtain three real-valued positive eigenvalues of the matrix R(ω, m) in decreasing order, λ_1(ω, m) ≥ λ_2(ω, m) ≥ λ_3(ω, m). The local confidence level Γ(ω, m) of the time-frequency point (ω, m) is expressed in terms of these eigenvalues (Equation (15)). If there is only one dominant signal whose power is much larger than the noise at the time-frequency point (ω, m), λ_1 is the eigenvalue of the dominant signal, which is much larger than λ_2 and λ_3, the eigenvalues of the noise. Otherwise, if there are two similar signals whose power is much larger than the noise, λ_2 will be much larger than λ_3, and the local confidence will become much smaller than in the one-dominant-signal case. Therefore, it can be used to represent the WDO.
It is obvious that the local confidence level is proportional to the ratio of the dominant signal.
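A possible implementation of the local confidence computation is sketched below. The eigenvalue ratio returned at the end (largest eigenvalue over the trace) is one plausible choice and is not necessarily the exact expression of Equation (15); the function and parameter names are illustrative.

import numpy as np

def local_confidence(P, Vx, Vy, w, m, l_w=3, l_m=3):
    # Gather the 3 x (l_w * l_m) snapshot matrix Y(Omega_{w,m}) around the point (w, m).
    ws = slice(max(w - l_w // 2, 0), w + l_w // 2 + 1)
    ms = slice(max(m - l_m // 2, 0), m + l_m // 2 + 1)
    Y = np.vstack([P[ws, ms].ravel(), Vx[ws, ms].ravel(), Vy[ws, ms].ravel()])

    # Positive semidefinite Hermitian matrix of the region (Equation (14)) and its eigenvalues.
    R = Y @ Y.conj().T
    lam = np.sort(np.linalg.eigvalsh(R).real)[::-1]  # lambda_1 >= lambda_2 >= lambda_3

    # Close to 1 when a single source dominates the region, smaller when two sources compete.
    return lam[0] / (lam.sum() + 1e-12)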
Then, the local confidence is used to enhance the cross-spectral DOA histogram. In the original histogram statistics, the density of each sample is 1. The local-confidence-weighted DOA histogram with a sample density of Γ(ω, m) is:

$$H_{\Gamma}(\theta) = \sum_{\omega}\sum_{m} \Gamma(\omega,m)\,\mathbb{1}\big[\theta - \Delta\theta/2 < \theta(\omega,m) \le \theta + \Delta\theta/2\big]. \qquad (16)$$

Substituting Equation (4) into Equation (16) yields:

$$H_{\Gamma}(\theta) = \overline{\Gamma}(\theta)\,H(\theta), \qquad (17)$$

where Γ̄(θ) is the average local confidence level of the samples falling in the bin centered at θ.
The enhanced local density ρ_k of the kth sample is weighted with the corresponding local confidence level Γ(k) (Equation (18)). The minimum distance is calculated according to the weighted local density ρ_k, and the target number is estimated from the ordered feature sequence afterward. The remaining steps are the same as in Section 3.1.
With the enhanced local density, the density of the sample points that are closer to the cluster center will increase while the density of the sample points far away from the cluster center will decrease. Consequently, the performance of the density-clustering-based multi-target detection is improved. To summarize, the proposed source counting method based on the local-confidence-enhanced density clustering is depicted in Algorithm 1.
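One reading of the enhancement step, in which every neighbour within the cut-off distance contributes its local confidence to the density instead of a count of 1, is sketched below. Since the exact form of Equation (18) is not reproduced in the extracted text, this should be taken as an assumption rather than the authors' definition.

import numpy as np

def confidence_weighted_density(doas_deg, confidences, d_c=2.0):
    theta = np.asarray(doas_deg, dtype=float)
    conf = np.asarray(confidences, dtype=float)

    # Circular angular distance between all pairs of DOA samples.
    diff = np.abs(theta[:, None] - theta[None, :])
    d = np.minimum(diff, 360.0 - diff)

    # Each neighbour within d_c adds its local confidence level instead of 1.
    neighbours = (d < d_c) & ~np.eye(len(theta), dtype=bool)
    return neighbours @ conf  # enhanced local density for every DOA sample

The minimum distances, features, and counting rule of Section 3.1 can then be applied to this weighted density unchanged.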
Experimental Settings
The experimental data were collected in FuXian Lake. The experimental scenario is depicted in Figure 2, in which a two-dimensional co-vibration AVS is located in the middle of the lake. The sensitivity of the velocity channel decreased with increasing frequency (with a slope of −6 dB/octave), and there was a 90-degree phase difference between the vibration velocity channel and the sound pressure channel. The AVS was rigidly fixed about 4 m underwater on the side of the survey ship. The output signals of the AVS were collected and stored by a multi-channel synchronous data acquisition system. In the experiment, four ships (yachts) acted as moving targets located 1-2 km around the AVS. The ships were distributed in four directions of about 40°, 140°, 210°, and 330° at the beginning. The sampling rate was 48 kHz, the length of the STFT window was 8192 points, the frequency band was 0.5-8 kHz, the integral length of the azimuth histogram was 1 s (six windows), the sliding step was 8192 points (one window), and the rectangular area was l_m = l_ω = 3. The signals of the two vibration velocity channels were compensated in the frequency domain according to the slope and phase of the sensitivity after the STFT.
Resolution Performance
Figure 3 illustrates the temporal course of the cross-spectral DOA histogram computed with Equation (4) at each time bin. Four targets marked as <1>, <2>, <3>, and <4> are presented in the histogram. The red color stands for the peaks of the histogram, which indicate the locations of the targets. Continuous peaks in time compose the time-bearing course of a target, which is illustrated roughly with straight black, red, green, and blue dash-dot lines, respectively, for each target. As shown in Figure 3, the azimuths of targets 3 and 4 gradually become closer over time, so the spectral peaks of these two targets become hard to distinguish (the red color connects into one piece around time (d)). Since the signal from a target ship is weak when the ship decelerates or stops, the spectral peaks of targets 1 and 2 are unapparent or even submerged by the background during 38-40 s in the DOA histogram, as at the time points 44-50 s for target 3 and 50-53 s for target 4 (red color absent). Thus, the target courses have obvious discontinuity.

Figure 4 depicts the temporal course of the local-confidence-level-enhanced cross-spectral DOA histogram computed with Equation (16) at each time bin. It can be seen that the course of target 3 is significantly enhanced (the red color is more continuous in time and the peaks are higher than those in Figure 3), and the courses of the other targets are also enhanced relative to the original ones in Figure 3. This indicates that by using the local confidence level enhancement, the DOA courses of these targets become apparent, and both the resolution of the targets and the continuity of the courses are improved (the red color areas are thinner in the bearing axis direction and more clearly separated compared to Figure 3).

Figure 5 shows the comparative results between the cross-spectral DOA histogram and the enhanced cross-spectral DOA histogram at different time points corresponding to (a), (b), (c), and (d) in Figures 3 and 4, respectively. As shown in Figure 5, both the original and the enhanced histograms present spectral peaks at the directions of the targets, while the enhanced one possesses glaringly obvious peaks. This reveals that the enhanced histogram is able to estimate the azimuths of targets at a higher resolution. In particular, when the signal from the target ship is weak, such as target 3 in Figure 5c, the enhanced cross-spectrum can significantly upgrade its peak, which makes it possible to detect weak targets. Additionally, as depicted in Figure 5d, the distinguishability of adjacent targets (targets 3 and 4) is also promoted by the enhanced cross-spectrum.

Figure 6 explains the enhancement performance of the local confidence level with the DOA histogram at time (c). The average local confidences in the directions of the targets are larger than those in the other directions. As a result, the peaks of the targets become higher and sharper.
Source Counting Performance

To explain the advantage of the local-confidence-level-enhanced density clustering algorithm, Figure 7 illustrates the decision graphs of the basic and enhanced density clustering algorithms with the data in Figure 6. Due to the inconspicuousness of the peaks of targets 1 and 3 (i.e., points c and d in Figure 6a), the cluster centers (i.e., decision points c and d in Figure 7a) of the basic density clustering are close to the cluster halos, which are recognized as the background. Nevertheless, in the enhanced density clustering algorithm, the local density is enhanced with the corresponding local confidence level, which is represented by the size of the points in Figure 7. The local confidence level is higher around the cluster center than in the cluster halo. Therefore, the background becomes more clustered and the cluster centers are far away from the background, especially for targets 1 and 3 (i.e., points C and D in Figure 7b). Therefore, the enhanced local density leads to a clearer distinction between the cluster centers and the background in the decision graph, which makes it easier to achieve correct counting.

Figure 8 demonstrates the second-order statistic S_n of the basic and enhanced density clustering algorithms with the data in Figure 6. Obviously, the basic density clustering obtains the wrong counting result while the enhanced density clustering obtains the correct one. These results are consistent with the analysis of Figure 7.
Figure 9 illustrates the courses of the multi-target DOA estimations obtained by the basic density clustering and the local-confidence-level-enhanced density clustering. Compared with the courses estimated by the basic density clustering, the outliers in the courses estimated by the enhanced density clustering are significantly reduced by weighting the density with the local confidence level, and the continuity of the courses of the multi-target detection is remarkably improved, especially in Regions A and B. Moreover, the distribution of the DOA estimations of the targets is more compact for the enhanced density clustering.
Figure 10 depicts the results of the source number estimations obtained by the basic density clustering and the local-confidence-level-enhanced density clustering. Obviously, in Regions A, B, and C, the enhanced density clustering achieves the correct estimation results while the basic density clustering does not. In Region D, although both methods obtain incorrect results, the outcomes of the enhanced density clustering are closer to the true value. Targets 3 and 4 stop for a while during the periods of 44-50 s and 50-53 s, respectively. During these periods, there are only three targets in the field. As a result, the source number estimation results in the period before 44 s were used to calculate the source counting accuracy.

A comparison was conducted between the proposed local-confidence-level-enhanced density clustering method, the multimodal-fusion-based method in [10], and the basic density-clustering-based method in [18]. As shown in Figure 11, the basic density-clustering-based method obtained 48.82% accuracy of target number estimation, the multimodal-fusion-based method obtained an accuracy of 51.15% with four modes employed, while the local-confidence-level-enhanced method achieved 63.39% accuracy. The probability distribution of the source counting result for the local-confidence-level-enhanced method is more concentrated around the true source number.
In contrast, the local-confidence-level-enhanced density clustering dramatically im proves the accuracy of the source counting.In the enhanced method, the local density o the samples around the cluster centers with high local confidence is increased.Therefore the clusters become more compact, which improves the source counting accuracy.In order to evaluate the effect of the noise on the proposed method, an experimen with different SNRs was performed.The real SNR of the lake trial is unknown.It was assumed, empirically, to be 24 dB.The SNR was adjusted by adding simulated Gaussian white noise to the lake trial signal.The performance of the basic density-clustering-based method and the enhanced density-clustering-based method for 8, 12, 16, 20, and 24 dB (without simulated noise) is displayed in Figure 12 comparatively.As the SNR decreases the performance of both methods degenerates gradually.However, the enhanced density clustering-based method performs better than the basic density-clustering-based method and undergoes slower degradation.The multimodal-fusion-based method significantly reduces the probability of underestimating the errors of one and two sources and worsens the probability of overestimating errors of five sources.The WDO increases and sources are more detectable in each mode, so the probability of missed detections is reduced.However, a source signal may be divided into parts and be distributed in more than one mode.These parts may not be fully recognized as the same source during the fusion step, leading to an overestimation.As a result, compared to the basic density-clustering-based method, despite an advantage in the estimation accuracy, the probability distribution of estimated results is much better for the multimodal-fusion-based method.In addition, the estimation accuracy varies with the number of modes employed.Therefore, it can be seen that the presetting of the number of employed modes is quite important.
In contrast, the local-confidence-level-enhanced density clustering dramatically improves the accuracy of the source counting. In the enhanced method, the local density of the samples around the cluster centers with high local confidence is increased. Therefore, the clusters become more compact, which improves the source counting accuracy.
In order to evaluate the effect of noise on the proposed method, an experiment with different SNRs was performed. The real SNR of the lake trial is unknown; it was assumed, empirically, to be 24 dB. The SNR was adjusted by adding simulated Gaussian white noise to the lake trial signal. The performance of the basic density-clustering-based method and the enhanced density-clustering-based method for 8, 12, 16, 20, and 24 dB (without simulated noise) is displayed comparatively in Figure 12. As the SNR decreases, the performance of both methods degenerates gradually. However, the enhanced density-clustering-based method performs better than the basic density-clustering-based method and undergoes slower degradation.

To investigate the frequency bands of the targets in the experiment, each TF point was assigned to a target according to its dominant signal. If the DOA estimation of a certain TF point was within 2° of the nearest cluster center, it was assigned to this target. Otherwise, it is considered that there is no dominant signal at this time-frequency point. Figure 13 shows the assignment of the first 1.4 s. The frequencies of all four targets spread over the whole broadband of 500-8000 Hz randomly, and clearly cannot be divided into four different separated sub-bands. This result indicates that the WDO assumption holds in this experiment.
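The assignment rule used for Figure 13 can be sketched as follows. The 2° tolerance follows the description above; the function name and the -1 label for points without a dominant signal are illustrative choices rather than part of the original paper.

import numpy as np

def assign_tf_points(doa_tf_deg, centers_deg, tol=2.0):
    doa = np.asarray(doa_tf_deg, dtype=float).ravel()
    centers = np.asarray(centers_deg, dtype=float)

    # Circular angular distance from every TF-point DOA estimate to every cluster center.
    diff = np.abs(doa[:, None] - centers[None, :])
    circ = np.minimum(diff, 360.0 - diff)

    # Nearest center within the tolerance, otherwise -1 (no dominant signal at that point).
    nearest = circ.argmin(axis=1)
    return np.where(circ.min(axis=1) <= tol, nearest, -1)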
Conclusions
An underwater source counting method with local-confidence-level-enhanced density clustering is proposed. In this method, the STFT of the AVS output signal is calculated, and the DOAs of all time-frequency points form a data set. Then, the DOAs in the data set are classified with density clustering, where the cluster centers and the number of clusters are obtained as the DOA estimations and the target number, respectively. Finally, the local confidence level is applied to further enhance the performance of the density clustering algorithm. A lake trial was conducted to verify the local-confidence-level-enhanced source counting method, which achieves a better source counting performance than the multimodal-fusion-based method and the basic density-clustering-based method.
Author Contributions: All the authors contributed equally. All authors have read and agreed to the published version of the manuscript.
Figure 3
Figure3illustrates the temporal course of the cross-spectral DOA histogram computed with Equation (4) at each time bin.Four targets marked as <1>, <2>, <3>, and <4> are presented in the histogram.The red color stands for the peaks of the histogram, which indicate the locations of the targets.Continuous peaks in time compose the time-bearing course of a target, which is illustrated roughly with straight black, red, green, and blue dash-dot lines, respectively, for each target.As shown in Figure3, the azimuths of targets 3 and 4 gradually become closer over time, so the spectral peaks of these two targets become hard to distinguish (red color connects in one piece around time (d)).Since the signal from the target ship is weak when the ship decelerates or stops, the spectral peaks of targets 1 and 2 are unapparent or even submerged by the background during 38-40 s in the DOA histogram, as with the time points 44-50 s of target 3 and 50-53 s of target 4 (red color absent).Thus, the target courses have obvious discontinuity.
Figure 3 .
Figure 3.The temporal course of the cross-spectral DOA histogram of vector hydrophone from lake trial.((a), (b), (c) and (d) denote the time of 6.7 s, 11.6 s, 21.3 s and 36.0 s).
Figure 4
Figure 4 depicts the temporal course of the local-confidence-level-enhanced crossspectral DOA histogram computed with Equation (16) at each time bin.It can be seen that the course of target 3 is significantly enhanced (red color is more continuous in time and peaks are higher than those in Figure3), and the course of other targets is also enhanced relative to the original ones in Figure3.It indicates that by using the local confidence level enhancement, the DOA courses of these targets become apparent, and both the resolution of targets and the continuity of the courses are improved (red color areas are thinner in bearing axis direction and more clearly separated compared to Figure3).
Sensors 2023 , 16 Figure 4
Figure 4 depicts the temporal course of the local-confidence-level-enhanced crossspectral DOA histogram computed with Equation (16) at each time bin.It can be seen that the course of target 3 is significantly enhanced (red color is more continuous in time and peaks are higher than those in Figure3), and the course of other targets is also enhanced relative to the original ones in Figure3.It indicates that by using the local confidence level enhancement, the DOA courses of these targets become apparent, and both the resolution of targets and the continuity of the courses are improved (red color areas are thinner in bearing axis direction and more clearly separated compared to Figure3).
Figure 5 shows the comparative results between the cross-spectral DOA histogram and the enhanced cross-spectral DOA histogram at the time points corresponding to (a), (b), (c), and (d) in Figures 3 and 4, respectively. Both the original and the enhanced histograms present spectral peaks at the directions of the targets, while the enhanced one possesses much more prominent peaks. This shows that the enhanced histogram is able to estimate the azimuths of the targets at a higher resolution. In particular, when the signal from a target ship is weak, such as target 3 in Figure 5c, the enhanced cross-spectrum can significantly raise its peak, which makes it possible to detect weak targets. Additionally, as depicted in Figure 5d, the distinguishability of adjacent targets (targets 3 and 4) is also improved by the enhanced cross-spectrum.

Figure 6 explains the enhancement performance of the local confidence level using the DOA histogram at time (c). The average local confidence levels in the directions of the targets are larger than those elsewhere; as a result, the peaks of the targets become higher and sharper.

Figure 6. DOA histogram: (a) the cross-spectral DOA histogram and average local confidence level; (b) the enhanced cross-spectral DOA histogram. (The circles a–d and A–D denote the data samples considered as the cluster centers.)
Figure 7. The decision graphs of the basic and enhanced density clustering algorithms for the data in Figure 6: (a) basic density clustering; (b) enhanced density clustering. (The points a–d and A–D denote the data samples considered as the cluster centers.)

Figure 8 demonstrates the second-order statistic of the basic and enhanced density clustering algorithms with the data in Figure 6. The basic density clustering obtains the wrong counting result, while the enhanced density clustering obtains the correct one. These results are consistent with the analysis of Figure 7.

Figure 8. The second-order statistic S_n of the basic and enhanced density clustering algorithms for the data in Figure 6.

Figure 9 illustrates the courses of the multi-target DOA estimations obtained by the basic density clustering and by the local-confidence-level-enhanced density clustering. Compared with the courses estimated by the basic density clustering, the outliers in the courses estimated by the enhanced density clustering are significantly reduced by weighting the density with the local confidence level, and the continuity of the courses of the multi-target detection is remarkably improved, especially in Regions A and B. Moreover, the distribution of the DOA estimations of the targets is more compact for the enhanced density clustering.
Figure 9. Courses of the multi-target DOA estimations with the basic density clustering and the local-confidence-level-enhanced density clustering. (Compared to the basic density clustering, the continuity of the courses of the enhanced density clustering is remarkably improved in Regions A and B.)

Figure 10 depicts the results of the source number estimations obtained by the basic density clustering and the local-confidence-level-enhanced density clustering. In Regions A, B, and C, the enhanced density clustering achieves the correct estimation results while the basic density clustering does not. In Region D, although both methods obtain incorrect results, the outcomes of the enhanced density clustering are closer to the true value. Targets 3 and 4 stop for a while during the periods of 44–50 s and 50–53 s, respectively; during these periods, there are only three targets in the field. As a result, the source number estimation results in the period before 44 s were used to calculate the source counting accuracy.

Figure 10. Courses of the source number estimations obtained by the basic density clustering and the local-confidence-level-enhanced density clustering. (The enhanced density clustering achieves the correct estimation results in Regions A, B, and C, while the basic density clustering does not; the outcomes of the enhanced density clustering are closer to the true value in Region D.)
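The contrast between the basic and enhanced clustering above rests on how cluster centers are selected from the DOA samples via a decision graph of local density against separation distance. As a rough illustration only, the sketch below implements a generic density-peaks clustering step and shows one way a per-sample confidence weight could be folded into the density; the weighting scheme, variable names, and example data are assumptions for illustration, not the paper's Equation (16) or its exact formulation.

```python
import numpy as np

def density_peaks_centers(samples, d_c, weights=None, n_centers=4):
    """Pick cluster centers from 1-D DOA samples via density-peaks clustering.

    samples : array of DOA estimates (degrees) collected in one time window.
    d_c     : cutoff distance for the local density estimate.
    weights : optional per-sample confidence weights (assumed form of a
              'local confidence level' enhancement); None = basic clustering.
    """
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    if weights is None:
        weights = np.ones(n)

    # Pairwise angular distances between samples.
    dist = np.abs(samples[:, None] - samples[None, :])

    # Local density rho: (confidence-weighted) count of neighbours within d_c.
    rho = ((dist < d_c) * weights[None, :]).sum(axis=1) - weights

    # Delta: distance to the nearest sample of higher density
    # (the (rho, delta) pairs form the decision graph).
    delta = np.empty(n)
    for i in range(n):
        higher = rho > rho[i]
        delta[i] = dist[i, higher].min() if higher.any() else dist[i].max()

    # Centers are the samples with the largest rho * delta product.
    score = rho * delta
    return np.argsort(score)[::-1][:n_centers]

# Example: two strong targets and one weak target in a noisy window.
rng = np.random.default_rng(0)
doa = np.concatenate([rng.normal(40, 1, 80), rng.normal(75, 1, 60),
                      rng.normal(120, 1, 10), rng.uniform(0, 180, 30)])
conf = np.ones(doa.size)
conf[140:150] = 5.0   # hypothetical confidence boost for the weak target
print(doa[density_peaks_centers(doa, d_c=3.0, weights=conf, n_centers=3)])
```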
Figure 11. The estimated source number histogram of the basic density-clustering-based method, the multimodal-fusion-based method, and the enhanced density-clustering-based method for the experiment.

Figure 13. Frequency distribution of the targets in the time–frequency domain.
Yeast Fermentate Prebiotic Ameliorates Allergic Asthma, Associating with Inhibiting Inflammation and Reducing Oxidative Stress Level through Suppressing Autophagy
Methods: Ovalbumin (OVA) was used to induce allergic asthma in mice following administration of YFP for one week, and the lung tissues, bronchoalveolar lavage fluid (BALF), and feces were collected. The pathological state, tight-junction proteins, inflammatory and oxidative stress-associated biomarkers, and the TLRs/NF-κB signaling pathway of the lung tissues were evaluated by HE staining, immunofluorescence, ELISA, and Western blot, respectively. RT-PCR was used to test oxidative stress-associated genes. Leukocyte counts in BALF and the intestinal microbiota were also analyzed using a hemocytometer and 16S rDNA sequencing, respectively. Results: YFP ameliorated the lung injury of the mouse asthma model by inhibiting peribronchial and perivascular infiltration of eosinophils and increasing tight-junction protein expression. YFP reversed the OVA-induced change in the number of BALF leukocytes and in the expression of inflammation-related genes, and reversed OVA-induced activation of the TLRs/NF-κB signaling pathway. YFP ameliorated the level of oxidative stress in the lung of the mouse asthma model by inhibiting MDA and promoting the protein levels of GSH-PX, SOD, and CAT and the expression of oxidative stress-related genes. ATG5, Beclin1, and LC3BII/I were significantly upregulated in asthma mice and were greatly suppressed by the introduction of YFP, indicating that YFP ameliorated the autophagy in the lung of the mouse asthma model. Lastly, the distribution of bacterial species was slightly changed by YFP in asthma mice, with a significant difference in the relative abundance of six major bacterial species between the asthma and YFP groups. Conclusion: Our research showed that YFP might exert antiasthmatic effects by inhibiting airway allergic inflammation and reducing the oxidative stress level through suppressing autophagy.
Introduction
Allergic asthma is a common chronic inflammatory respiratory disease with high morbidity and mortality worldwide. According to the World Health Organization (WHO) report in 2015 [1], approximately 0.3 billion patients are diagnosed with allergic asthma. Allergic asthma is mainly clinically characterized by intermittent, reversible airway obstruction and bronchial hyperresponsiveness [2]. Although numerous investigations have explored the pathogenesis of allergic asthma over the past decades, its aetiology and pathogenesis remain unclear, which prevents pharmaceutical companies from developing effective targeted drugs and clinicians from diagnosing it accurately [3].
Airway allergic inflammation is regarded as one of the leading theories on allergic asthma [4]. As allergic asthma develops, large amounts of inflammatory cells infiltrate the lung tissues in pathological biopsies of both clinical allergic asthma patients and experimental animal models, including granulocytes [5], mastocytes [6], macrophages [7], dendritic cells [8], and T cells and B cells [9]. The NF-κB signaling pathway is reported to be involved in inflammatory activation through several axes, including the TLR4/NF-κB [10], CX3CR1/NF-κB [11], p120/NF-κB [12], and TRAF6/NF-κB [13] signal pathways. Hong et al. [14] reported that inhibition of bromodomain-containing protein 4 alleviated matrix degradation by enhancing autophagy and suppressing NLRP3 inflammasome activity through regulating NF-κB signaling in nucleus pulposus cells. Besides, Qi et al. [15] reported that MSTN attenuated cardiac hypertrophy through the inhibition of excessive cardiac autophagy by blocking the AMPK/mTOR and miR-128/PPARγ/NF-κB signal pathways. These studies reveal the critical roles of NF-κB in the regulation of inflammation.
Furthermore, mice with knockout of the autophagy gene ATG5 in dendritic cells are more susceptible to steroid-tolerant neutrophilic airway inflammation [16], indicating that autophagy might be involved in the development and progression of allergic asthma. Autophagy is a relatively conserved process for the degradation of cellular materials, such as damaged organelles or reactive oxygen species (ROS) [17]. Recent studies reveal a high correlation between nucleotide polymorphisms of ATG5/7 and the development of asthma in children [18] and adults [19]. Besides, more autophagic vacuoles are observed in clinical pathological biopsies of allergic asthma patients [20]. Excessive production of ROS induced by the aggravation of autophagy in the tissues further contributes to oxidative stress, while oxidative stress could be suppressed by 3-MA, an autophagy inhibitor, in a murine allergic asthma model [21]. Poon et al. also reported an important role of autophagy-regulated oxidative stress in the development and progression of asthma [20]. Therefore, investigations of the oxidative stress induced by autophagy should help us better understand the pathogenesis of allergic asthma.
Yeast Fermentate Prebiotics (YFP), a group of live microorganisms, play beneficial roles in maintaining the health of the host [22], which could be associated with balancing the microbial community structure, inducing the degradation of antigens [23], and regulating immunity [24]. Recently, YFP was reported to exert therapeutic effects in the treatment of allergic asthma [25][26][27][28][29]. Besides, prebiotics were reported to regulate the NF-κB signal pathway in colitis [30] and diabetes [31] by modulating the gut microbiota. In the present study, the antiasthma effects of YFP and the underlying mechanism were investigated to provide a fundamental basis for the potential therapeutic application of YFP against allergic asthma.
Animals and Allergic Asthma Model
Twenty-four 6-week-old male BALB/c mice were purchased from Beijing Vital River Laboratory Animal Technology Co., Ltd. The mice were divided into three groups: the control group, the asthma group, and the asthma+YFP group. The mice in the asthma+YFP group were orally administered YFP (1 × 10^9 CFU/day) from day 0 to day 6, while the other mice received oral administration of normal saline. Ovalbumin (OVA) was used to establish the murine asthma model according to a modification of the methods proposed by Yu et al. [29]. Briefly, the mice in the asthma group and the asthma+YFP group received an intraperitoneal injection of 20 μg OVA emulsified in 2.25 mg aluminum hydroxide in a total volume of 100 μL on day 7 and day 14 and inhaled 1% OVA through an ultrasonic sprayer (Nescosonic UN-511, Alfresa, Osaka, Japan) for three days from day 21. Normal saline was administered orally instead of OVA to the control mice. On day 23, all the mice were sacrificed for the collection of lungs and feces. The lungs were flushed twice with cold 0.5% fetal bovine serum in 1 mL PBS, and bronchoalveolar lavage fluid (BALF) was obtained for leukocyte counts using a hemocytometer (Thermo) after lavage and centrifuged at 2000 × g at 4°C for 5 min. We declare that all animal experiments involved in this manuscript were authorized by the ethical committee of The Second Xiangya Hospital of Central South University and carried out according to the guidelines for the care and use of laboratory animals as well as to the principles of laboratory animal care and protection.
Hematoxylin and Eosin (HE) Staining
The lungs were rinsed with sterile water for three hours, dehydrated through 70%, 80%, and 90% ethanol solutions successively, and then immersed in a 1:1 mixture of ethanol and xylene. After 15 min of incubation, the tissues were transferred to xylene for 15 min, and this step was repeated until the tissues appeared transparent. Subsequently, the tissues were embedded in paraffin, sectioned, and stained with hematoxylin and eosin (HE). HE-stained tissue sections were analyzed under a microscope (Olympus).
Real-Time Polymerase Chain Reaction (RT-PCR).
Total RNA of the lungs was extracted using a TaKaRa MiniBEST Universal RNA Extraction Kit (TaKaRa, Dalian, China) according to the manufacturer's instructions and quantified with a NanoDrop spectrophotometer (NanoDrop Technologies, Wilmington, DE). Complementary DNA was generated with a specific RT primer. RT-PCR was performed with SYBR Premix Ex TaqTM (Tli RNaseH Plus) (TaKaRa, Dalian, China) on an Applied Bio-Rad CFX96 Sequence Detection System (Applied Biosystems). The expression levels of GPX1-4, CAT, SOD1, SOD2, and UCP2 were determined from the threshold cycle (Ct), and relative expression levels were calculated using the 2^−ΔΔCt method after normalization to the expression of U6 small nuclear RNA. The expression level of GAPDH in the tissues was taken as the negative control. Three independent assays were performed. The information on the primers is shown in Table 1.
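For readers less familiar with the 2^−ΔΔCt calculation used above, the following minimal sketch shows the arithmetic on hypothetical Ct values; the gene names and numbers are illustrative only and do not come from the study.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative mRNA level by the 2^-ddCt method.

    ct_target      : Ct of the gene of interest in the treated sample
    ct_ref         : Ct of the reference gene (e.g., U6) in the same sample
    ct_target_ctrl : Ct of the gene of interest in the control sample
    ct_ref_ctrl    : Ct of the reference gene in the control sample
    """
    d_ct_sample = ct_target - ct_ref              # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control            # compare with control group
    return 2.0 ** (-dd_ct)

# Hypothetical example: an oxidative-stress gene in an asthma-group lung
# versus a control-group lung.
print(relative_expression(ct_target=26.1, ct_ref=18.0,
                          ct_target_ctrl=24.3, ct_ref_ctrl=18.2))  # ~0.25
```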
2.5. Immunofluorescence. The BALF cells were incubated with primary rabbit anti-ZO-1, anti-Claudin1, anti-Claudin4, and anti-Occludin antibodies (OmnimAbs, 1:1000) overnight at 4°C. After being washed three times with PBS, the cells were incubated with a secondary Cy3-conjugated anti-rabbit IgG (Abcam, 1:200) for 30 min at room temperature. DAPI was added to stain the nuclei for 5 min, and 50% glycerin was used as the mounting medium. Stained cells were photographed under a fluorescence microscope (Olympus, Tokyo, Japan). For Western blotting, a horseradish peroxidase-conjugated antibody against rabbit IgG (1:5000, Abcam, USA) was used as the secondary antibody; blots were incubated with ECL reagents (Beyotime, Jiangsu Province, China) and exposed on a Tanon 5200-multi imager to detect protein expression. Three independent assays were performed.
YFP Ameliorated the Lung Injury of the Mouse Asthma Model. HE staining was used to check the pathological state of the lung tissues, and the immunofluorescence assay was used to determine the expression levels of tight junction-related proteins. As shown in Figure 1(a), significant peribronchial and perivascular infiltrations of eosinophils were observed in the asthma group compared to the control, and these were significantly suppressed by YFP. ZO-1, Claudin1, Claudin4, and Occludin were significantly downregulated in the asthma group compared to the control, and their expression was significantly promoted by the treatment with YFP (Figures 1(b)-1(d)).
To further evaluate the effect of YFP on the inflammation, the expression of related proteins in the lung tissue was detected by Western blot. As shown in Figures 2(g) and 2(h), the expression levels of NF-κB, p-NF-κB, TLR1, TLR2, TLR3, TLR4, and Myd88 were significantly promoted in the asthma group compared with the control group and were inhibited by the treatment with YFP (**P < 0.01 vs. control, ***P < 0.001 vs. control, and ###P < 0.001 vs. asthma).
3.3. YFP Ameliorated the Level of Oxidative Stress in the Lung of the Mouse Asthma Model. Oxidative stress-related factors were detected in the BALF by ELISA to evaluate the effect of YFP on oxidative stress. Compared to the controls, MDA was significantly elevated in the asthma group, and this increase was inhibited by the introduction of YFP (Figure 3(a)), while the decreased production of GSH-PX, SOD, and CAT in the asthma group was significantly restored by the treatment with YFP. The expression of UCP2 was significantly suppressed in the asthma group compared to the control group and was promoted in the asthma+YFP group (**P < 0.01 vs. control, ***P < 0.001 vs. control, and ###P < 0.001 vs. asthma).
YFP Ameliorated the Autophagy in the Lung of the Mouse Asthma Model.
To evaluate the effects of YFP on autophagy in the lung tissues induced by asthma modeling, Western blot was used to determine the expression levels of autophagy-related proteins. As shown in Figure 4, ATG5, Beclin1, and LC3BII/I were significantly upregulated in asthma mice compared to control, and these were greatly suppressed by the introduction of YFP (**P < 0.01 vs. control, ***P < 0.001 vs. control, and ###P < 0.001 vs. asthma).
The Distribution of Bacterial Species Was Slightly Changed by YFP in Asthma Mice. 16S rDNA sequencing analysis was performed to explore the effect of YFP on the gut microbiota of allergic asthma mice. As shown in Figure 5(a), the Venn diagram revealed that a total of 1045 distinct genera were identified upon YFP treatment. These observations indicated that the YFP treatment increased the bacterial diversity in the gut of the animals, although not statistically significantly. Figures 5(b)-5(d) show the microbial richness (Chao1) analysis, alpha diversity (Shannon) analysis, and PCoA analysis, respectively; however, no significant difference was observed between the asthma group and the YFP-treated group. As shown in Figure 5(e) and Table 2, the major differences in the relative abundance of bacterial species in the feces of mice were observed between the control group and the asthma group, and the top 15 species with the highest abundance are listed. Interestingly, except for Akkermansia (higher in the control group), the relative abundances of Prevotella, Oscillospira, Helicobacter, Coprococcus, Ruminococcus, Bacteroides, Flexispira, Odoribacter, and Turicibacter in the asthma mice were significantly elevated compared to control (*P < 0.05 vs. control). With the treatment of YFP, the relative abundances of Oscillospira, Helicobacter, Coprococcus, Ruminococcus, Flexispira, and Odoribacter were greatly decreased (#P < 0.05 vs. asthma).
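As a side note on the diversity metrics referenced above, the following sketch shows how the Shannon index and the Chao1 richness estimator are commonly computed from a per-sample table of genus counts; it is a generic illustration, not the pipeline used in the study, and the example counts are made up.

```python
import numpy as np

def shannon_diversity(counts):
    """Shannon index H' = -sum(p_i * ln(p_i)) over observed taxa."""
    counts = np.asarray([c for c in counts if c > 0], dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def chao1_richness(counts):
    """Chao1 = S_obs + F1*(F1-1) / (2*(F2+1)); F1/F2 = singletons/doubletons."""
    counts = np.asarray(counts)
    s_obs = int((counts > 0).sum())
    f1 = int((counts == 1).sum())
    f2 = int((counts == 2).sum())
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

# Made-up genus counts for one fecal sample (reads per genus).
sample = [120, 85, 40, 22, 9, 3, 1, 1, 2, 0, 0]
print(shannon_diversity(sample), chao1_richness(sample))
```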
Discussion
Airway allergic inflammation is reported to be one of the pathological bases of allergic asthma [32]. Large amounts of inflammatory immune cells are recruited into the lung tissues when an allergic response occurs and release inflammatory cytokines that accelerate airway allergic inflammation, thereby aggravating the allergic asthma process [33]. Therefore, it is of great importance to inhibit airway inflammation in allergic asthma. In the present study, the antiasthma effect of YFP was investigated. HE staining showed that YFP ameliorated the lung injury caused by asthma modeling in mice. The tight-junction proteins of the lung tissues are closely related to the pathological state of allergic asthma, represented by the expression levels of ZO-1, Claudin1, Claudin4, and Occludin [34,35]. After the administration of YFP, the tight junction of the lung tissue was greatly improved. These results indicated that the symptoms of allergic asthma in mice were significantly improved by the administration of YFP. Besides, YFP suppressed inflammation in the lung: by exploring the state of the inflammatory signal pathways in the lung tissues following treatment with YFP, we found that YFP greatly inhibited the TLR/NF-κB signal pathway in the lung tissue. To further investigate the possible mechanism underlying the inflammation-inhibitory effects of YFP, the distribution of the gut microbiota in the feces of mice was explored. Salameh et al. [36] reported that the gut microbiota plays a great role in the pathogenesis of allergic asthma, to which the inflammatory factors might be the mediators. In the present study, we found that although the relative abundance of the top 3 bacterial species was not reversed by YFP in the asthma mice, a significant inhibitory effect on the relative abundance of the top 4-8 bacterial species was achieved by the treatment of YFP, indicating a minor impact of YFP on the distribution of the gut microbiota of the asthma mice. However, more evidence of the positive correlation between the antiasthma property of YFP and the change of gut microbiota distribution should be provided in our future work. In the present study, only one dosage of YFP was applied, and we suspect that the change of gut microbiota distribution, such as the relative abundance of the bacterial species, α-diversity index, and β-diversity index, might be enlarged if the dosage were increased, which will be verified in our subsequent work. Besides, in our future work, the inflammation profile both in the central nervous system and in the peripheral region will also be described to analyze the change of inflammation state before and after the treatment of YFP, which might bring direct evidence to explain the antiasthma property of YFP. Oxidative stress is also a pathological manifestation of allergic asthma, the representative indexes of which are MDA, GSH-PX, SOD, and CAT [37]. In the present study, the oxidative stress level in the asthmatic mice was significantly inhibited by the treatment of YFP. Silveira et al. [21] reported that autophagy could induce eosinophil extracellular trap formation and allergic airway inflammation in a murine asthma model by activating oxidative stress. Tsai et al.
[38] also reported that autophagy against oxidative stress-mediated apoptosis in normal and asthmatic airway epithelium is induced by complement regulatory protein CD46. In the present study, the expression level of autophagy-related proteins (ATG5, Beclin1, and LC3BII/I) was significantly suppressed by YFP. These data indicated that YFP might exert antiasthmatic effects by inhibiting oxidative stress through suppressing autophagy.
According to the data achieved in the present study, on the one hand, the inflammation induced in the murine asthma model could be alleviated by the treatment of YFP, which might be related to the regulatory effect of YFP on the TLR4/NF-κB signaling pathway. In our future work, the specific inhibitor against the TLR4/NF-κB signaling pathway will be introduced to verify the mechanism underlying the inhibitory effect of YFP against inflammation. On the other hand, oxidative stress in the lung tissues was ameliorated by YFP by regulating autophagy, which should also be verified by introducing the agonist of autophagy in our future work. As a result, the symptom of asthma in the murine model was significantly alleviated.
Collectively, our research showed that YFP might exert antiasthmatic effects by inhibiting airway allergic inflammation through regulating gut microbiota distribution and inhibiting oxidative stress levels through suppressing autophagy.
Data Availability
The data can be made available if requested by the editor.
Conflicts of Interest
The authors declare there are no conflicts of interest regarding the publication of this paper.
Climate Change, Crop Yields, and Undernutrition: Development of a Model to Quantify the Impact of Climate Scenarios on Child Undernutrition
Background: Global climate change is anticipated to reduce future cereal yields and threaten food security, thus potentially increasing the risk of undernutrition. The causation of undernutrition is complex, and there is a need to develop models that better quantify the potential impacts of climate change on population health. Objectives: We developed a model for estimating future undernutrition that accounts for food and nonfood (socioeconomic) causes and can be linked to available regional scenario data. We estimated child stunting attributable to climate change in five regions in South Asia and sub-Saharan Africa (SSA) in 2050. Methods: We used current national food availability and undernutrition data to parameterize and validate a global model, using a process-driven approach based on estimations of the physiological relationship between a lack of food and stunting. We estimated stunting in 2050 using published modeled national calorie availability under two climate scenarios and a reference scenario (no climate change). Results: We estimated that climate change will lead to a relative increase in moderate stunting of 1–29% in 2050 compared with a future without climate change. Climate change will have a greater impact on rates of severe stunting, which we estimated will increase by 23% (central SSA) to 62% (South Asia). Conclusions: Climate change is likely to impair future efforts to reduce child malnutrition in South Asia and SSA, even when economic growth is taken into account. Our model suggests that to reduce and prevent future undernutrition, it is necessary to both increase food access and improve socioeconomic conditions, as well as reduce greenhouse gas emissions.
Hunger and undernutrition are pervasive, thought to be worsening in absolute terms, and are major contributors to global ill health [Black et al. 2008; Food and Agricultural Organization of the United Nations (FAO) 2009]. More than one billion people are undernourished (FAO 2009), and about a third of the burden of disease in children < 5 years of age is attributable to undernutrition (Black et al. 2008). Economic growth is anticipated by many to reduce future undernutrition (Smith and Haddad 2002), although recent observations do not support this assumption (Subramanyam et al. 2011).

Global food security depends on a range of factors (Schmidhuber and Tubiello 2007), with cereal production playing a major role (Parry et al. 2009). Data suggest that global per capita cereal production plateaued during the 1980s and has since declined (Magdoff and Tokar 2010), despite production increases in some regions (FAO 2011). Further, with economic growth, dietary preferences tend toward greater meat consumption, placing greater demands on cereal production to provide animal feed (Msangi and Rosegrant 2011).

Concern is growing that efforts to reduce undernutrition in the coming decades may be threatened by global climate change (Nelson et al. 2010; Parry et al. 2009; Schmidhuber and Tubiello 2007). Scientific assessments indicate that warming will have an overall negative impact on major cereal yields in low-latitude areas, although yields may increase in some high-latitude areas (Easterling et al. 2007). Climate change could place an additional 5-170 million people "at risk of hunger" by the 2080s (Parry et al. 1999, 2004; Rosenzweig and Parry 1994). Food security is now one of the leading concerns associated with anthropogenic climate change (Parry et al. 2009).
A number of terms are used to describe hunger and undernutrition. "Undernourishment" is not a health outcome per se; it is a theoretical model-based estimate of access to calories developed by the FAO and is defined as the proportion of people "whose dietary energy consumption is continuously below a minimum dietary energy requirement for maintaining a healthy life and carrying out light physical activity with an acceptable minimum body weight for attained height" (FAO 2010). That is, it has one final cause: a lack of food. "At risk of hunger" is synonymous with undernourishment.

"Undernutrition" refers to a physical state and is measured using (among other things) anthropometric indices such as stunting (height-for-age) and underweight (weight-for-age) [World Health Organization (WHO) 2010]. A lack of food—that is, undernourishment—is one of the many causes of undernutrition, which also include poor water and sanitation provision, low levels of women's education, repeated episodes of infectious diseases, and low birth weight [United Nations Children's Fund (UNICEF) 1990; for more details on causes, see Black et al. 2008; UNICEF 1990]. Checkley et al. (2008), for example, estimated that 25% [95% confidence interval (CI): 8, 38%] of irreversible stunting at 24 months of age could be attributed to having had five or more episodes of diarrhea. Although it can be argued that undernutrition itself is not a health outcome, undernutrition can be directly linked to increased risk of death and poor health (Black et al. 2008). Additionally, child undernutrition has long-term consequences for the health and earning potential of adults (Victora et al. 2008).
To quantify future health burdens, it is preferable to model undernutrition (which refers to a physical state and accounts for complex causation) rather than undernourishment (which is a theoretical concept). They are often poorly correlated (Klasen 2006; Svedberg 2002), and this suggests that undernourishment is a poor proxy for undernutrition. The WHO concluded that (using a number of simplifying assumptions) undernutrition represented a significant proportion of the total burden of disease estimated to be attributable to climate change in 2000 (McMichael et al. 2004). Only one group has provided more recent quantitative estimates of future undernutrition attributable to climate change. Nelson et al. (2009) estimated future child underweight using a model based on that of Smith and Haddad (2000), which is driven by per capita calorie availability and socioeconomic indicators: the ratio of female to male life expectancy, female enrollment in secondary education, and access to improved water supply. Future per capita calorie availability was estimated by modeling crop yield and global food trade. All other nonclimate factors were assumed to stay constant over time (i.e., unchanged from baseline values). These assumptions are likely to have led to an overestimate of the future burden attributable to climate change because this approach assumes that living conditions in countries will improve little over the next 40 years. This is not consistent with historical trends; between 1970 and 1995, 43% of the reduction in child underweight has been attributed to improved female education, compared with 26% for increased food availability and 19% from improved water access (Smith and Haddad 2000).
More recently, the same group produced updated estimates for a broader range of scenarios using a similar strategy (Nelson et al. 2010). Based on expert opinion, the socioeconomic variables driving the underweight model were varied with time but were considered constant across three socioeconomic scenarios broadly representing pessimistic, business-as-usual, and optimistic economic growth.

Despite the importance of socioeconomic influences on health, the data currently available for climate impact studies are largely limited to population and gross domestic product (GDP) projections that were created for estimating future greenhouse gas emission concentrations. At present, any modeling efforts must work within these constraints. However, attention is now being focused on creating a wider range of plausible socioeconomic scenarios for climate impact assessments (Moss et al. 2010).

We developed a parsimonious model for estimating future undernutrition attributable to global climate change, specifically due to its impacts on crop productivity. We then estimated the future impact of climate scenarios on undernutrition in children for five world regions in Africa and Asia in 2050 using previously published estimates of climate change-attributable changes in calorie availability from Nelson et al. (2009). [The more recent estimates (Nelson et al. 2010) are not included in our assessment because they were released after the completion of our project.]
Materials and Methods
We first describe the development and fitting of a model for estimating the prevalence of stunting. Second, we outline the process of estimating the proportion undernourished (PoU) using per capita calorie availability estimates from Nelson et al. (2009). Finally, we discuss the simulation process for estimating future undernutrition attributable to global climate change.

Model development. Our outcome of interest is stunting in children < 5 years of age, because this best captures the impact of conditions over the long term (Black et al. 2008). Children are considered moderately stunted if they are > 2 SDs below the mean expected height-for-age and severely stunted if > 3 SDs below the mean (de Onis and Blossner 2003).

Scenario data are limited essentially to future food availability and per capita GDP, and many causes of stunting cannot be explicitly modeled. We considered stunting to have two main causes, which we refer to as "food causes" and "nonfood causes." Food causes are represented as PoU, which accounts for climate change effects on calorie availability (via changes in crop productivity) and food access. [Stunting has food causes other than calories, e.g., micronutrient deficiencies (Black et al. 2008), but these are not represented in PoU, nor are they modeled in climate-crop models.] Nonfood causes are represented as a "black box cluster" of socioeconomic factors acting at various levels and represent the wide range of social and demographic causes of stunting, such as low female literacy and poor health care access (Frongillo et al. 1997). Nonfood causes are modeled using per capita GDP and the Gini coefficient for income distribution to generate a "development score," as described below.
The conceptual model is represented by two general equations:

y_ijk = α_k + β_k·x_ij + γ_k·w_ij + θ_k·x_ij·w_ij,  for every i, j; k = 2, 3,  [1]

y_ij1 = 1 − y_ij2 − y_ij3,  for every i, j; k = 1,  [2]

where y_ijk is the proportion of children < 5 years of age stunted in country i, in region j, at level k, where k is 1 if no/mild stunting, 2 if moderate stunting, or 3 if severe stunting; x_ij is food causes of stunting, represented by the PoU in country i, in region j; and w_ij is nonfood causes of stunting, represented by the "development score" (defined below) in country i, in region j. The parameters α_k, β_k, γ_k, and θ_k are to be determined: β_k is the physiological relation between undernourishment and stunting (details given below), γ_k relates the development score to stunting, θ_k relates the interaction between undernourishment and the development score to stunting, and α_k is the regression constant.

Equation 1 is a bilinear model because it is a linear function of the independent variables (x_ij and w_ij) and their product (x_ij·w_ij). After estimating moderate (y_ij2) and severe (y_ij3) stunting, we estimated the proportion not or mildly stunted (y_ij1) as described in Equation 2.
The "development score" is an indicator of the non food causes of stunting. It is driven by countrylevel projections of future per capita GDP and the baseline (i.e., most recent esti mate available) Gini coefficient (because no projections were available). The development score is scaled from 0 to 1; it equals 0 when socioeconomic conditions are optimal (in terms of avoiding under nutrition) and all under nutrition is attributable to food causes, and it equals 1 when non food causes are at their cur rent (baseline) global maximum [for additional information on development score calculations, see Supplemental Material, Annex 1 (http:// dx.doi.org/10.1289/ehp.1003311)].
To parameterize the equations, we assembled a global data set obtaining country-level undernourishment estimates from the FAO (FAO 2010), per capita GDP and Gini data from the World Bank Development Indicators (WBDI) database (World Bank 2010), and stunting data from the WHO's Global Database on Child Growth and Malnutrition (WHO 2010).

Stunting data were matched to undernourishment data to within a 1-year period. Per capita GDP and Gini coefficient estimates were matched as closely as possible to the stunting data year. The data set covered the period 1988-2008 and contained 186 records with complete data. Countries were included in the data set more than once if they had data for multiple years.
Fitting the model. We decided, a priori, to use a process-driven (theory-based) rather than a standard data-driven (statistical) approach to develop and parameterize the model equations. The purpose of the model is to describe plausible futures, so we designed it to be driven as much as possible by relationships that will be stable over time.

Of the two model variables, we assumed that food causes have a more stable relationship with stunting than do nonfood causes because food causes are physiologically related to stunting, and it is reasonable to assume that this relationship will hold over the next 50 years. In contrast, we assumed that nonfood causes—which we modeled using per capita GDP and the Gini coefficient—do not necessarily have a stable relationship with stunting because the relationship is mediated, at least partly, by social and political factors that may change over time. Therefore, when fitting our model, we first quantified the relationship between stunting and food causes and then considered socioeconomic factors.

We assumed that if someone had insufficient food, and nonfood causes of stunting were absent (i.e., socioeconomic conditions were optimal in terms of avoiding undernutrition), there would be a predictable risk of stunting; that is, we assumed the relationship between food intake and stunting is physiologically determined and holds globally. This assumption is supported by ample evidence that, at least until 6 years of age, all adequately nourished and optimally cared for children will have similar, predictable growth rates (WHO 2006). In addition to this food intake-related burden, if socioeconomic conditions are poor, there is an additional risk of stunting from nonfood causes and their interaction with food causes, for example, high rates of diarrhea associated with inadequate sanitation. We do not consider it probable that a country will lack sufficient food but otherwise have "optimal" socioeconomic conditions; our conception is theoretical.
Using the data set, we estimated the predictable but unknown physiologically based relationship between undernourishment and stunting at level k (β_k) as

β_k = min_{i,j} {y_ijk / x_ij}.

(The operator min_{i,j}{•} means the minimum of the argument in {•}.) This minimum proportion was obtained by finding the minimum value of the ratio of y_ijk to x_ij among all the countries in all regions, where, as defined above, y_ijk represents the proportion stunted < 5 years of age in country i, in region j, and stunting level k; and x_ij represents the proportion of the population undernourished in country i, in region j. Because it is unlikely that all stunting in a country is caused by food causes alone, our estimate of β_k will be an overestimate of the purely physiological relationship between food and stunting. In practice, because the minimum observed value may be too low because of data errors, we chose to use the 5th percentile of the distribution of y_ijk/x_ij as the best estimate of β_k and used the 1st and 10th percentiles as the boundaries of its plausible range (see "Estimating future stunting," below).
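A minimal sketch of this percentile-based estimate is shown below; the array contents are invented, and the percentile choices simply mirror the 1st/5th/10th percentiles described above.

```python
import numpy as np

def estimate_beta(stunting, undernourishment):
    """Estimate beta_k from country-level data.

    stunting         : proportions of children stunted at level k (y_ijk)
    undernourishment : proportions of the population undernourished (x_ij)
    Returns the 5th percentile of y/x as the central estimate, with the
    1st and 10th percentiles as the plausible range.
    """
    y = np.asarray(stunting, dtype=float)
    x = np.asarray(undernourishment, dtype=float)
    ratio = y / x
    low, central, high = np.percentile(ratio, [1, 5, 10])
    return central, (low, high)

# Invented example data for a handful of countries.
y_moderate = [0.30, 0.22, 0.41, 0.18, 0.35]
x_pou      = [0.45, 0.30, 0.60, 0.50, 0.40]
print(estimate_beta(y_moderate, x_pou))
```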
Once the above relationship was found, one-fifth of the data set (37 records) was randomly selected and reserved for model validation; the remainder (149 records) was used to parameterize the equations. (To obtain the best possible estimate, and considering that our method of estimation provides a rough approximation, we used the entire data set to estimate β_k.) We parameterized the equations in a stepwise manner. In the first step, we used β_k to attribute a proportion of stunting to food causes in all countries in the parameterization data set:

r_ijk = β_k·x_ij,

where r_ijk is the proportion of stunting attributable to food causes in country i, in region j, at level k.

In the second step, we attributed the remaining proportion of stunting to nonfood causes and the interaction between food and nonfood causes:

s_ijk = y_ijk − r_ijk,

where s_ijk is the proportion of stunting attributable to nonfood causes and the interaction between food and nonfood causes in country i, in region j, at level k. We then used linear methods to estimate the parameters (α_k, γ_k, θ_k) of the bilinear model:

s_ijk = α_k + γ_k·w_ij + θ_k·x_ij·w_ij.

The model was validated by comparing levels of stunting predicted by the model to observed stunting in the reserved portion of the data set (37 records).
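The two-step fit described above can be sketched as an ordinary least-squares problem. The snippet below is an illustrative reconstruction with synthetic data, not the authors' code, and it assumes the equation forms reconstructed above.

```python
import numpy as np

def fit_stunting_model(y, x, w, beta):
    """Two-step fit: subtract the food-attributable part, then regress the
    remainder on the development score and the interaction term.

    y    : observed stunting proportions for one level k, shape (n,)
    x    : proportion undernourished (PoU), shape (n,)
    w    : development score in [0, 1], shape (n,)
    beta : physiological food-stunting coefficient for this level
    Returns (alpha, gamma, theta).
    """
    y, x, w = map(np.asarray, (y, x, w))
    s = y - beta * x                       # part not explained by food causes
    design = np.column_stack([np.ones_like(w), w, x * w])
    (alpha, gamma, theta), *_ = np.linalg.lstsq(design, s, rcond=None)
    return alpha, gamma, theta

def predict_stunting(x, w, beta, alpha, gamma, theta):
    """Evaluate the bilinear model y = alpha + beta*x + gamma*w + theta*x*w."""
    return alpha + beta * x + gamma * w + theta * x * w

# Synthetic demonstration only.
rng = np.random.default_rng(1)
x = rng.uniform(0.1, 0.7, 100)
w = rng.uniform(0.2, 0.9, 100)
y = 0.02 + 0.35 * x + 0.2 * w - 0.1 * x * w + rng.normal(0, 0.01, 100)
print(fit_stunting_model(y, x, w, beta=0.35))
```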
For α_k, γ_k, and θ_k we used the standard errors of the estimates to describe the plausible range of their true values. We carried out our analysis with Stata (version 11; StataCorp, College Station, TX, USA).
Estimating future population undernourished. The model required estimates of future PoU with and without climate change. Calculation of PoU requires data for a) the coefficient of variation for within-population calorie distribution, b) the average minimum calorie requirements to avoid undernourishment in the population, and c) per capita calorie availability (FAO 2003). Because projection data for a) and b) are not available, we assumed they remain at baseline levels. For c), we used estimates made by Nelson et al. (2009). The two climate scenarios were used to address uncertainty in the climate system; the NCAR model is warmer and wetter than the CSIRO model. The global average increases in maximum temperature and precipitation over land by 2050 were 1.9°C and 10%, and 1.2°C and 2%, for the NCAR and CSIRO scenarios, respectively. For details of the assumptions in the crop modeling (e.g., carbon dioxide fertilization, irrigation, and adaptation responses), extrapolations to other food groups, and the trade model, see Nelson et al. (2009). For additional information on PoU estimation, see Supplemental Material, Annex 2 (http://dx.doi.org/10.1289/ehp.1003311).
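The undernourishment indicator referenced here is, in essence, the share of the population whose calorie intake falls below the minimum requirement, given a mean availability and a coefficient of variation. The sketch below shows one common way to compute such a figure under a lognormal intake distribution; the distributional assumption and the numbers are illustrative, not the FAO's exact implementation.

```python
import math

def prop_undernourished(mean_kcal, cv, min_kcal):
    """Share of people below min_kcal, assuming lognormal calorie intake.

    mean_kcal : per capita dietary energy supply (kcal/person/day)
    cv        : coefficient of variation of intake within the population
    min_kcal  : minimum dietary energy requirement (kcal/person/day)
    """
    sigma2 = math.log(1.0 + cv ** 2)           # lognormal params from mean/CV
    mu = math.log(mean_kcal) - sigma2 / 2.0
    z = (math.log(min_kcal) - mu) / math.sqrt(sigma2)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF

# Illustrative numbers only.
print(prop_undernourished(mean_kcal=2100, cv=0.25, min_kcal=1800))
```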
Estimating future stunting. The principal input to our simulation model was future country-level PoU derived from Nelson et al. (2009). We ensured within-scenario consistency by using the same GDP projections. To account for parameter uncertainty, we used a standard Monte Carlo approach. Each of α_k, γ_k, and θ_k was assumed to be normally distributed about its point estimate as defined by its respective standard error. β_k was assumed to be uniformly distributed between the 1st and 10th percentiles of the distribution of y_ijk/x_ij. This method produced probability density functions (PDFs) of future stunting.
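A minimal sketch of such a Monte Carlo step is given below; the parameter values, the rejection of out-of-range draws, and the names are placeholders consistent with the description above rather than the authors' actual simulation code.

```python
import numpy as np

def simulate_stunting(x, w, beta_lo, beta_hi, alpha, se_alpha,
                      gamma, se_gamma, theta, se_theta,
                      n_draws=100_000, seed=0):
    """Monte Carlo draws of stunting for one country and stunting level.

    x, w             : scalar PoU and development score for the scenario
    beta_lo, beta_hi : uniform bounds on beta (1st-10th percentile of y/x)
    alpha, gamma, theta and their standard errors define the normal draws.
    Draws outside (0, 1) are rejected, as in the text.
    """
    rng = np.random.default_rng(seed)
    b = rng.uniform(beta_lo, beta_hi, n_draws)
    a = rng.normal(alpha, se_alpha, n_draws)
    g = rng.normal(gamma, se_gamma, n_draws)
    t = rng.normal(theta, se_theta, n_draws)
    y = a + b * x + g * w + t * x * w
    return y[(y > 0) & (y < 1)]            # empirical PDF of plausible outcomes

# Placeholder parameter values for one hypothetical region.
draws = simulate_stunting(x=0.5, w=0.6, beta_lo=0.30, beta_hi=0.40,
                          alpha=0.02, se_alpha=0.01, gamma=0.25, se_gamma=0.05,
                          theta=-0.10, se_theta=0.04)
print(draws.mean(), np.percentile(draws, [2.5, 97.5]))
```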
We aimed to base each PDF on 100,000 estimates. We selected the first 100,000 estimates that were > 0 and < 1. By rejecting low and high estimates, we potentially introduced an upward or downward bias; to assess this, we quantified the proportion of rejected results (see Supplemental Material). Final estimates were produced at the regional level for South Asia and four regions in sub-Saharan Africa [SSA; central, east, south, and west; see Supplemental Material, Table 2 (http://dx.doi.org/10.1289/ehp.1003311)]. We aggregated stunting from the country to the regional level using population weighting. We ran the simulation using MATLAB (version 2009b; MathWorks, Natick, MA, USA).

Results

Table 1 summarizes the data set used to parameterize our model. The correlation coefficients between stunting and PoU were 0.16 and 0.19 for moderate and severe stunting, respectively. For univariate analysis of stunting and the development score, R² was 0.40 for moderate stunting and 0.45 for severe stunting; when PoU was added to these models, R² was unchanged. That is, using a data-driven approach, including PoU as an explanatory variable would not improve the model fit to estimate stunting in the present compared with using the development score alone. This supported our approach using a theory-based model that accounts for both food access and socioeconomic conditions. The model parameter estimates are shown in Table 2. The β parameter is an estimate of the assumed physiological relationship between a lack of food and stunting. Thus, the central estimate of β = 0.35 for moderate stunting suggests that for every 1% of the population who are undernourished, on average 0.35% of children < 5 years of age will be moderately stunted. Using the validation data set, the predicted and observed values are well correlated, with correlation coefficients of 0.78, 0.66, and 0.80 for no/mild, moderate, and severe stunting, respectively [for scatter plots, see Supplemental Material, Figure 1 (http://dx.doi.org/10.1289/ehp.1003311)].
Estimates of future proportions undernourished.
The proportions of regional populations projected to be undernourished in 2050 are shown in Table 3. Countries for which complete data were not available were excluded [see Supplemental Material, Table 2 (http://dx.doi.org/10.1289/ehp.1003311)]. The estimates suggest that climate change will increase PoU compared with a future without climate change, and also that climate change and population growth will increase it to above current levels in all regions.
Projections of stunting in 2050.
We estimate that climate change will increase stunting in all regions (Table 3), with severe stunting increasing by 30-50%. The estimated relative change in stunting was smaller than the estimated relative change in undernourishment. Figure 1 shows the uncertainty in the stunting estimates as histograms of probabilistic outcomes derived from the Monte Carlo simulation.

We compared our stunting estimates with underweight estimates made by Nelson et al. (2009) (Table 4). The results are not directly comparable, but we have assumed that the ratio of underweight to stunting at baseline remains constant in the future. The final column shows this ratio as a regional, population-weighted average calculated using the most recent estimates of underweight and stunting (FAO 2010).
Discussion
We have developed the first global model to estimate the impact of climate change on future stunting, a more relevant outcome measure for human population health than "population at risk of hunger" (i.e., undernourishment) or underweight. Additionally, our model distinguishes moderate from severe stunting, which bring substantially different health risks (Black et al. 2008). Based on our conservative assumptions, the model suggests that climate change will have significant effects on future undernutrition, even when the beneficial effects of economic growth are taken into account. This is particularly so for severe stunting, with a 62% increase in South Asia and a 55% increase in east and south SSA. The health implications of this are large: according to Black et al. (2008), moderate stunting increases the risk of all-cause death 1.6 times (95% CI: 1.3, 2.2) and severe stunting increases the risk 4.1 times (95% CI: 2.6, 6.4). Comparing our results with those of Nelson et al. (2009) should be done cautiously because the outcome measures are different. Our estimates for stunting are lower than estimates from Nelson et al. (2009) for underweight in both South Asia and SSA (Table 4). Our estimates for SSA are closer but still lower. It is likely these differences are largely explained by how the models account for socioeconomic conditions. Nelson et al. (2009) estimated underweight using a complex model that accounted for many socioeconomic factors, but because of a lack of data, all the factors (except for food access) were held at baseline levels. Our stunting equation represents socioeconomics more simply but is able to account for expected changes over the next 40 years. World Bank projections suggest that in South Asia, GDP will increase nine times between 2005 and 2050, an absolute increase of about $7,000 billion (year 2000 US$); in SSA the figures are five times and $1,700 billion. Hence, allowing for these changes results in lower future stunting estimates, with a greater reduction in South Asia.
Table 4 notes: the underweight-to-stunting ratio is calculated as [(moderate + severe underweight)/(moderate + severe stunting)] using data for the present (FAO 2010) and as a regional, population-weighted average; underweight estimates for 2050 are from Nelson et al. (2009); stunting estimates are the sum of the numbers moderately and severely stunted, based on the mean estimates of the empirically derived PDFs.

Model approximations and assumptions. We used a theory-based rather than statistically based approach to modeling. Although we accept that a statistical approach would be sound if our aim were to estimate current stunting, our aim was to estimate future stunting. Thus, we developed a model that was driven as much as possible by a relationship that can reasonably be expected to remain constant over time. We assumed that the physiological relationship between stunting and undernourishment will remain constant and approximated this relationship in the first step. After this, because the relationship between stunting and GDP (which is mediated by, among other things, political and social conditions) may vary significantly over time, we fitted the development score and interaction term as a second step. We made several key approximations in constructing the model. The first approximation was to fit a separate bilinear regression model to two of the stunting levels and then use these to estimate no/mild stunting. Although a more rigorous approach would fit the three regression models simultaneously while ensuring that the proportions (for each country) are positive and always add up to unity, this could lead to an imbalance in the goodness of model fit of one level at the expense of another. The second approximation was to treat the food causes and the product of the food causes and nonfood causes as two independent variables in the least squares fit. This, of course, would introduce errors because the variables are correlated. Nevertheless, the approximation was validated against a data set different from that on which it was based. The third approximation concerns the approach we adopted for the probabilistic (Monte Carlo) simulations. Simulated values that were either < 0 or > 1 were discarded. This could introduce bias, and we quantified this potential. No estimates were rejected for being > 1, meaning there is no risk of downward biasing. For estimates < 0, no moderate stunting estimates were rejected, but severe stunting estimates were rejected in all regions [see Supplemental Material, Table 1 (http://dx.doi.org/10.1289/ehp.1003311)], meaning there is some potential for upward bias. Because more estimates were rejected in the "no climate change" future compared with the "climate change" future, this may have reduced the apparent impact of climate change on severe stunting.
The fourth approximation was the estimate of the physiological relationship between stunting and a lack of food (as represented by undernourishment). We ran our model assuming that a uniform distribution of values between the 1st and 10th percentile of the ratio of stunting to undernourishment adequately represented the true value. In support of our estimates, our parameters suggest that about 60% of stunting could not be directly attributed to a lack of food; this is in line with previous estimates that around 40-60% of undernutrition could be attributed to environmental conditions (predominantly a lack of water and sanitation) (Pruss-Ustun and Corvalan 2006).

Although a more elaborate approach could have been used, inevitably there is always a trade-off between model complexity and ease of model use. We have tilted more toward model simplicity but at the same time quantified the errors induced by the approximations, as far as possible.
We made estimates of future undernourishment from projected calorie availability. In doing so we assumed that both within-country food distribution and average minimum calorie requirement remained at baseline levels. In support of these assumptions, we note that FAO estimates of within-country food distribution are based on extrapolations of infrequently collected data from relatively few countries and are restricted to lie between values representing a given maximum and minimum equity of distribution (based on estimated requirements). Varying values within this range has been found to have little impact on PoU in countries with low calorie availability (FAO 1996; Svedberg 2002). Considering minimum calorie requirements, the estimated mean change in requirements across all countries was just 0.1% per year over the period 1990-1992 to 2006 (FAO 2010). Further, according to FAO data (FAO 2010), the average minimum calorie requirements are increasing in most low-income countries and are higher (and increasing) in middle-income countries. This means our estimate may be conservative. Finally, Svedberg (2002) estimated that over a 20-year period, 88% of the change in regional undernourishment was explained by changes in per capita calorie availability.
We assumed that, once per capita GDP reached $10,000 (2000 US$; with an associated Gini coefficient of 0.38), socioeconomic conditions no longer contributed to stunting. We tested the sensitivity of the model to this assumption by rerunning it without this assumption. This made a negligible difference to estimates (data not shown).

Finally, a limitation of the overall modeling strategy is that climate change is assumed to enter the system only through its impact on crop production. First, this allows only a partial consideration of future food security: food availability and, to a degree, access are modeled, but stability and utilization are not (for a discussion, see Schmidhuber and Tubiello 2007). Second, climate change is likely to affect undernutrition by a variety of routes, including plant diseases, extreme drought events, infectious disease, labor productivity, water availability, and overall impact on GDP. So far, these aspects have not been accounted for, and we recommend that future assessments (of all health impacts, not just undernutrition) attempt to account for the multiple effects of climate change.

Model behavior. We examined model behavior over the range of plausible input variable values. When either undernourishment or the development score is high (a high development score indicates poor socioeconomic conditions), moderate stunting decreases. However, this is accompanied by increases in severe stunting, provided that undernourishment is not too high [for surface plots of the model's equations, see Supplemental Material, Figure 2 (http://dx.doi.org/10.1289/ehp.1003311)]. As with any model, output for input variable values falling outside the range within which the model was fitted should be interpreted with caution. In the data used to parameterize the equations, the maximum value for undernourishment was 76% (Table 1), and the surface plots suggest that above this value, stunting estimates may be invalid. In our future estimates, only undernourishment in central SSA under climate change exceeded this (80% and 81%; Table 3); although these PoU estimates are only just outside the fitting range, the resulting stunting estimates should be interpreted cautiously.
The model's equations suggest that, as either food access or general socioeconomic conditions worsen, severe stunting increases more rapidly than moderate stunting; that is, more children shift from moderate to severe stunting than shift from no/mild stunting to moderate stunting. It is likely that this behavior is partly because the model assumes that, regardless of conditions, the distribution of access to food remains constant. This assumption is a property of the FAO undernourishment model (FAO 2003) and of our development score (i.e., the Gini coefficient is assumed to remain constant at baseline levels). We believe that allowing distributions to vary should be considered in future work.
The θ parameters have negative values. This was unexpected but, when considered in the context of the full equation and in terms of observed model behavior, the model equations predicted stunting changes as expected. Thus, if either food or non-food causes are high and those causes are then reduced, the impact on stunting is greater than if both food and non-food causes are high and only one variable is lowered. This suggests, as expected, that to best deal with stunting it is necessary to address both food and non-food causes.
Dealing with uncertainty. It is axiomatic that there are uncertainties in any risk assessment model. In this assessment, we have addressed parametric uncertainty in the stunting model through the use of Monte Carlo simulations. Structural uncertainty will be addressed in future work by exploring nonlinear interactions. It was not possible to assess the uncertainty in the upstream models (e.g., climate models, crop models, trade model) that drive our model (i.e., the input uncertainties associated with x_ij and w_ij) because we lacked the necessary information. Future assessments should use a wide range of climate and socioeconomic scenarios in order to capture the uncertainty of future emission pathways and the world in which the climate impacts will occur.
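As a rough illustration of the Monte Carlo treatment of parametric uncertainty described above, the sketch below samples placeholder model parameters from assumed normal sampling distributions, propagates each draw through a stand-in response function, and reports a central estimate with a 95% interval. The function and all numbers are hypothetical and do not reproduce the fitted stunting model.

```python
# Minimal sketch of Monte Carlo propagation of parametric uncertainty.
# The stunting model is represented by a placeholder linear function; the
# parameter means/standard deviations below are illustrative, not fitted values.
import random

def stunting_model(undernourishment, development_score, params):
    a, b, c = params                       # placeholder coefficients
    return a + b * undernourishment + c * development_score

def monte_carlo(undernourishment, development_score, n_draws=10000, seed=1):
    random.seed(seed)
    draws = []
    for _ in range(n_draws):
        params = (random.gauss(5.0, 1.0),   # each parameter sampled from its
                  random.gauss(0.4, 0.05),  # assumed sampling distribution
                  random.gauss(0.2, 0.05))
        draws.append(stunting_model(undernourishment, development_score, params))
    draws.sort()
    return draws[len(draws) // 2], draws[int(0.025 * n_draws)], draws[int(0.975 * n_draws)]

median, lo, hi = monte_carlo(undernourishment=30.0, development_score=2.0)
print(f"stunting estimate: {median:.1f}% (95% interval {lo:.1f}-{hi:.1f}%)")
```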
Conclusions
Previous studies have shown that climate change is likely to have negative effects on future hunger and undernutrition (Nelson et al. 2009, 2010; Parry et al. 1999, 2004; Rosenzweig and Parry 1994), and our results are consistent with these. This reinforces the evidence base for action to be taken to reduce carbon emissions and the impacts of the climate change to which we are already committed. Additionally, our model suggests that to reduce and prevent future undernutrition, it is necessary to both increase food access and improve socioeconomic conditions.
Quantifying the size of the impact presents difficulties. Our work illustrates the importance of the outcome considered (for example, undernourishment versus stunting, and moderate stunting versus severe stunting). These outcomes have different implications for adaptation and decision making (e.g., whether adaptation policies should focus only on food supplies or consider water and sanitation provision) and different implications for health (e.g., severe stunting is a much greater health threat than is moderate stunting). Further, future socioeconomic conditions must be considered; this involves both developing new data sets and designing models that recognize data constraints. Above all, because none of the above issues will be easily overcome, modeling efforts should explicitly describe their assumptions and limitations.
|
2014-10-01T00:00:00.000Z
|
2011-08-15T00:00:00.000
|
{
"year": 2011,
"sha1": "f8585ec83dc8d97caa0e0a26a31e12f24e29e96a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1289/ehp.1003311",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "49f85b60cbadfdcf9328dcf2cc8f47de4f1ac206",
"s2fieldsofstudy": [
"Environmental Science",
"Economics"
],
"extfieldsofstudy": [
"Medicine",
"Geography"
]
}
|
136816476
|
pes2o/s2orc
|
v3-fos-license
|
An Integrated Procedure for the Structural Design of a Composite Rotor-Hydrofoil of a Water Current Turbine (WCT)
This paper presents an integrated structural design optimization of a composite rotor-hydrofoil of a water current turbine by means of the finite element method (FEM), using a Serial/Parallel mixing theory (Rastellini et al. Comput. Struct. 86:879–896, 2008, Martinez et al., 2007, Martinez and Oller Arch. Comput. Methods. 16(4):357–397, 2009, Martinez et al. Compos. Part B Eng. 42(2011):134–144, 2010) coupled with a fluid-dynamic formulation and a multi-objective optimization algorithm (Gen and Cheng 1997, Lee et al. Compos. Struct. 99:181–192, 2013, Lee et al. Compos. Struct. 94(3):1087–1096, 2012). The composite hydrofoil of the turbine rotor has been designed using reinforced laminate composites, optimizing the carbon fiber orientation to obtain the maximum strength and the lowest rotational inertia. These results have also been compared with a steel hydrofoil, highlighting the different performance of the two structures. The mechanical and geometrical parameters involved in the design of this fiber-reinforced composite material are the fiber orientation, number of layers, stacking sequence and laminate thickness. The water pressure on the rotor of the turbine is obtained from a coupled fluid-dynamic simulation (CFD), whose details can be found in Oller et al. (2012). The main purpose of this paper is to achieve a very low-inertia rotor that minimizes the start-stop effect, because the rotor is applied in an axial water flow turbine currently being designed by the authors, in which it is important to take maximum advantage of the kinetic energy. The FEM simulation codes are engineered by CIMNE (International Centre for Numerical Methods in Engineering, Barcelona, Spain): COMPack for the solid mechanics problem, KRATOS for the fluid-dynamic application and RMOP for the structural optimization. To validate the procedure presented here, several turbine rotors made of composite materials are analyzed and three of them are compared with the steel one.
Introduction
Water Current Turbines (WCT) are an important topic of study at present because they use one of the most powerful and predictable renewable energy sources (water currents) without modifying the environment. In this particular case the studied rotor belongs to an axial riverbed WCT designed by the authors (Fig. 1) [1]. Its compact design, its axial-flow configuration and its low rotational inertia, due to the composite material rotors, make it suitable for low-speed fluvial beds, avoiding the need for major earthworks and expensive civil construction.
Multi-laminated composite structures are an increasingly important topic in the mechanical, aerospace, marine, and machinery industries because of advantages such as durability (no corrosion, lower maintenance cost), survivability (fire resistance, crash energy absorption), excellent resistance against cyclic loading (low fatigue), reparability (restoration and repair), etc. Multilayered fiber-reinforced material systems offer versatility in composite design because the stacking sequence of each orthotropic layer can take full advantage of the superior mechanical properties in terms of strength, stiffness, and total weight. One of the goals in the design optimization of a multilayered composite structure is to increase its strength while lowering its weight/rotational inertia with a given set of fibrous materials. Laminates of fiber-reinforced composites are very useful when low weight/rotational inertia together with high strength/stiffness are required, as in the case of axial water turbines. As an additional advantage, it is possible to adjust the weight without downgrading the efficiency through the design of the fiber orientation, reinforcement volume fraction, choice of long or short fibers, layer thickness and stacking sequence. This paper studies the structural performance of a turbine rotor made of a laminate of fiber-reinforced composite material using the anisotropic mixing theory. This formulation manages several damage constitutive models simultaneously, together with their homogenized composite damage variables, defining the inelastic limit behavior while taking into account the reinforcement of the matrix and the fiber-matrix debonding effect. The composite material used is a laminate composed of an epoxy matrix reinforced with carbon fibers. The variations of the fiber orientation angle, the thickness and the stacking sequence are controlled by the multi-objective optimization module, allowing better values of stiffness and strength to be obtained with a smaller weight and rotational inertia. The anisotropic mixing theory has been implemented for the structural analysis in the "COMPack" explicit FEM code [2][3][4][5], coupled with the "KRATOS" [6] fluid-dynamic FEM code and the above-mentioned optimization module. These codes (see Fig. 2) allow the design of the fiber-reinforced composite rotor to take into account the successive geometric and mechanical changes of each component material forming the composite. The finite element code for the study of the fluid-dynamic behavior of the rotor (KRATOS Multi-Physics [6]) gives the pressure and velocity state of the fluid at each point of the rotor blades [1]. Although this work mainly presents the structural analysis of the turbine rotor, these results have been possible thanks to the coupling between both codes, COMPack-FEM and KRATOS-FEM (Fig. 2). Details of this fluid-dynamic numerical simulation are available in Oller et al. [1]. Thus, the fluid pressure and velocity distributions in the axial chamber of the turbine, obtained by means of the KRATOS-FEM code, are introduced into the COMPack-FEM code (reported here) by a "staggered" procedure [7,8], solving each problem in turn at every time step (Fig. 2).
To assess the influence of the fiber direction and layer configuration on the composite of the turbine rotor proposed in this work, the maximum stress states and the stiffness of a hydrofoil made of a matrix reinforced with carbon fibers are reported for three different configurations (±45º, 0º/45º and 0º/90º). These values are compared with each other and with those obtained when the rotor is made of an isotropic steel material (Fig. 7). The numerical simulation of the composite hydrofoil is carried out using a formulation based on the generalized anisotropic theory of mixtures [3,9].
Numerical Model for the Structural Design of a Turbine Rotor Made of a Laminate of Fiber-Reinforced Composite
This section presents the proposed constitutive model for the design of the turbine rotor hydrofoil based on an epoxy matrix reinforced with carbon fibers. For this purpose, a summary of the constitutive damage model [5,10,11], managed by an orthotropic Serial/Parallel mixing theory [3,5,10,12] and coupled with an optimal multi-objective genetic algorithm design [13,14], is introduced in this paper. The coupling of these numerical models provides a laminate safety factor obtained from a new definition of the homogenized laminate damage index, which in turn is obtained from the local damage provided by the constitutive damage model previously mentioned.
A stacking sequence and carbon fiber orientation design optimization for the multilayered composite blade is carried out using an evolutionary algorithm [13,14]. The stacking sequence and fiber orientation have a strong influence on the strength of multilayered composite blades. Multiple layers of fiber-reinforced material systems offer flexibility in engineering material design, providing several types of composite materials with the best mechanical properties. Numerical results show that the optimal composite turbine rotor design has a lower rotational inertia, higher stiffness and also an affordable cost when compared with other solutions, including the classical steel hydrofoils.
The next sections present a summary of the proposed mechanical formulation used in this paper: 1) the orthotropic Serial/Parallel mixing theory for the laminate composite material, 2) the constitutive damage model for the simple matrix numerically reinforced with carbon fibers by means of the mixing theory, and 3) the definition of the "homogenized laminate damage index" and its corresponding "laminate safety factor". All these mechanical formulations will be called repeatedly by an iterative numerical procedure controlled by the "multi-objective genetic algorithm", which will provide a well-designed composite rotor after a finite number of iterations.
Serial/Parallel Mixing Theory for One Layer
The classical rule of mixtures, originally developed by Truesdell and Toupin [4,9,15,16], uses a phenomenological approach based on continuum mechanics at the macro-scale for the mechanical analysis of composites. The main problem of the classical mixing theory is its poor ability to represent the serial behavior of the components in the composite (Fig. 3, iso-stress case).
The serial/parallel rule of mixtures improves the classical mixing theory by replacing the global iso-strain hypothesis with an iso-strain condition in the fiber direction and an iso-stress condition in the transverse one (Eq. (6)). This allows modeling all the component distributions in the composite shown in Fig. 3. This formulation is an alternative to the homogenization technique based on the multiple-scale study [17,18]. An extensive description of this formulation can be found in Rastellini [3].
The serial/parallel (SP) model [3] considers that the component materials of the composite act in parallel along a certain direction and in series in the remaining directions. Consequently, it is necessary to define and separate the serial and parallel components of the strain and stress tensors. Defining $e_1$ as the director vector that determines the parallel behavior (fiber direction), the parallel projector tensor $N_P$ can be defined as

$$N_P = e_1 \otimes e_1 .$$

Using $N_P$, the fourth-order parallel projector tensor $P_P$ and the complementary serial projector tensor $P_S$ are defined as

$$P_P = N_P \otimes N_P , \qquad P_S = I - P_P ,$$

where $I$ is the fourth-order symmetric identity tensor. Both tensors are used to find the parallel and serial parts of the strain tensor, $\varepsilon_P$ and $\varepsilon_S$ respectively,

$$\varepsilon_P = P_P : \varepsilon , \qquad \varepsilon_S = P_S : \varepsilon .$$

Hence, the strain and stress tensors are split into their parallel and serial parts,

$$\varepsilon = \varepsilon_P + \varepsilon_S , \qquad \sigma = \sigma_P + \sigma_S .$$

The main hypotheses on which the numerical model of the Serial/Parallel mixing theory is based are: (i) the composite is composed of two component materials: fiber and matrix; (ii) the component materials have the same strain in the parallel (fiber) direction; (iii) the component materials have the same stress in the serial direction; (iv) the composite material response is in direct relation with the volume fractions of the component materials; (v) a homogeneous distribution of phases is considered in the composite; (vi) perfect bonding between components is assumed.
The equations that define the stress equilibrium and establish the strain compatibility between components are obtained from the analysis of the model hypotheses. Thus,

Parallel behavior: $\varepsilon_P^c = \varepsilon_P^m = \varepsilon_P^f , \qquad \sigma_P^c = k^m \sigma_P^m + k^f \sigma_P^f$

Serial behavior: $\sigma_S^c = \sigma_S^m = \sigma_S^f , \qquad \varepsilon_S^c = k^m \varepsilon_S^m + k^f \varepsilon_S^f$

where $\varepsilon_P$ and $\varepsilon_S$ are the parallel and serial components of the strain tensor, $\sigma_P$ and $\sigma_S$ are the parallel and serial components of the stress tensor, $k^m$ and $k^f$ are the volume fractions of matrix and fiber, and the superscripts $c$, $m$ and $f$ denote the composite, matrix and fiber, respectively. The serial/parallel mixing theory can use any constitutive equation to describe the behavior of each component material. The constitutive equations chosen can be different for each component (for example, an elastic law to describe the fiber behavior and a damage formulation to describe the matrix behavior). The constitutive equations for the matrix and the fiber can be expressed as

$$\sigma^k = \mathbb{C}^k : \varepsilon^k , \qquad k = m, f ,$$

where $\sigma^k$ is the stress tensor of the $k$-th component material, $\varepsilon^k$ is its total strain tensor and $\mathbb{C}^k$ is the respective damaged secant constitutive tensor. The schematic flow diagram of the S/P (or Generalized) Mixing Theory implemented in the COMPack-FEM code is shown in Fig. 4.
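To make the strain split concrete, the following sketch builds the projector tensors defined above and separates an arbitrary strain state into its parallel and serial parts. The strain values and fiber direction are illustrative only, and the full S/P closure (iterating the serial stress equality between matrix and fiber) is not included.

```python
# Sketch of the strain split used by the Serial/Parallel mixing theory:
# build N_P = e1 (x) e1, the fourth-order projectors P_P and P_S, and
# separate a strain tensor into its parallel and serial parts.
# Numerical values are arbitrary and for illustration only.
import numpy as np

def sp_projectors(e1):
    e1 = np.asarray(e1, dtype=float)
    e1 /= np.linalg.norm(e1)
    NP = np.outer(e1, e1)                                  # N_P = e1 (x) e1
    delta = np.eye(3)
    # symmetric fourth-order identity I_ijkl = 1/2 (d_ik d_jl + d_il d_jk)
    I4 = 0.5 * (np.einsum('ik,jl->ijkl', delta, delta) +
                np.einsum('il,jk->ijkl', delta, delta))
    PP = np.einsum('ij,kl->ijkl', NP, NP)                  # P_P = N_P (x) N_P
    PS = I4 - PP                                           # P_S = I - P_P
    return PP, PS

eps = np.array([[0.010, 0.002, 0.0],
                [0.002, -0.003, 0.0],
                [0.0,   0.0,   -0.003]])                   # arbitrary strain state
PP, PS = sp_projectors([1.0, 0.0, 0.0])                    # fibers along x
eps_P = np.einsum('ijkl,kl->ij', PP, eps)
eps_S = np.einsum('ijkl,kl->ij', PS, eps)
assert np.allclose(eps, eps_P + eps_S)                     # the split is exact
print(eps_P[0, 0], eps_S[0, 0])
```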
Serial/Parallel Mixing Theory for a Stack of Composite Layers
Laminate composites are formed by different layers with different fiber orientations. The orientation of the fibers can be defined by the engineer or automatically by an optimization process in order to obtain the best performance of the composite according to its application. The S/P Rule of Mixtures (RoM) formulation can be applied to each layer of the composite and, afterwards, the composite behavior is computed by combining the performance of each constituent layer. The classical mixing theory is applied across the layers to obtain the laminate behavior.
Applying the classical mixing theory to the different layers of the laminate composite implies the assumption that all layers undergo the same strain. This assumption can be considered correct, since the different layers usually have fiber orientation distributions arranged in such a way that they provide the laminate with a homogeneous in-plane stiffness.
Local and Global Homogenized Damage Index
In this section, two damage indexes (local and global) are defined to ensure that the composite laminate remains within the elastic range, below these two damage thresholds.
The local damage constitutive model [11,19] used to set the threshold for the initiation of the nonlinear damage process, and its subsequent evolution, at each point of each component material integrated in the composite is presented in this subsection. This concept allows a new definition of a global homogenized damage threshold criterion for the entire laminate (see Section 2.3.2), resulting from the composition of the local damage index over all materials involved in the laminate composite.
Local Damage Model
Material degradation, or damage, in a simple continuum material component due to a dissipative process can be simulated by means of a local damage formulation [11,19,20]. This model is applied to the simple matrix material embedded in the composite, inducing stiffness degradation and strength reduction in the entire laminate.
The isotropic damage formulation is based on a scalar internal variable $d$ that represents the level of degradation of each simple component material. This variable is bounded between 0 and 1, being zero for an undamaged and one for a completely damaged point of the component material. The local damage variable $d$ links the real stress tensor $\sigma$ with the effective undamaged stress tensor $\sigma_0$. Therefore, the relation between the damaged stress and the strain in the matrix included in each layer depends on the internal damage variable $d$ and the elastic constitutive tensor $C_0$,

$$\sigma = (1-d)\,\sigma_0 = (1-d)\, C_0 : \varepsilon .$$

The stress condition at which damage starts and the evolution of the damage variable can be described by

$$F(\sigma_0; q) = f(\sigma_0) - c(d) \le 0 ,$$

where $F(\sigma_0; q)$ is the damage threshold function, $f(\sigma_0)$ the scalar equivalent stress function and $c(d)$ the uniaxial strength evolution, which depends on the internal damage variable $d$.
This formulation allows the damage onset and evolution to be described using any limit criterion already defined in the literature (von Mises, Mohr-Coulomb, Drucker-Prager, etc.) [8,19]. In this paper, however, we have used the norm of the principal stresses with a different degradation path for tension and compression loads, where $\sigma_I$ is the principal stress tensor and $\rho$ is a function that weights the proportion of tension and compression stresses applied to the material. This weight function is defined in terms of $\tau_c$ and $\tau_t$, the ultimate strengths of the material in compression and tension, respectively, and the Macaulay brackets $\langle \chi \rangle$; the explicit expressions can be found in the references cited above.
The mechanical evolution of the internal damage variable $d$ is obtained using the damage consistency and the Kuhn-Tucker loading/unloading conditions [11,19], which make it possible to integrate the damage internal variable explicitly. The function $G$ defines the softening evolution of the material. In the present work the damaged material evolves with an explicit exponential softening, governed by a parameter $A$ that depends on the fracture energy of each simple material [11]. For exponential softening this parameter is obtained from $C_0$, the Young's modulus of the material, $G_c$, the compression fracture energy of the material, and $l_f$, the geometric regularization parameter, called the fracture length, which is related to the characteristic size of the finite element used. The introduction of the fracture length in the formulation makes the degradation process mesh independent [11,19].
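A minimal sketch of this damage update is given below. It follows the general structure described above (an equivalent stress built from the norm of the principal stresses weighted by a tension/compression factor, a non-decreasing threshold enforced by the loading/unloading conditions, and an exponential softening law), but the particular expressions and all material values are placeholders rather than the calibrated forms of refs. [11,19].

```python
# Generic sketch of an isotropic damage update with exponential softening.
# The equivalent-stress function, the tension/compression weight rho and the
# softening parameter A follow the general structure described in the text;
# the exact expressions of refs. [11,19] are not reproduced here and the
# numbers below are placeholders.
import numpy as np

def macaulay(x):
    return np.maximum(x, 0.0)

def equivalent_stress(principal_stresses, tau_t, tau_c):
    s = np.asarray(principal_stresses, dtype=float)
    denom = np.abs(s).sum()
    rho = macaulay(s).sum() / denom if denom > 0 else 1.0   # tension/compression weight
    n = tau_c / tau_t
    return (rho + (1.0 - rho) / n) * np.linalg.norm(s)      # norm of principal stresses

def update_damage(f_trial, r, r0, A):
    """Return updated damage d and threshold r (Kuhn-Tucker loading/unloading)."""
    r = max(r, f_trial)                                     # threshold never decreases
    if r <= r0:
        return 0.0, r                                       # still elastic
    d = 1.0 - (r0 / r) * np.exp(A * (1.0 - r / r0))         # exponential softening G
    return min(max(d, 0.0), 1.0), r

# illustrative loading history on the matrix material
r0, A, r, tau_t, tau_c = 60.0e6, 0.8, 60.0e6, 60.0e6, 120.0e6
for s1 in (30e6, 55e6, 70e6, 90e6):
    f = equivalent_stress([s1, 0.0, 0.0], tau_t, tau_c)
    d, r = update_damage(f, r, r0, A)
    print(f"sigma1 = {s1/1e6:5.0f} MPa  ->  damage d = {d:.3f}")
```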
Global Homogenized Laminate Damage Index or Laminate Safety Factor
According to the Serial/Parallel mixing theory previously introduced, fibers only contribute to the composite strength in the longitudinal fiber direction. Thus, the damage in the composite material is mainly concentrated in the matrix and not in the reinforcement fibers. Thereby, when the stresses in the matrix reach their maximum elastic value (damage threshold), the material fails according to the damage constitutive law previously presented and, when the total fracture energy has been dissipated at a point of the simple component material, the material can no longer support the stress level and its contribution to the structural strength and stiffness disappears, starting a crack evolution/progression that produces a delamination phenomenon. Strength is then lost in all directions except the longitudinal fiber direction (because the fibers do not reach their damage threshold). Hence, this mechanism induces localized fracture (delamination) at the constitutive level without the computational cost of breaking the mesh and re-meshing the delaminated area.
In the case of laminates, the global composite damage variable $d_L$ is obtained by homogenization of the local damage variable $d$ of each simple component material (see Section 2.3.1). This homogenized laminate damage can also be interpreted as the laminate safety factor that will be used in the optimization process introduced in the next section. This new structural damage variable $d_L$ is, like the local damage, bounded between 0 and 1 and is defined as

$$d_L = \frac{\sum_{i=1}^{n_{GP}} d_i \, V_i}{\sum_{i=1}^{n_{GP}} V_i} ,$$

where $d_i$ and $V_i$ are the local damage variable and its Gauss-point volume for each single material, and $n_{GP}$ is the number of Gauss points involved in all materials included in the layers participating in the shell element. The delamination phenomenon stops when a damaged point can provide enough shear strength to equilibrate the shear stresses that appear in the inter-laminar zone.
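The homogenized index is a straightforward volume-weighted average; a short sketch with illustrative Gauss-point values is shown below. Reading 1 − d_L as a safety margin is only one possible interpretation of the laminate safety factor mentioned above.

```python
# Sketch of the homogenized laminate damage index d_L: a volume-weighted
# average of the local damage variable over all Gauss points of all
# component materials in the laminate. Values are illustrative.
def laminate_damage_index(local_damage, gauss_volumes):
    assert len(local_damage) == len(gauss_volumes)
    total_volume = sum(gauss_volumes)
    return sum(d * v for d, v in zip(local_damage, gauss_volumes)) / total_volume

d_i = [0.0, 0.10, 0.35, 0.0, 0.05]      # local damage at each Gauss point
V_i = [1.0, 1.0, 0.5, 1.2, 0.8]         # associated Gauss-point volumes
d_L = laminate_damage_index(d_i, V_i)
safety_margin = 1.0 - d_L               # one possible reading of the laminate "safety factor"
print(round(d_L, 3), round(safety_margin, 3))
```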
Optimal Multi-Objective Algorithm Design for Laminate of Fiber-Reinforced Composites
This section presents a stacking sequence and fiber orientation design optimization for a multilayered composite structure in a discretized multi-objective approach using a multi-objective genetic algorithm [13,14,21,22]. Each combination of thickness and orientation angles of the layers is evaluated by calling the COMPack-FEM code [2] repeatedly. This code allows the stiffness and strength of a fiber-reinforced layer, as well as the rotational inertia of the laminate, to be calculated.
For the optimization, a multi-objective algorithm from the optimization platform RMOP, developed by CIMNE [13,14], is coupled to COMPack (see Fig. 2) to find the optimal combination of fiber orientations for multilayered composite hydrofoils with lower rotational inertia, higher stiffness and affordable total cost.
Engineering design problems require the simultaneous optimization of conflicting objectives subject to a number of constraints. Unlike single-objective optimization problems, the solution is a set of points known as the Pareto optimal set [13,14].
Robust Multi-Objective Optimization Platform (RMOP)
RMOP is a computational intelligence framework consisting of a collection of population-based algorithms, including a search method based on a genetic algorithm [21]. This algorithm uses a Pareto selection operator, which ensures that a new individual is not dominated by any other solution.
More details about RMOP and the corresponding interaction modules used in the computational code are given in references [13,14]. This platform can easily be coupled to any kind of analysis tool; in this paper, in particular, the coupling was performed with the COMPack finite element code [2], designed for the structural analysis of laminates of fiber-reinforced composites (see Fig. 2). As mentioned above, the COMPack-FEM code supports several kinds of constitutive models and uses the S/P mixing theory for the numerical simulation of the laminate composite material. One of the principal benefits of COMPack is its capability of working with the constitutive model of each component material, matrix and/or fiber, considering its distribution and orientation.
Numerical Optimization Problem
The problem considers a multi-objective composite fiber orientation design optimization to find a lower rotational-inertia and stiffer multilayered symmetric balanced composite structures. The geometry, finite element mesh and loads can be seen in Figs. 1, 5 and 6.
Although the proposed optimization technique is very powerful, the numerical model to be solved in this paper is very large, so we have reduced the number of optimization variables: the type of fiber (carbon fiber) and the laminate thickness (fixed stacking) are assumed constant, leaving the carbon fiber orientation angle as the design variable and the rotational inertia as the quantity to minimize. Thus, several pairs of fiber orientations were considered for the complete design of the turbine rotor, and the three most representative pairs were chosen. The material used in the rotor blade is composed of six layers of e = 0.3 mm, forming a laminate thickness of t = 1.8 mm. The matrix has a density of m_matrix = 1,200 kg/m³ and occupies 60% of the total volume. The carbon fibers have a density of m_fiber = 1,800 kg/m³ and occupy 40% of the total volume. The complete description of the material properties for the matrix and fiber components can be seen in Table 1.
The rotor blade solution considers a multi-objective multilayered composite structure design optimization using RMOP. The fitness functions minimize the rotational inertia of the multilayered composite, I_comp, while maximizing the strength of the composite and minimizing its homogenized laminate damage index d_L. Figure 7 shows the solutions obtained for the different composite materials, together with the results commonly obtained for a classical steel turbine rotor.
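The selection step of the multi-objective search can be illustrated with a minimal Pareto-dominance check over candidate layups scored on rotational inertia and homogenized damage index (both to be minimized). The candidate values below are placeholders and do not correspond to the RMOP runs reported in Fig. 7.

```python
# Sketch of the multi-objective selection step: each candidate layup is
# scored on (rotational inertia, homogenized damage index d_L), both to be
# minimized, and the non-dominated (Pareto) set is extracted.
def dominates(a, b):
    """True if candidate a is at least as good as b in all objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    front = []
    for name, objs in candidates.items():
        if not any(dominates(other, objs) for other_name, other in candidates.items()
                   if other_name != name):
            front.append(name)
    return front

candidates = {                      # layup: (rotational inertia [kg m^2], damage index d_L)
    "+-45":  (318.0, 0.20),         # placeholder values, not results of the paper
    "0/90":  (322.0, 0.24),
    "0/45":  (316.0, 0.30),
    "steel": (1736.0, 0.10),
}
print(pareto_front(candidates))
```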
Numerical Simulation of the Structural Behavior of a Rotor-Hydrofoil Water Current Turbine
The numerical simulation of the structural behavior of the rotor of the axial-flow turbine by means of the finite element method (COMPack), coupled with a multi-objective multilayered composite structure design optimization (RMOP), is presented in this section. A comparative study of the structural response of the steel turbine rotor versus the fiber-reinforced composite material is carried out [23,24]. The composite material analysis is developed employing the orthotropic mixing theory previously presented, while an isotropic constitutive model is used for the steel rotor. During this analysis the fitness functions are monitored (Eq. 16).
Geometry, Boundary Conditions and Finite Element Mesh
The rotor is subjected to an axial water flow that causes a distribution of pressures on the hydrofoils, particularly on their leading edges. These flow pressures are obtained with the KRATOS CFD finite element code. The 8 rotor blades have a hydrodynamic profile with a 15º angle of attack (Fig. 5).
From the geometry of the rotor a mesh of 4100 linear shell triangular finite elements is generated with GiD (rotational-free shell triangle, [25]), with 2012 nodes (Fig. 6).
This shell structure is analyzed first made of steel and then made of a laminated composite material with layers of epoxy matrix reinforced by unidirectional carbon fibers. In both cases, the properties of the materials are detailed in the corresponding sections together with the respective analysis.

Fig. 7 Relative comparisons among the composite rotors with different fiber orientations vs. the steel rotor: a "Starting torque", b "Stress field", c "Maximum displacements", d "Stiffness".
Action on the Hydrofoil's Rotor
In this section the water pressures acting on the axial-axis turbine rotor are applied. These pressures are obtained from the KRATOS CFD code for a fluvial flow in a low-speed river (Oller et al. [1]). They cause two kinds of loads on the rotor:
- Load 1: rotation loads on the surfaces of the hydrofoils, produced by the pressure difference between the upper and lower surfaces of the wing. This load is obtained through the interaction with the KRATOS finite element code (CFD, Computational Fluid Dynamics), which provides the speed, the correct angle of attack of the hydrofoils, the pressure diagrams on the wing areas, etc.
- Load 2: axial loads caused by the pressures applied directly on the leading edge of the hydrofoil, which cause its deformation and the stress state of the rotor, tending to break it in the direction perpendicular to the plane of the rotor. This response is studied and analyzed in this work with the COMPack finite element program (Fig. 7).
The kinematic pressures are obtained by the KRATOS CFD finite element procedure, which is described in Oller et al. [1]. Using these pressures, an applied load of F_Rotor = 672 N has been obtained on the leading-edge surfaces of the wing hydrofoils. This load is distributed over all nodes of the blades and is applied in one time step, meaning that 100% of the load is applied to the rotor at time t = 1×10⁻⁵ s. This study exceeds the scope of this paper, and the complete fluid-dynamic numerical simulation can be found in Oller et al. [1].
The restrictions of movement are applied to the nodes corresponding to the turbine shaft, representing the connection points between the rotor and the axis of the turbine.
Numerical Simulations of the Rotor Made of Steel and Composite Material
The details of the structural behavior of each rotor, made of steel and of composite laminate, are described below [23,24]. In the analysis corresponding to the composite laminate, the orthotropic Serial/Parallel mixing theory presented above is used. An additional parametric comparison is also carried out in this case among three pairs of fiber directions and stacking sequences.
The numerical simulations involve the steel turbine rotor, with the following mechanical parameters: density m = 7,850 kg/m³, Young's modulus E = 210×10⁹ Pa, Poisson ratio ν = 0.3, and thickness t = 1.2 mm; and the turbine rotor made of a six-layer composite, each layer with a thickness of e = 0.30 mm, and three different layups: ±45º, 0º/90º and 0º/45º (see the mechanical properties of the components in Table 1).
It can be observed, after 1 s of applied load, that the minimum stress, σ = 10,622 Pa (Table 2 and Fig. 7b), corresponds to the non-orthogonal 0º/45º layup configuration and occurs in the blades near the shaft junction (Fig. 8). Figure 9 shows the displacements in the rotor.
Rotational Inertia
In axial WCT turbines, reducing the rotational inertia of the rotor is as important as possible, since it lowers the resistance to rotation under river speed changes, allowing more flexibility in starting and stopping the turbine rotation. With this aim, the values of the rotational inertia of the turbine rotor are given below.

Inertia of the steel rotor: for a 1.8 mm thick steel layer, a density of m_steel = 7,850 kg/m³ and a rotor volume of V_steel = 0.014 m³, the rotor mass is M_steel = 108 kg, the rotational inertia is I_steel = 1736 kg·m², and the starting torque is T_steel = 217 N·m.
Inertia of the composite rotor: taking into account a 1.8 mm thick composite layer, a matrix density of m_matrix = 1,200 kg/m³ (with a volume fraction of 60% of the composite material) and a fiber density of m_fiber = 1,800 kg/m³ (with a volume fraction of 40% of the composite material), the mixing theory gives a composite density of m_comp = 1,440 kg/m³. With a rotor volume of V_comp = 0.014 m³, the rotor mass is M_comp = 20 kg, the rotational inertia is I_comp = 318 kg·m², and the machine starting torque is T_comp = 40 N·m. This latter value is about 5.5 times lower than that of the steel rotor.
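These figures can be cross-checked with a few lines of arithmetic: the rule-of-mixtures density of the composite, the rotor masses for the common volume of 0.014 m³, and the ratio between the quoted starting torques. The sketch below uses only the values stated in the text.

```python
# Worked check of the figures quoted above, using the rule-of-mixtures
# density for the composite and the ratio between the steel and composite
# starting torques. Volumes, densities and torques are taken from the text.
V_rotor = 0.014                   # m^3, same geometry for both rotors
rho_steel = 7850.0                # kg/m^3
rho_matrix, rho_fiber = 1200.0, 1800.0
k_matrix, k_fiber = 0.60, 0.40    # volume fractions

rho_comp = k_matrix * rho_matrix + k_fiber * rho_fiber     # 1440 kg/m^3
M_steel = rho_steel * V_rotor                               # ~110 kg (text quotes 108 kg)
M_comp = rho_comp * V_rotor                                 # ~20 kg
T_steel, T_comp = 217.0, 40.0                               # N m, from the text
print(round(rho_comp), round(M_steel, 1), round(M_comp, 1), round(T_steel / T_comp, 1))
```

The small mismatches with the quoted 108 kg and the 5.5 torque ratio are within the rounding of the values given in the text.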
Conclusions
In this paper an integrated structural design of a composite rotor hydrofoil has been presented, together with a new structural damage index to control the elastic threshold of the composite. As a result of this formulation, the "0º/45º" laminate composite rotor works at nearly 40% of the stress level of the steel rotor. The "±45º" and "0º/90º" composite laminate rotors also work at lower stresses than the steel rotor (Table 2 and Fig. 7b). All composite laminate rotors have less stiffness than the steel rotor, but in particular the composite with fibers oriented at "±45º" has a high stiffness, close to the steel value (see the relative comparison, Fig. 7d). The "0º/45º" composite laminate has a much lower stiffness than the others (Fig. 7d), but it is sufficient for the requirements of this machine, as its maximum displacement is tolerable under these working conditions (see the relative comparison, Fig. 7c).
The reduced starting torque of the composite laminate rotor is a big advantage during the operation of the water turbine, since the composites have about 5.5 times less starting torque than the steel rotor (Fig. 7a). This means a machine with better performance at low water flow velocities that is easier to ship, handle, repair, start, etc.
In conclusion, the composite laminate rotor with fibers oriented at "±45º" is the best-suited material for this function, since it has a very low rotational inertia (18.3% of that of the steel rotor, see Fig. 7a), a maximum working stress 47% lower than that of the steel rotor (Fig. 7b), and a good stiffness (53% of the value corresponding to the steel rotor, see Fig. 7d), consistent with a relatively low maximum displacement (Fig. 9c).
|
2019-04-28T13:08:03.897Z
|
2013-07-25T00:00:00.000
|
{
"year": 2013,
"sha1": "1b3930341cf5cfcb2efb77c422afe52fd02efefa",
"oa_license": "CCBYNCSA",
"oa_url": "https://www.scipedia.com/wd/images/0/02/Draft_Samper_599885807_4351_OllerAramayo2013_Article_AnIntegratedProcedureForTheStr.pdf",
"oa_status": "GREEN",
"pdf_src": "Anansi",
"pdf_hash": "e954f60eea6ce9d39bdc3cd9a04c52c93526052a",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
219948245
|
pes2o/s2orc
|
v3-fos-license
|
Mechanical Behavior of Single Patch Composite Repaired Al Alloy Plates: Experimental and Numerical Analysis
In this paper, glass fiber reinforced polymer (GFRP) materials were used to repair cracked Al plates. In order to study the influences of resin properties and repair configurations, three resins and two patch configurations were selected to manufacture six groups of specimens. It turned out that only small differences (less than 3%) were found in tensile strength among the six groups. Compared with the parent plates, the strength recovery ratio was higher than 80% after the GFRP repair, representing excellent repair efficiency. Moreover, a finite element model (FEM) was established to analyze the failure process of the repaired structure under tensile loading. The FEM results show good agreement with the experimental results, indicating good precision. Both the experimental and numerical work found that the damage initiated in the plies adjacent to the crack surface and that the failure modes were mainly delamination and fiber breakage. This work will be meaningful for the future application of GFRP in metallic structures.
Introduction
Nowadays, fiber reinforced polymer (FRP) materials are widely used to strengthen or repair many different kinds of structures, including steel-reinforced concrete in civil engineering [1][2][3][4], oil and gas transmission/transportation pipelines [5][6][7], aircraft metal structures [8,9] and also composite aircraft structures [10][11][12]. Compared with a traditional mechanically fastened repair, the FRP repair technique provides lower stress concentration, light weight, excellent corrosion resistance, lower cost, easier operation and higher repair efficiency. Moreover, continuous fiber reinforced polymer composites possess improved strength-to-weight ratios and designability. From the above research work, it is worth noting that FRP composite repair techniques can be expected to continue to grow and develop.
Many different kinds of FRP composite repaired structures were investigated to acquire their static strength, fatigue strength, durability and so on. A large number of experimental tests and numerical models were conducted. Mohsen et al. [13] reviewed the environmental durability of adhesively bonded FRP/steel joints in civil engineering. Moisture and temperature are identified as the most critical environmental factors resulting in complex failure mechanisms. Rohem et al. [7] investigated the performances of GFRP composite repaired steel pipes. Two types of defects, i.e., through wall and non-through wall, in steel pipes, were chosen and good satisfactory was obtained for both. Failure occurs at the laminate-substrate interface for through-wall defects, while no failure occurs in the composite for non-through wall defect. Fujimoto et al. [14] identified the locations and shapes of crack and disbond fronts of the aircraft structural panel repaired with bonded FRP composite patches through an analytical model. Measured strain data has verified the model's validity. Tao Chen et al. [15] studied the fatigue crack growth life of cruciform steel welded joints repaired with FRP materials. A numerical model was built, and the adhesive used to bond the composites with the steel plate was modeled by interface spring elements between the composite patches and the steel plate. The crack growth rate was correlated with the amplitude of the applied stress intensity factor (SIF). It showed that the FRP patch can reduce the crack growth rate and crack opening displacement. Benyahia F et al. [16] investigated the aging effects on a repaired aluminum alloy 7075 T6 plate. It showed that fatigue life can be increased by fifteen times by the patch. A three-dimensional virtual crack closure technique (3D VCCT) was used to calculate the SIF. Results showed that the SIF reduced 80% by the composite patch. In addition, it was found that an aged composite patch can reduce the SIF further. Albedah et al. [17] studied the effects of patch length on fatigue life of the repaired plate in both the 2024 T3 and 7075 T6 aluminum alloy. It showed that the fatigue life increases with the patch length. The SIF at the patched surface reduces with patch length, while SIF at the free surface increases with the patch length. Ouinas et al. [18] investigated the effects of the disbond area on SIF of aluminum panels repaired with boron composite patch. In addition, the effects of the orientation, thickness and width of the patch were also discussed. Results showed that setting fiber direction along the crack can reduce the SIF. The SIF increases with the increase of the disbonds. Shafique et al. [19] carried out research on the influences of CFRP repair patch on the fatigue properties of steel plates. The results showed that fatigue life increased significantly, especially for the combination usage of drilling a conventional crack-stop hole with CFRP composite repair. Sotirios et al. [12] employed thermography to investigate the patch debonding propagation subject to cycling mechanical loading, which demonstrates both quantitative and qualitative information for repair efficiency. Pradhan et al. [20] investigated the mechanical properties of edge-cracked aluminum plates repaired with a woven glass/epoxy composite patch. Tensile tests, flexural tests and Rockwell hardness tests were conducted, and effects of patch thickness and orientation were discussed. Maleki et al. 
[21] used acoustic emission and fractography images to investigate fatigue properties of single-sided composite patch repaired aluminum 6061 alloy plates. It was found that eight layers of the patch improved fatigue life about 197% compared with the unrepaired. Effects of plasticity on the composite patched central cracked aluminum plate were investigated by Hart and Bruck [22], which indicates that the patched specimen has a much larger area with high plastic strain under the same crack opening displacement (COD). Bouzitouna et al. [23] combined the composite patch repair technique and hole drilling technique, which showed an increase in mechanical strength and fatigue life.
In this article, aluminum alloy structures used in aircraft were chosen as the objects to be repaired. For aircraft manufactured in the last three decades, there is an urgent need to extend their service life, especially for navy aircraft. The FRP composite repair technique, as an efficient and cost-effective method, can play an important role here. However, more data and a clearer understanding of the mechanical behavior are needed for better engineering application. In order to realize an economical and practical maintenance solution, a woven glass fiber fabric and a room-temperature-curing resin system were used to repair Al plates. Specimens were manufactured by laying up wet GFRP prepregs onto the cracked Al plates. Tensile tests were carried out to evaluate the repair efficiency. Failure mechanisms were analyzed by observation of the fracture surfaces. A numerical model was also established for a better understanding of the mechanical behavior.
Material and Specimen Preparation
The repair efficiency of three material systems was compared in the present work: SW90-a/EA9396, SW90-a/EA9394 and SW90-a/EA9396+EA9394. Three different resins were selected to compare their characteristics in the practical repair process and their repair efficiency, so as to choose the easiest repair process. The properties of the two epoxy resins EA9394 and EA9396 are listed in Table 1. EA9394 had better mechanical properties, but its viscosity was much higher, leading to poor handleability, while EA9396 resin had a relatively low viscosity and good infiltration, which was beneficial for the hand lay-up repair process. Moreover, these two resins can be cured at room temperature. To simulate the service damages, we manufactured pre-fabricated cracks in rectangular aluminum LY12CZ plates using laser cutting equipment. The LY12CZ is a common aeronautical engineering Al alloy. The thickness of the Al plate was 1.5 mm. The crack size was 10 mm (length) × 0.3 mm (width), as shown in Figure 1. Before the repair procedure, the surface of the Al plates was abraded and treated with two kinds of silane coupling agents: AC130 and KH550. Afterward, wet prepregs were patched onto the surface ply by ply according to the patch configurations shown in Figure 2. Six plies were laid in two ways: triangle and inverted triangle. Each layer was 0.1 mm thick after curing and the overall thickness of the patch was approximately 0.6 mm. The width of the prepregs was the same as that of the Al plate, 50 mm. The lengths of the prepregs were 62 mm, 72 mm, 82 mm, 92 mm, 102 mm and 112 mm for each ply. The shortest prepreg length was selected to reduce the stress concentration around the crack tips. The specimens were cured in vacuum bags at room temperature for 48 h and post-cured for more than 3 days before testing.
Test Procedures
To evaluate the repair efficiency, we performed tensile tests on a material testing machine TST-DL4205 with a loading capacity of 10 tons. The schematic of the specimens and tensile testing instruments is shown in Figure 3. The cross-head speed was 1 mm/min and a strain data acquisition system was used to monitor deformation at critical positions.

Figure 4 shows typical load-displacement curves (P-δ curves) of the GFRP-repaired cracked plates under unidirectional tensile loads. After the repair with the wet GFRP prepregs, the cracked plate recovered strength and structural modulus to some extent; however, the plastic mechanical characteristic was lost. The type of prepreg with different resin materials had little influence on the repair efficiency. Table 2 summarizes the experimental data for the tensile strength and restoration ratio of all specimens. The tensile strength of the specimen was calculated by dividing the ultimate load by the cross-sectional area of the specimen. The load capacity restoration ratio was calculated by dividing the ultimate tensile load of the repaired plate by that of the original plate. The specimens with triangle patches had a slightly higher restoration ratio than those with inverted-triangle patches. The mixture of the two resins exhibited the highest tensile strength, while the specimen with EA9396 resin showed the lowest tensile strength. However, the differences between these three groups were very small, less than 3%. The load capacity recovery ratio was higher than 80% after the GFRP repair, representing excellent repair efficiency.
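For reference, the two quantities reported in Table 2 reduce to simple ratios; the sketch below computes them for a hypothetical pair of ultimate loads. The 50 mm × 1.5 mm plate section is taken as the reference area, which is an assumption, since the text does not state whether the patch is included.

```python
# Sketch of the two quantities reported in Table 2: the tensile strength
# (ultimate load over the reference cross-sectional area) and the
# load-capacity restoration ratio (repaired vs. original plate).
# The loads below are illustrative, not the measured values.
def tensile_strength(ultimate_load_N, width_mm, thickness_mm):
    area_mm2 = width_mm * thickness_mm
    return ultimate_load_N / area_mm2          # MPa (N/mm^2)

def restoration_ratio(ultimate_load_repaired_N, ultimate_load_original_N):
    return ultimate_load_repaired_N / ultimate_load_original_N

# 50 mm wide, 1.5 mm thick Al plate; hypothetical loads of 29 kN (repaired)
# and 34 kN (original, uncracked).
print(round(tensile_strength(29_000, 50.0, 1.5), 1), "MPa")
print(round(restoration_ratio(29_000, 34_000), 2))
```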
Damage Observation in Specimens
As can be seen in Figure 5, the damage to the sample under axial tensile load can be visually observed at the crack when the load increased to 18 kN. Afterward, the damage extended around the crack with a further increase in the load. New damaged places appeared around the corner of the patch at a load of 27 kN and the final fracture happened at approximately 29 kN.
The visual inspection of the fracture surfaces is shown in Figure 6. A neat fracture along the crack of the Al plate can be observed. The damage modes of the patch were mainly fracture of the adhesively bonded joints and adhesive-composite interface damage. Fiber breakage occurred subsequently after the adhesive failure. The fracture area highlighted by the red box in Figure 6 was examined by scanning electron microscopy. From the scanning electron micrograph shown in Figure 7, adhesive failure of the adhesively bonded composite joint can be seen.
Finite Element Model
A three-dimensional finite element model was built in the commercial finite element analysis software Abaqus 6.11. Hexahedral elements C3D8R were selected to represent the behavior of both composites and Al materials in this model. As presented in Figure 8, green elements denote composite materials, while grey elements illustrate Al. To simulate the GFRP patch, there was one element in each layer and the material orientation was defined. The area near the crack was meshed with much smaller elements with sizes of 0.06 mm × 0.2 mm × 0.3 mm for Al and 0.06 mm × 0.1 mm × 0.3 mm for the composite. The left edge was fixed and the axial tensile load was applied on the right edge. Coupling constraints were applied on the edges to reference points RP1 and RP2, respectively.
Damage Criteria and Progressive Damage Model
To predict the loading capacity and the damage modes of the GFRP-repaired Al plates, both the failure mechanism of the Al and that of the composite were considered. The ductile mechanical behavior of the Al plate was simulated using a traditional metal ductile model, and the orthotropic mechanical behavior and damage process of the patch were simulated by implementing a USDFLD subroutine. A modified failure model based on 3D stresses [24] in a composite layer, with progressive failure modeling capability, was established for each plain-weave glass fabric layer. In this study, seven failure modes were evaluated, with damage initiation rules for: in-plane fill direction tensile failure (σ22 ≥ 0); in-plane fill direction compression failure (σ22 < 0); in-plane warp direction tensile failure (σ11 ≥ 0); warp yarn compressive failure (σ11 < 0); in-plane shear-out failure; out-of-plane delamination in tension (σ33 ≥ 0); and out-of-plane delamination in compression (σ33 < 0). The mechanical properties of the plain-weave glass fabric composite layer and the Al plate used in the model are shown in Table 3. When damage predicted by the above criteria occurred within elements, a set of degrading rules was used to relate the damage growth to the stiffness loss of the material. Table 4 shows the stiffness degradation ratio for each damage mode. For the sake of numerical convergence, when a mechanical property was degraded to zero, it was replaced by a very small value of 0.01.
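The logic implemented in the USDFLD subroutine can be sketched as follows: evaluate the initiation criteria from the current stresses and, for each triggered mode, scale down the corresponding stiffness entries, with a small floor value to preserve convergence. The checks below are simple maximum-stress stand-ins for the 3D criteria of ref. [24], and the strengths, moduli and degradation factors are placeholders rather than the values of Tables 3 and 4.

```python
# Sketch of the progressive-damage logic: check simplified initiation
# criteria for the seven modes and degrade stiffness entries accordingly.
# All numerical values are placeholders for illustration only.
def check_failure_modes(stress, strengths):
    s11, s22, s33 = stress["s11"], stress["s22"], stress["s33"]
    return {
        "warp_tension":             s11 >= 0 and s11 >= strengths["Xt"],
        "warp_compression":         s11 < 0 and -s11 >= strengths["Xc"],
        "fill_tension":             s22 >= 0 and s22 >= strengths["Yt"],
        "fill_compression":         s22 < 0 and -s22 >= strengths["Yc"],
        "shear_out":                abs(stress["s12"]) >= strengths["S12"],
        "delamination_tension":     s33 >= 0 and s33 >= strengths["Zt"],
        "delamination_compression": s33 < 0 and -s33 >= strengths["Zc"],
    }

def degrade_stiffness(props, failed, rules, floor=0.01):
    for mode, triggered in failed.items():
        if not triggered:
            continue
        for prop, factor in rules[mode].items():
            props[prop] = max(props[prop] * factor, floor)   # never exactly zero
    return props

props = {"E1": 22000.0, "E2": 22000.0, "E3": 8000.0, "G12": 4000.0}   # MPa, placeholders
strengths = {"Xt": 400.0, "Xc": 350.0, "Yt": 400.0, "Yc": 350.0,
             "S12": 60.0, "Zt": 30.0, "Zc": 120.0}
rules = {"warp_tension": {"E1": 0.0}, "warp_compression": {"E1": 0.1},
         "fill_tension": {"E2": 0.0}, "fill_compression": {"E2": 0.1},
         "shear_out": {"G12": 0.1},
         "delamination_tension": {"E3": 0.0}, "delamination_compression": {"E3": 0.1}}
state = {"s11": 420.0, "s22": 120.0, "s33": -10.0, "s12": 25.0}       # MPa, illustrative
print(degrade_stiffness(props, check_failure_modes(state, strengths), rules))
```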
Explaining Damage Mechanism by Using Simulations
Load-displacement curves obtained from the model were compared with the experimental results, as shown in Figure 9. They indicate that the structural stiffness and load capacity were restored to roughly 80-90% of those of the original plate. Because the boundary conditions and material failure models of the FEA can hardly match those of the practical tests exactly, some differences between the FEA results and test results are expected. Meanwhile, the strains predicted by the model were also investigated and correlated well with the experimental results, as seen in Figure 10, which suggests a relatively high precision of the model. Strains 2 and 4 were on the patch side, while strains 1 and 3 were on the back side. The strain on the patch side was smaller, which indicates that the repaired plate was bending towards the back side.
As can be seen in Figures 11 and 12, the stress concentration around the crack tips decreased due to the addition of the patch. The stress near the patch side (top view) was significantly lower than on the back of the Al plate (bottom view). At 80% of the ultimate load, the crack tip region of the Al plate entered the plastic yielding stage and the crack propagated until final ductile fracture. The most dominant damage modes of the GFRP patch were delamination and fiber breakage in the warp yarn direction, as shown in Figure 13. Delamination in compression occurred first in the patch adjacent to the Al plate around the crack and then propagated along the length direction. Fiber breakage in the warp yarn direction occurred later and was mainly located at the two edges of the patch adjacent to the Al plate.
Conclusions
In this study, both experimental tests and numerical analysis were performed to investigate the tensile strength and failure mechanism of the GFRP-repaired Al plate, and the results can be summarized as follows: 1. Three different resins used for the prepregs were compared and no significant difference in strength was found (less than 3%). Moreover, the two different patch configurations showed little influence on the ultimate strength.
2. The GFRP repair's strength recovery ratio was found to be higher than 80% of the original Al plate, which represents an excellent repair efficiency for a single strap repair.
3. The FE model exhibited good accuracy in predicting the load-displacement response and the predicted failure modes were also consistent with experimental observation.
4. Both the experimental and numerical results illustrate that the damage initiated around the crack tip and was mainly adhesive bond failure and interfacial failure between the patch and Al plate. At the ultimate load, final ductile failure in the Al plate and fiber breakage in the warp yarn direction occurred.
|
2020-06-18T09:06:15.421Z
|
2020-06-01T00:00:00.000
|
{
"year": 2020,
"sha1": "c27f169d564ce0e2852b37bf39c54a5ff8825617",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/13/12/2740/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e8c53f140cf47eded2bb6c53c576d1e94fb0db55",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
}
|
3553873
|
pes2o/s2orc
|
v3-fos-license
|
High Throughput Determination of Plant Height, Ground Cover, and Above-Ground Biomass in Wheat with LiDAR
Crop improvement efforts are targeting increased above-ground biomass and radiation-use efficiency as drivers for greater yield. Early ground cover and canopy height contribute to biomass production, but manual measurements of these traits, and in particular above-ground biomass, are slow and labor-intensive, more so when made at multiple developmental stages. These constraints limit the ability to capture these data in a temporal fashion, hampering insights that could be gained from multi-dimensional data. Here we demonstrate the capacity of Light Detection and Ranging (LiDAR), mounted on a lightweight, mobile, ground-based platform, for rapid multi-temporal and non-destructive estimation of canopy height, ground cover and above-ground biomass. Field validation of LiDAR measurements is presented. For canopy height, strong relationships with LiDAR (r2 of 0.99 and root mean square error of 0.017 m) were obtained. Ground cover was estimated from LiDAR using two methodologies: red reflectance image and canopy height. In contrast to NDVI, LiDAR was not affected by saturation at high ground cover, and the comparison of both LiDAR methodologies showed strong association (r2 = 0.92 and slope = 1.02) at ground cover above 0.8. For above-ground biomass, a dedicated field experiment was performed with destructive biomass sampled eight times across different developmental stages. Two methodologies are presented for the estimation of biomass from LiDAR: 3D voxel index (3DVI) and 3D profile index (3DPI). The parameters involved in the calculation of 3DVI and 3DPI were optimized for each sample event from tillering to maturity, as well as generalized for any developmental stage. Individual sample point predictions were strong while predictions across all eight sample events, provided the strongest association with biomass (r2 = 0.93 and r2 = 0.92) for 3DPI and 3DVI, respectively. Given these results, we believe that application of this system will provide new opportunities to deliver improved genotypes and agronomic interventions via more efficient and reliable phenotyping of these important traits in large experiments.
INTRODUCTION
The rate of genetic gain per year for yield potential of wheat over the last two decades has stabilized at <1% per annum (Reynolds et al., 1999;Fischer et al., 2012). Various interventions have been proposed to maintain or improve this rate. Field phenomics, with its potential to non-destructively and remotely-sense crop traits associated with performance in a high-throughput fashion (White et al., 2012;Araus and Cairns, 2014;Deery et al., 2014Deery et al., , 2016Rebetzke et al., 2016;Shakoor et al., 2017), has gained more attention as a promising intervention in recent years. Key physical parameters that are targets for field phenomics include canopy height, early ground cover, distribution, and maintenance of green leaf area, and biomass production.
The importance of canopy height and its relationship to harvest index (defined as the ratio between harvested grain and total above-ground biomass) are well known from the Green Revolution, where the introduction of dwarfing genes resulted in semi-dwarf wheat varieties with increased harvest index and yield (Reynolds and Borlaug, 2006). Canopy height is typically measured with a graduated stick or ruler by holding together a handful of stems from a representative part of a given experimental plot and recording the height to the tip of the spike for an average stem (ignoring the awns) (Rebetzke et al., 2013b). Plant height is elsewhere defined (perhaps more appropriately as canopy height) as the shortest distance between the upper boundary of the main photosynthetic tissues (excluding inflorescences) on a plant and the ground level (Pérez-Harguindeguy et al., 2016). Despite efforts to standardize the canopy height measurement, there is a larger component of subjectivity with operators potentially having a different perception of what constitutes the height of the canopy.
Canopy ground cover (GC) represents the fraction of the soil covered by the crop. High GC is necessary for intercepting light needed for growth, shading the soil to reduce soil evaporation (Fischer, 1981;Botwright et al., 2002;Richards and Rebetzke, 2002;Rebetzke et al., 2004;Mullan and Reynolds, 2010), and for weed competitiveness (Coleman et al., 2001). The GC assessments are generally made from early emergence until complete canopy cover. There are three standard methods for the estimation of GC: (1) Digital photographs taken at constant height over a representative area of the experimental plot and then processed with image analysis software for the classification of vegetation and non-vegetation pixels (Li et al., 2010;Mullan and Reynolds, 2010;Pask et al., 2012;Kipp et al., 2014); (2) Spectral indices such as the normalized difference vegetation index (NDVI), including active spectral sensors like the GreenSeeker R (Trimble, USA), which have shown strong associations with GC up to the stem elongation growth stage (Gitelson et al., 2002); and 3) Visual scores, where expert estimates of GC are made on ordinal scale (e.g., on a 1-9 scale, with 1 = no GC and 9 = complete GC).
Opportunities to increase the yield potential of wheat are now focusing on increasing above-ground biomass, while maintaining high harvest index (Reynolds et al., 2009(Reynolds et al., , 2011. However, measurement of above-ground biomass requires cutting the culms at ground level for a defined portion of the experimental plot (normally 0.1-0.3 m²), and then weighing after drying in an oven until constant weight (Pask et al., 2012). The reliability and resulting confidence in these above-ground biomass measures are limited by: (1) the sample representing a small section of the experimental plot, which could be a misrepresentation if the plot is not uniform; (2) samples are destructive, thereby limiting the number of samples before destroying the entire plot; and 3) sampling and subsequent processing requires transport, drying, and manual handling, which contributes to sample loss and can be restrictive in large experiments. Thence, measurement of above-ground biomass is laborious and subject to large experimental error. Recent developments in remote and proximal sensing for high-throughput field phenotyping have led to proposed alternatives to destructive sampling, including the use of digital photography and NDVI sensors (Li et al., 2010;Pask et al., 2012), across multiple scales (Hawkesford and Lorence, 2017) using both aerial (see review by Yang et al., 2017) and ground platforms (Busemeyer et al., 2013;Andrade-Sanchez et al., 2014;Deery et al., 2014;Barker et al., 2016;Liu et al., 2016;Virlet et al., 2016;Kirchgessner et al., 2017).
Light Detection and Ranging (LiDAR) mounted on a field phenotyping buggy was recently proposed for quantifying a number of traits, including canopy height, GC and above-ground biomass (Deery et al., 2014). LiDAR, as an active sensor, confers many advantages over passive sensing (see also Lin, 2015), including: (1) operation regardless of ambient light conditions; and (2) direct measurement of canopy height and architecture. LiDAR-derived images can avoid some of the limitations of RGB images which have been used for these purposes, namely that changes in ambient light conditions and shadows can result in over or under exposure, thereby reducing image quality and data reliability (Deery et al., 2014).
While the use of LiDAR for the estimation of physical height and above-ground biomass in forestry applications is well established (Lefsky et al., 1999(Lefsky et al., , 2002Clark et al., 2004;Hyyppä et al., 2008;Lucas et al., 2008;Eitel et al., 2013;Kankare et al., 2013;Greaves et al., 2015), its application in crops is still in its infancy. Saeys et al. (2009) estimated crop density and spike number using statistical models for two LiDAR scanning frequencies. Another approach that has been widely adopted is the use of canopy height as a surrogate for crop biomass (Ehlert et al., 2008(Ehlert et al., , 2009(Ehlert et al., , 2010Gebbers et al., 2011;Tilly et al., 2014). Most of these methodologies are based on the extraction of crop height using a surface differencing approach (Louise Loudermilk et al., 2009), where the digital terrain model is subtracted to the crop surface model. More recently, some authors have proposed the combined use of LiDAR-derived canopy height and reflectance information (Eitel et al., 2014;Geipel et al., 2014;Tilly et al., 2015). Also, the option of estimating canopy height using stereo reconstruction from aerial imagery (Bendig et al., 2014;Geipel et al., 2014;Aasen et al., 2015) and ground platforms (Salas Fernandez et al., 2017), has been proposed as an alternative to LiDAR. However, prediction of above-ground biomass from height is unlikely to be of benefit in breeding trials, where variation in plant height is commonly restricted. For this reason, alternative approaches are required to provide robust estimates of biomass production.
In this paper, we present the development and early application of the Phenomobile Lite™ (http://www.plantphenomics.org.au/services/phenomobile/) as an evolution of the original Phenomobile (Deery et al., 2014). The Phenomobile Lite, conceived as a manually-operated buggy, is designed to be lightweight, cost-effective, and transportable across multiple field sites, thereby providing reliable field phenotyping amenable to deployment in multi-site managed environment facilities for targeted trait and germplasm evaluation (Rebetzke et al., 2013a). We describe the algorithms developed for non-destructive measurement of canopy height, GC, and above-ground biomass using LiDAR data, and demonstrate the utility of the Phenomobile Lite and LiDAR for use in plot-scale phenotyping within genetics, physiology or agronomy studies, or in a plant breeding program.
Phenomobile Lite Description and Components
The Phenomobile Lite is a portable buggy consisting of a lightweight extruded aluminum frame with three wheels and an instrument platform (Figure 1). The front-left leading wheel is powered by an electric motor with manual speed control and the rear wheel, trailing the front powered wheel, acts as a caster wheel for steering. The adjustable wheelbase on the Phenomobile Lite can accommodate different plot widths (1.75-2.20 m). The height-adjustable instrument boom (ground clearance of 1.5 m), located at the front of the frame, can be adjusted as the crop develops to maintain a constant distance above the canopy and restrict the plot within the field of view of the instruments. At the rear of the unit, the operator has access to a digital display with touchscreen for controlling the devices.
The Phenomobile Lite comprises the following instrumentation:

1) A high-frequency laser scanner or LiDAR. The model selected (SICK LMS 400-2000, SICK AG, Waldkirch, Germany) works on the phase-shift principle for estimating distance: light that travels to an object and back is shifted in phase relative to the emitted light, with the phase shift proportional to the distance between the sensor and the object (a minimal illustration of this principle follows the list below). The laser operates at 650 nm (visible red light) and 4 mW of power, generating a spot diameter of ca. 2 mm at 3 m distance. The scanning rate is 270 Hz with an angular resolution of 0.1°.

2) An inertial navigation unit (IMU) with GPS for registering the position and orientation of the LiDAR. The IMU (Spatial, Advanced Navigation, Australia) has 0.2° accuracy and 0.6 m horizontal accuracy when differential GPS corrections are provided. It can operate over a broad range of temperatures and presents a minimal form factor (37 grams).

3) An incremental wheel encoder (SICK DFV60A, SICK AG, Waldkirch, Germany) with a maximum angular resolution of 65,536 counts per revolution, which translates to sub-millimeter linear resolution.

4) A computer with touch screen (Toughpad, Panasonic, Osaka, Japan).

5) Additionally, the Phenomobile Lite can integrate other instruments such as an active NDVI sensor (GreenSeeker, Trimble, USA) and a digital camera (Canon 6D, Canon Inc., Tokyo, Japan), which are triggered by the control software based on traveled distance or time intervals.
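As a simple illustration of the phase-shift ranging principle used by the LiDAR, the distance to a target can be recovered from the measured phase shift of an amplitude-modulated beam. The 10 MHz modulation frequency in the sketch below is purely illustrative and is not the LMS 400 specification.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s


def phase_shift_distance(phase_shift_rad, modulation_freq_hz=10e6):
    """Distance from the measured phase shift of an amplitude-modulated beam:
    d = c * dphi / (4 * pi * f_mod). The modulation frequency is an assumption."""
    return C * phase_shift_rad / (4.0 * np.pi * modulation_freq_hz)


# Example: a phase shift of 1 rad at 10 MHz corresponds to roughly 2.4 m
print(round(phase_shift_distance(1.0), 2))
```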
The operating software and user-interface were developed in Java programming language (Oracle, https://www.oracle.com/java/ index.html), designed for the non-technical user (i.e. intuitive and user-friendly). The experiments are typically organized in field layouts of columns and rows. Whereby, we considered columns to be experimental plots in the direction of the sowing and rows the position of each experimental plot within the columns. To acquire data, the user drives the Phenomobile Lite to the first plot of the experiment and sets the experiment name on the control software. Then, the user presses a start button and drives the Phenomobile Lite along the column, without any additional input. Maintaining a constant speed is not critical as the Phenomobile Lite registers the speed with the GPS/IMU and wheel encoder, which is later used to post process the data. A cruise control function is available to maintain the electric wheel at a constant speed. At the end of the column, the user presses "Stop" and turns the Phenomobile Lite to the start of the next column. A map is displayed in real time with the GPS track and an aerial display of the experiment. The direction of the operation is automatically taken into account based on the heading information from the IMU/GPS, which simplifies the data processing when the operation is done in zigzag mode. Once all the columns from the experiment are completed, the raw data is compressed and uploaded to the web server for processing.
Phenomobile Lite Data Capture and Pre-processing
Data Files
The data from the LiDAR and associated instruments are collected on the touch screen computer and stored in the following data formats: 1) LiDAR data is stored as binary files (one file per column). The data format was custom-made and is composed of a header and a collection of LiDAR scans with an associated header for each scan. The file header contains basic information about the location of the LiDAR unit and offsets with the GPS/IMU and other instruments. It also includes the configuration of the LiDAR device (scan rate and angular resolution). Then follows a sequence of scans, each one with a header defining information associated with each scan, such as the timestamp, encoder count, GPS/IMU position and orientation. Finally, the scan contains the range and intensity observations (generally 700 points per scan). With the LiDAR operating at rate of 270 scans per second, each file will contain a number of scans that will depend on the length of the column and speed of operation. 2) GPS/IMU position and orientation, containing the output from the GPS/IMU with GPS timestamps, is stored as comma separated values (CSV). 3) GreenSeeker NDVI measurements are linked to the GPS/IMU position and timestamps. These measures include the red and near-infrared reflectance as well as the derived vegetation indices calculated by the GreenSeeker unit (NDVI and RVI). This information is stored as CSV. 4) Coordinates for the trigger events for the RGB camera are stored as CSV. The images from the camera also contain GPS information such as approximate position and GPS timestamps used to link the trigger event with the actual image and therefore plot.
The files are identified by the name of the experiment, date, column/row coordinates and column number for unique identification. It is possible to add suffixes to denote multiple passes for a given column on a single day (e.g., column pass before and after biomass sample).
Processing Workflow
The processing workflow is designed as a pipeline with multiple steps (Figure 2) and was developed with Python 2.7 (Python Software Foundation, https://www.python.org/) and Go (Google, https://golang.org/) programming languages. The processing architecture is designed to be amenable to parallel computing and deployment into cloud infrastructures. The workflow is presented to the user as a web interface that provides access control, data upload, interactive visualization, plot selection and export of the results to standard formats. A web-based architecture allows the user to process the data without installing any specific software other than a web browser.
The raw data, collected in the field and comprising the LiDAR scans, their location, and orientation from the GPS/IMU and wheel encoder, are compressed and uploaded to the web server. Once the data is uploaded, it is transformed from raw LiDAR returns (containing the range to the object and angle) into Cartesian coordinates, which creates the point cloud for the entire column with each point comprising an x, y, and z coordinate (a minimal sketch of this conversion is given below), where the x coordinate is the position across the column (along the width of the column), the y coordinate is the position down the column (along the length of the column), and the z coordinate is the vertical position. The point cloud can contain a number of outliers and spurious points that usually appear when the laser hits the edge of leaves or in very bright light conditions. In the next step, the point cloud is filtered using the Point Cloud Library (Rusu and Cousins, 2011) and the statistical outlier removal filter (Rusu et al., 2008). Once the point-cloud is clean, a rasterised version of the point-cloud is displayed on the web interface, thereby enabling the user to visualize all the experimental plots for a particular column (Figure 3). The user then draws a rectangle around every experimental plot, which defines the areas of interest for each plot on which the feature extraction algorithms will be applied. The user-supervised selection of the experimental plots enables the user to avoid areas of the plot that would normally be discarded in the field experiment (e.g., plot borders) and prevents measurements on areas where biomass has been removed for sampling or where a plot has been damaged (e.g., wheel tracks). The plot-selection could be automated in the future with image analysis techniques.

FIGURE 2 | Schematic of the Phenomobile Lite LiDAR data processing workflow, whereby each box represents the following steps (from left to right). Data collection: raw data collection with the Phenomobile Lite touch screen computer. Point Cloud Creation: the collected LiDAR data is converted into X, Y, Z coordinates. Point Cloud Cleaning: the LiDAR data is cleaned and outlier points, resulting from partial returns, are removed. Plot Extraction: LiDAR data is segmented into experimental plots by the user through a web interface (Figure 3). Feature Extraction: trait data are extracted from the LiDAR data for each experimental plot. The outgoing data file format for each step is indicated in the respective red rectangle.

FIGURE 3 | Web interface of the LiDAR data processing pipeline showing a rasterised point-cloud for a column of experimental plots. The user-selected sections of the experimental plots are denoted with numbered orange rectangles. Note how the user has avoided existing biomass samples (blue boxes to the right of the numbered orange rectangles).
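The point-cloud creation step described above can be illustrated with a short sketch. It assumes a nadir-looking line scanner at a known height, with the along-column coordinate supplied by the wheel encoder or GPS/IMU; roll and pitch corrections and the actual binary file format are omitted, so this is a simplified stand-in rather than the published pipeline code.

```python
import numpy as np


def scan_to_points(ranges_m, angles_rad, y_travel_m, lidar_height_m):
    """Convert one LiDAR scan line (range, beam angle) into x, y, z points.

    x: position across the column, y: position along the column (from the
    wheel encoder / GPS-IMU travel distance), z: height above the ground.
    """
    ranges = np.asarray(ranges_m)
    angles = np.asarray(angles_rad)
    x = ranges * np.sin(angles)                    # across-track offset
    z = lidar_height_m - ranges * np.cos(angles)   # height above ground
    y = np.full_like(x, y_travel_m)                # along-track position
    return np.column_stack([x, y, z])


# Example: a single synthetic scan taken 2.3 m down the column
pts = scan_to_points(ranges_m=[2.6, 2.1, 2.0],
                     angles_rad=np.deg2rad([-35, 0, 35]),
                     y_travel_m=2.3, lidar_height_m=2.6)
print(pts)
```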
Once the areas delimiting the plots have been defined, the point-cloud representing each experimental plot is extracted. Finally, the point clouds associated with each plot are processed using different algorithms for the extraction of biologically meaningful measures representing traits of interest. Once the processes are completed, the pipeline generates a table with the associated measurements for each plot. The user can download these measures, linked to the traits of interest, as a CSV file or upload them to an existing online database or virtual laboratory such as SensorDB (Salehi et al., 2015).
The workflow, described in Figure 3, is controlled by a coordinator server that monitors the queue of the different tasks associated with the pipeline and delegates the tasks to different processing instances that can be distributed across several servers. This coordination allows a potential elastic load balancing by launching multiple instances of the data processing software during peak sampling periods of the field season or where there may be many concurrent users.
Trait Extraction from LiDAR
The LiDAR data provides a 3D representation of the canopy which can be processed in multiple ways, enabling the extraction of multiple measurements from the same original raw data. In this manuscript, we focus on three key traits that are relevant in breeding, agronomy and crop physiology field experiments: canopy height, ground cover and above-ground biomass.
Canopy Height
Determining the canopy height with LiDAR requires estimation of the ground elevation and subtracting this from the absolute height of the points. Most of the published methodologies are based on the determination of a crop surface model and a digital terrain model, where the difference provides the crop height (Louise Loudermilk et al., 2009). The digital terrain model can be obtained from a scan with bare soil, while the crop surface model is calculated from the topmost points of the point cloud, using a selection based on a top percentile (Hämmerle and Höfle, 2014;Friedli et al., 2016). For the Phenomobile Lite, the nominal distance from the LiDAR sensor to the ground is fixed; therefore, it could be measured manually or automatically determined from the LiDAR. This provides a relative coordinate system where the ground elevation is always known. To avoid the manual measurement of the LiDAR position, the ground elevation for a given column of plots was assumed as the peak of the histogram (i.e., mode) of heights in the point cloud ( Figure 4A). The prominence of this peak is evident in Figure 4A and arises because a column regularly includes sections of space between plots with bare soil. For a given experimental plot (Figures 4B,C), canopy height is estimated as the difference between the ground elevation and a quantile value in the z coordinate. To determine the optimum quantile defining the top of the canopy, quantile values ranging from 0.8 to 1.0 at increments of 0.005 were tested against manual height measures with a ruler. The root mean square error (RMSE) was calculated between LiDAR canopy height and canopy height measured manually.
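The following minimal Python sketch illustrates this procedure: the ground elevation is taken as the mode of the height histogram for a whole column, and the canopy top as the 0.955 quantile of a plot's z values. The 0.01 m histogram bin width and the synthetic data are assumptions for illustration only, not part of the published pipeline.

```python
import numpy as np


def canopy_height(z_column, z_plot, quantile=0.955, bin_size=0.01):
    """Estimate canopy height for one plot from LiDAR z coordinates.

    Ground elevation is the mode of the height histogram for the whole
    column (which includes bare soil between plots); the canopy top is a
    quantile of the plot's z values.
    """
    z_column = np.asarray(z_column)
    counts, edges = np.histogram(
        z_column,
        bins=np.arange(z_column.min(), z_column.max() + bin_size, bin_size))
    ground = edges[np.argmax(counts)] + bin_size / 2.0  # histogram mode
    top = np.quantile(z_plot, quantile)                  # canopy top
    return top - ground


# Example with synthetic data: soil points near z = 0, canopy up to ~0.9 m
rng = np.random.default_rng(0)
soil = rng.normal(0.0, 0.01, 5000)
crop = rng.uniform(0.1, 0.9, 5000)
cloud = np.concatenate([soil, crop])
print(round(canopy_height(cloud, cloud), 3))
```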
Canopy Ground Cover
We evaluated two LiDAR algorithms for the classification of vegetation and soil to derive GC. In both cases, the LiDAR point cloud is transformed into a raster image that then is evaluated for the estimation of GC.
Images generated from the red reflectance of the LiDAR
The LiDAR's red laser enables discrimination of plants from soil based on the assumption that green tissue from plants will absorb most of the red light, while the soil will have a higher reflectance in the red region of the spectrum. Therefore, the histogram analysis of the red reflectance from the LiDAR typically shows two distinct peaks for the vegetation and soil (Figure 5). This was used to estimate GC whereby LiDAR intensity less than five was classified as vegetation and LiDAR intensity of five and above was classified as soil (Figures 6a,b).
Height analysis
Since the LiDAR provides a 3D representation of the canopy, any organ or tissue above the ground can be considered vegetation and therefore, the calculation of GC can be derived from that classification (Figures 6c,d). This method may be suitable when the canopy is senesced, with little green tissue, or if the soil is dark. However, it may be unreliable during early crop stages where the height of the canopy is too close to the ground or when plants are grown in deep furrows.
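A minimal sketch of both ground-cover classifications is given below. For brevity, GC is approximated here as the fraction of classified returns rather than computed from a rasterised image as described above; the intensity threshold of five follows the text, while the 0.05 m height offset and the synthetic data are assumptions.

```python
import numpy as np


def ground_cover_reflectance(intensity, threshold=5):
    """GC from LiDAR red reflectance: fraction of returns classified as
    vegetation (intensity below the threshold)."""
    return float(np.mean(np.asarray(intensity) < threshold))


def ground_cover_height(z, ground_z, min_height=0.05):
    """GC from point height: fraction of returns higher than a small offset
    above the ground elevation (the offset is an assumption, used to avoid
    counting furrows and rough terrain as vegetation)."""
    return float(np.mean(np.asarray(z) - ground_z > min_height))


# Example on synthetic data for one plot
rng = np.random.default_rng(1)
intensity = np.concatenate([rng.uniform(0, 4, 600),     # vegetation returns
                            rng.uniform(6, 20, 400)])   # soil returns
z = np.concatenate([rng.uniform(0.1, 0.4, 600),         # canopy heights
                    rng.normal(0.0, 0.01, 400)])        # soil heights
print(ground_cover_reflectance(intensity), ground_cover_height(z, ground_z=0.0))
```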
For the validation of the LiDAR-derived GC, RGB digital color images were acquired with the Canon 6D DSLR camera mounted on the Phenomobile Lite. The camera was triggered automatically based on distance, and one image was taken every 1 m. The focus was fixed, and camera settings were set to manual with fast exposure and high ISO for minimizing any motion blurring. The camera integrates an internal GPS that is configured to set the internal clock in sync with the GPS time. Since the internal GPS update rate is 1 s and does not have DGPS capabilities, there was not enough accuracy for geolocating the RGB images to the plots. Instead, we used the GPS timestamp for each image and the GPS-IMU track. Once the plots have been extracted in the data processing pipeline it is possible to attribute each RGB image to a different plot. The result is multiple RGB images per plot, depending on the length of the plots. The RGB images were analyzed using the methodology and software developed in Li et al. (2010). The software imports the RGB images and estimates the portion of green pixels based on the SAVI green vegetation index (Huete, 1988) for each image. The mean GC of each plot was calculated averaging the images belonging to the plot. We also used NDVI measurements acquired simultaneously with a GreenSeeker sensor mounted on the Phenomobile Lite. The raw NDVI measurements from the GreenSeeker were registered using a serial port (RS232) and integrated with the GPS-IMU track. The NDVI measurements were averaged at the plot level using the position and plot location.
Above-Ground Biomass
FIGURE 5 | Frequency distribution of LiDAR red reflectance for a typical experimental plot. Green tissue from plants will tend to absorb the red light, while soil will tend to reflect the red light. GC was estimated using a binary rule, where LiDAR intensity less than five was classified as vegetation and LiDAR intensity five and above was classified as soil.

Two LiDAR methods for the estimation of above-ground biomass were evaluated:
(a) Voxel-based method
The 3D box containing all the points for a given experimental plot was subdivided into voxels of regular dimensions (height = width = length), creating a 3D grid with the number of elements being a function of the total size of the 3D box for the experimental plot and the voxel size. Only the points above a 10 cm ground offset were included in the analysis. The ground elevation was calculated using the same methodology that was used for the canopy height. The coordinates of each point in the point-cloud were checked to allocate each point to its respective voxel. The resulting 3D grid, containing the number of points within each voxel, was filtered to eliminate voxels with fewer than 10 points as a way to remove spurious points and outliers that may not have been eliminated by the cleaning algorithm. The ratio of the number of voxels containing points to the number of subdivisions in the horizontal plane (width × length) was calculated and herein referred to as the 3D Voxel Index (3DVI). The 3DVI was calculated for voxel sizes ranging from 10 mm to 200 mm, at 10 mm increments. To determine the optimal voxel size, the RMSE and coefficient of determination (r²) were determined for the linear regression between 3DVI, at the given voxel size, and above-ground biomass measured manually.
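A minimal Python sketch of the 3DVI calculation is given below. The 10 cm ground offset and the 10-point voxel filter follow the description above, while the 40 mm default voxel size and the variable names are illustrative assumptions.

```python
import numpy as np


def compute_3dvi(points, voxel_size=0.04, ground_z=0.0, ground_offset=0.10,
                 min_points_per_voxel=10):
    """3D Voxel Index: occupied voxels divided by the number of grid cells
    in the horizontal (width x length) plane.

    points: (N, 3) array of x, y, z coordinates for one plot. Points within
    `ground_offset` of the ground and voxels with fewer than
    `min_points_per_voxel` returns are discarded.
    """
    points = np.asarray(points, dtype=float)
    pts = points[points[:, 2] - ground_z > ground_offset]
    if len(pts) == 0:
        return 0.0
    # Assign each point to a voxel and count returns per voxel
    idx = np.floor((pts - pts.min(axis=0)) / voxel_size).astype(int)
    _, counts = np.unique(idx, axis=0, return_counts=True)
    occupied = int(np.count_nonzero(counts >= min_points_per_voxel))
    # Horizontal grid size spanned by the plot (width x length cells)
    x_cells = max(int(np.ceil(np.ptp(points[:, 0]) / voxel_size)), 1)
    y_cells = max(int(np.ceil(np.ptp(points[:, 1]) / voxel_size)), 1)
    return occupied / float(x_cells * y_cells)
```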
(b) Profile-based method
The point-cloud for a given experimental plot was divided into vertical layers of 10 mm and, for each layer, the fraction of points (the number of points in the layer divided by the total number of points in the point-cloud) was calculated. Then, starting from the top, each layer was corrected by a factor, analogous to the extinction coefficient in Beer's law (Richardson et al., 2009), calculated as the exponential of the correction factor (k) multiplied by the total fraction of points intercepted above that layer. The sum of the corrected fraction of points over all layers was calculated and is herein referred to as the 3D Profile Index (3DPI), where i is a given 10 mm vertical layer, with 0 and n the lowermost and uppermost layers respectively; p_i is the number of LiDAR points for a given layer; p_t is the total number of LiDAR points for all layers; and p_cs is the cumulative sum of LiDAR points intercepted above a given layer. Figure 7 shows an illustrative example of the profile-based method. The optimal value of k was determined by calculating 3DPI for k values ranging from −3.5 to 2.25 at increments of 0.05, and determining the RMSE and coefficient of determination between 3DPI, at the given k value, and above-ground biomass measured manually. A 10 cm ground offset was used to determine the lowest layer, and the ground elevation was calculated using the same methodology that was used for the canopy height.
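Since the display equation is not reproduced above, the following sketch spells out one plausible reading of the 3DPI definition, in which each layer's fraction of points is weighted by exp(k · p_cs/p_t); the exact published form and the default k used here are assumptions, not values taken from the study.

```python
import numpy as np


def compute_3dpi(z, ground_z=0.0, ground_offset=0.10, layer=0.01, k=0.78):
    """3D Profile Index: sum over 10 mm layers of each layer's fraction of
    returns, corrected from the top down by an exponential factor driven by
    the fraction of returns already intercepted above that layer
    (Beer's-law analogy). k = 0.78 is only a placeholder value.
    """
    heights = np.asarray(z, dtype=float) - ground_z
    heights = heights[heights > ground_offset]
    if len(heights) == 0:
        return 0.0
    p_t = float(len(heights))
    edges = np.arange(ground_offset, heights.max() + layer, layer)
    counts, _ = np.histogram(heights, bins=edges)
    index = 0.0
    p_cs = 0.0  # cumulative points intercepted above the current layer
    for p_i in counts[::-1]:  # iterate from the topmost layer downwards
        index += (p_i / p_t) * np.exp(k * p_cs / p_t)
        p_cs += p_i
    return index
```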
Field Experiments
We conducted three separate experiments for the validation of canopy height, canopy ground cover, and above-ground biomass using the Phenomobile Lite. Canopy height was validated in Experiment 1 (EXP1), canopy ground cover was validated in Experiment 2 (EXP2) and above-ground biomass was validated in Experiment 3 (EXP3).
(a) Experiment 1: Validation of Canopy Height
EXP1 comprised a selection of 18 near-isogenic wheat lines, arising from mutagenesis of the Brazilian bread wheat cultivar Maringá, with allelic variants of the Rht-B1 allele and therefore known phenotypic variation for height (Chandler and Harding, 2013). The experiment was sown on 12th June 2014, at Yanco NSW (34.62S, 146.43E, elevation 164 m) in SE Australia, and comprised three replicate experimental plots per genotype (54 experimental plots in total) sown in a randomized complete block design (orientated North-South). Experimental plots were 12 m long and 10 rows across with a row spacing of 0.18 m and sowing density of 250 seeds/m². Phenomobile Lite measurements with LiDAR were made on the 23rd Oct. 2014. On the same day, canopy height was measured manually following Rebetzke et al. (2013b), with three replicate measures of canopy height per experimental plot.
(b) Experiment 2: Validation of Canopy Ground Cover
EXP2 comprised 90 entries of commercial and advanced breeder lines and was sown on the same date and location as EXP1 into a randomized complete block design with three replicate experimental plots per genotype (270 experimental plots in total and orientated North-South). Experimental plots were 6 m long and 10 rows across with a row spacing of 0.18 m and sowing density of 250 seeds/m².
(c) Experiment 3: Validation of Above-Ground Biomass
Thirteen contemporary bread wheat (Triticum aestivum L.) and two triticale (× Triticosecale) genotypes were sown on the 12th June 2015 at the CSIRO Agriculture and Food Ginninderra Experiment Station (GES), Canberra, ACT, Australia (35.20S, 149.09E, elevation 577 m). The genotypes were selected to represent a broad range of canopy architecture, including very erect to prostrate, and thereby enable robust evaluation of the LiDAR for phenotyping. The experimental plots were 15 m long and 10 rows across with a row spacing of 0.18 m, allowing for multiple destructive assessments of above-ground biomass. Sowing density was 250 seeds/m² and, in five genotypes, an additional low-density treatment (125 seeds/m²) was added to increase the range of above-ground biomass. The experiment comprised 60 experimental plots in total (three replicates of 15 genotypes sown at 250 seeds/m² and five genotypes sown at 125 seeds/m²), orientated North-South and sown in a randomized complete block design.
Eight destructive above-ground biomass sampling events were undertaken at different phenological stages from tillering to maturity. Plant development stage was recorded at each above-ground sampling event, using the Zadoks development scale (Zadoks et al., 1974; Table 1). Above-ground biomass was determined from shoots cut at ground level from the central six rows of each plot along 1.0 m in length (sample area of 1.08 × 1.0 m = 1.08 m²). The dry weight was determined from the samples after drying at 65 °C until reaching a constant dry weight. LiDAR measurements with the Phenomobile Lite were obtained on the same day or immediately prior to an above-ground biomass sampling event. For the LiDAR analysis, we selected the section of the plot where the above-ground biomass sampling was going to be performed (ca. 1.0 m²).
Canopy Height
The results from the canopy height validation (EXP1) are shown in Figure 8. The optimum quantile was 0.955, as determined by the smallest RMSE between measurements made on 23rd Oct. 2014 of canopy height measured manually and LiDAR canopy height, derived from quantile values ranging from 0.8 to 1.0 at increments of 0.005 in the z coordinate ( Figure 8A). The coefficient of determination (r²) and RMSE between canopy height measured manually and LiDAR canopy height, with data aggregated by genotype ( Figure 8B) was 0.993 and 0.017 m respectively (data from 23rd Oct. 2014), with a slope of 0.943.
Canopy Ground Cover
Canopy ground cover (GC) estimates derived from the LiDAR using red reflectance and height were compared with GC derived from the RGB images using the protocol described in Li et al. (2010), for each experimental plot in EXP2 on 13th August 2014. LiDAR red reflectance GC was strongly associated with RGB GC (Figure 9A, r² = 0.82), and NDVI ( Figure 9C, r² = 0.88), but the LiDAR red reflectance tended to underestimate GC compared with the RGB camera (slope = 0.80) and NDVI (slope = 0.70). The association between NDVI and RGB GC was also strong (Figure 9E, r² = 0.78). The LiDAR height GC resulted in the smallest coefficient of determination values between RGB GC ( Figure 9B, r² = 0.46) and NDVI ( Figure 9D, r² = 0.60). The LiDAR height method clearly underestimated GC when compared to the values obtained from the RGB images and NDVI.
To evaluate the robustness of the GC methodologies as the crop develops, GC derived from the LiDAR using both red reflectance and height-based methods were compared to GreenSeeker NDVI measurements for each experimental plot in EXP2 on five different occasions from tillering to head emergence (Figure 10). The results show a strong association between LiDAR GC and NDVI in the early developmental stages with coefficients of determination ranging from 0.77 to 0.90 for the reflectance method and 0.60 to 0.82 for height, for the first three dates. However, for the final two dates, the range of NDVI and GC decrease and the association with NDVI weakens for both methodologies, with r² ranging from 0.25 to 0.34. This suggests that NDVI saturates at GC values above 0.8 as previously reported (Prabhakara et al., 2015). The association between the two LiDAR GC methodologies was highly linear (Figure 10C) and the r² progressively increased from 0.61, for the first run on the 13th August 2014, to 0.92 by canopy closure. However, the height method tended to underestimate GC when compared with the reflectance method, especially at the earlier dates.
Above-Ground Biomass
Data from EXP3 was used to evaluate and compare the two LiDAR methodologies presented here for estimating aboveground biomass. We evaluated the effects of the key parameters, namely the voxel size for the 3DVI and the correction factor, k, for the 3DPI, to determine the optimum values across the different sample events (Figure 11). In both cases, the indices were calculated for a range of parameter values (voxel sizes from 10 to 200 mm, at 10 mm increments, and k from −3.5 to 2.25 at increments of 0.05). For each parameter value, at each sampling event, the RMSE and r² were calculated for the linear regression between 3DVI, 3DPI, and the field measurements of above-ground biomass. For the 3DVI, the voxel size had a strong effect on the r² and RMSE for all sample dates, with r² values ranging from 0.0 to 0.64 depending on the sample date and voxel size. 3DPI was less sensitive to changes in the k parameter for each date and changes only marginally affected the coefficient of determination, thereby providing more stable above-ground biomass estimates across the evaluated range of k.
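The parameter sweeps described above can be summarized in a small grid-search routine. The sketch below assumes the compute_3dvi function from the earlier sketch and hypothetical plot_point_clouds and measured_biomass inputs; it selects the parameter value that maximizes r² of a simple linear fit, whereas the study reports both r² and RMSE per sample event.

```python
import numpy as np


def sweep_parameter(param_values, index_fn, plots, biomass):
    """Grid-search a single index parameter (voxel size or k) against manual
    biomass measurements.

    index_fn(points, value) -> scalar index for one plot; `plots` is a list
    of point clouds and `biomass` the matching manual measurements.
    Returns (best_value, r2, rmse) of the linear fit index -> biomass.
    """
    best = (None, -np.inf, np.inf)
    y = np.asarray(biomass, dtype=float)
    for value in param_values:
        x = np.array([index_fn(p, value) for p in plots])
        slope, intercept = np.polyfit(x, y, 1)       # linear regression
        pred = slope * x + intercept
        ss_res = np.sum((y - pred) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
        if r2 > best[1]:
            best = (value, r2, rmse)
    return best


# Example call (voxel sizes in metres, 10-200 mm at 10 mm steps):
# best_voxel, r2, rmse = sweep_parameter(
#     np.arange(0.01, 0.21, 0.01),
#     lambda pts, v: compute_3dvi(pts, voxel_size=v),
#     plot_point_clouds, measured_biomass)
```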
Above-ground biomass was estimated using a combination of Equations (4, 6) for 3DVI (Figure 12A) and Equations (5, 7) for 3DPI (Figure 12B). In each case, the equations and the optimal voxel size or k, for 3DVI and 3DPI respectively, are given in Table 2.
DISCUSSION
Field phenotyping still remains a bottleneck in the pipeline of high throughput phenotyping (Araus and Cairns, 2014), where limited options are readily available for performing measurements of physiological traits at a large scale (Furbank and Tester, 2011). The Phenomobile Lite was designed for routine operation in large field experiments and breeding trials, and deployment in such applications has clear advantages over current practice. For example, the Phenomobile Lite is easily transported to the field, thereby overcoming a major limitation of fixed phenotyping platforms where experiments are constrained to their occupied space (Virlet et al., 2016; Kirchgessner et al., 2017). When operated at walking speed, the Phenomobile Lite can measure multiple traits simultaneously on ∼800 plots of 10 m² per hour, and measurements can be repeated at different developmental stages. The simple operation of the platform ensures that non-technical users can operate the instrument, and its design is amenable to automation and autonomous navigation in future versions. Further, the Phenomobile Lite is modular, enabling the integration of multiple sensors. Herein we demonstrate the integration of a digital RGB camera and an active NDVI sensor that operate in coordination with the LiDAR using a standard spatial reference provided by a GPS/IMU, and present and validate LiDAR-based algorithms for providing high-throughput non-destructive estimates of canopy height, ground cover and above-ground biomass.
High Accuracy of LiDAR Canopy Height
The low RMSE of 0.017 m (r²: 0.993, slope: 0.943) between LiDAR canopy height and canopy height measured manually was consistent with other reported accuracies such as 0.018 m in wheat (Virlet et al., 2016), 0.024 m in triticale (Busemeyer et al., 2013), ∼0.03-0.06 m in barley (Tilly et al., 2015), and ∼0.05 m in rice (Tilly et al., 2014). Although estimating height from the LiDAR is an obvious use of this instrument, it still required determination of the top of the canopy and the ground elevation. The top of the canopy was determined by analyzing the frequency distribution of height from the LiDAR and using the optimum quantile of 0.955, as defined in the EXP1 validation. This optimum value is smaller than the value (0.99) obtained in Friedli et al. (2016), using a terrestrial LiDAR scanner (TLS), but it is close to the 0.95 quantile used originally in Deery et al. (2014) and greater than the 0.9 used by Hämmerle and Höfle (2014). The TLS used by Friedli et al. (2016) and Hämmerle and Höfle (2014) performs the scans from a single point, creating spheres of point clouds where the laser beam penetrates into the canopy from a tilt angle that is steeper as one gets away from the scanning point. As the laser beam will not penetrate much into the canopy after closure, most of the points will come from the top of the canopy, which could explain the higher quantile when compared with a line scanner system that scans from a nadir perspective. In this study, the ground elevation was determined for a given experimental column from the bare soil between experimental plots. For closely-spaced experimental plots with little to no bare soil in-between, the estimation of ground elevation may fail and would require manual specification of the distance from the LiDAR to the ground. Most published methods for the estimation of canopy height from LiDAR or aerial photography are based on the determination of crop surface models (Hoffmeister et al., 2013), which require determining the ground elevation from a scan with bare soil. In this case, the fixed geometry of the LiDAR with respect to the ground makes this step unnecessary. Determination of the top of the canopy assumes a uniform canopy, which may not be the case for plots with poor establishment or lodging. In these cases, given the large number of sampling points for canopy height, the plot could be subdivided into smaller areas where the presented algorithm is applied. This could provide a measurement of plot uniformity from the statistical distribution of plot height, which could lead to an indicator of plot health or a lodging score.
Relationships between LiDAR-Based Ground Cover, NDVI and RGB-Based Ground Cover
Canopy GC is an important trait in wheat, relevant in early developmental stages for both enhancing water-use efficiency through minimizing water loss through soil evaporation and for maximizing canopy light interception (Fischer, 1981; Richards and Rebetzke, 2002; Rebetzke et al., 2004; Mullan and Reynolds, 2010). Digital RGB images and NDVI are commonly used for quantifying GC and can provide relatively high throughput (Rebetzke et al., 2013a). However, there are potential issues with both approaches. As a passive sensor, RGB imaging could be adversely impacted by the light conditions (over- and under-exposure) and, as the crop develops, classifying the green vs. non-green pixels could also be problematic because of shadowing from the canopy or senescence. The GC measurements made using active NDVI sensors, such as the GreenSeeker® (Trimble, USA), are not impacted by light conditions but they can be negatively impacted by the soil reflectance (Huete, 1988). Further, when used for the determination of GC, variation in canopy greenness can influence the measurement of NDVI, resulting in potentially reduced accuracy in the presence of different nitrogen status or during the onset of senescence (Hansen and Schjoerring, 2003). In this paper, we have shown that LiDAR can be used in two different ways to determine GC: (1) using red reflectance from the LiDAR's red laser to separate vegetation from soil; and (2) using height as a threshold to determine the vegetation above that height. These two approaches were tested and compared with GC estimated from RGB images and NDVI. The limitations for NDVI mentioned earlier also apply to the LiDAR red reflectance, namely varying soil reflectance and varying canopy greenness. For instance, a wet, dark soil could lead to low reflectance values that could be close to the threshold used for vegetation. In the case of vegetation, the onset of senescence or presence of chlorosis could lead to elevated red reflectance with values similar to the soil. The use of LiDAR height for GC avoids these two issues, but the approach is problematic when plants are small and their height is proximal to soil undulations such as furrows. A combination of both approaches depending on crop height could be employed to accurately measure ground cover from emergence through to maturity.
At the initial crop stages, GC estimated from LiDAR red reflectance presents a comparable alternative to RGB images or NDVI (Figures 9A,C). The use of RGB images for GC, whilst potentially simple and cost effective to acquire, still require image capture for each plot and data processing using existing image processing algorithms (e.g., Casadesús et al., 2007;Li et al., 2010). Both tasks require human intervention and are potentially prone to errors and subjectivity, which makes necessary the development of robust automatic segmentation algorithms and processing pipelines . The use of active sensors, such as LiDAR or GreenSeeker, have the potential advantage of reliability under a range of light conditions. For measurements collected from stem elongation to post-canopy closure, LiDAR height GC offers advantages over NDVI due to the absence of signal saturation. This saturation was evident in the plateau in NDVI that occurred around anthesis in EXP2 (23 Sept. 2014; see Figure 10), similar to the evolution of NDVI reported elsewhere. For example, Rebetzke et al. (2016) reported the evolution of NDVI and LiDAR, in relation to canopy stay green, from preanthesis to maturity, where NDVI reached a plateau pre-anthesis and decreased during grain-filling. In this case, the alternative of using LiDAR height GC avoids this issue and can still provide an estimate of ground cover with non-green vegetation.
LiDAR Predictions of Biomass Were Strongly Associated with Above-Ground Biomass
The capacity to non-destructively estimate above-ground biomass using LiDAR is a critical outcome of this work, given the lack of rapid and non-destructive alternatives, especially after canopy closure in wheat. An additional motivating factor for this work was to overcome the limitations of NDVI for estimating aboveground biomass, namely, its saturation after canopy closure and the confounding influences of canopy greenness and soil reflectance. We tested two different algorithms for the determination of aboveground biomass from LiDAR. The first approach tested (3DVI), was based on the estimation of volumetric quantification of the above-ground biomass. This was performed through voxelization of the LiDAR point cloud and counting the number of voxels occupied by the canopy. For the second approach (3DPI), the vertical distribution of the LiDAR returns through the canopy were analyzed to develop a canopy density profile. That profile was integrated, resulting in the fraction of LiDAR points intercepted by the canopy.
Both 3DVI and 3DPI were strongly correlated with aboveground biomass after and including spike emergence (Figure 12), thereby overcoming limitations of using NDVI post canopy closure (Huete, 1988;Hansen and Schjoerring, 2003). The 3DPI outperformed the 3DVI after spike emergence (Z55, 23 Oct 2014), reaching a maximum coefficient of determination of 0.76 and minimum RMSE of 11.04% during grain-filling. Optimum parameters were also determined for all the data points across all the sample events resulting in a strong association (r² = 0.81 for 3DVI and r² = 0.73 for 3DPI) but also increased RMSE (>30%). The determination of combined optimum parameters for the samplings before and after anthesis revealed that prior to anthesis both indices performed similarly (r² > 0.85 and RMSE<26%), but after anthesis 3DPI provided superior results (r² = 0.68 vs. r² = 0.54). Aggregating the data from all sample events pre and including anthesis and post-anthesis and applying the corresponding relationships [Equations (4-7)] to each of the sampling dates provided improved results, with slightly better results for 3DPI (r² = 0.93, RMSE = 19.82%) when compared with 3DVI (r² = 0.92, RMSE = 21.28%; Figure 12). These results are in line with previous studies which use relationships between height and biomass on wheat with r² = 0.88 (Eitel et al., 2014) and rice with r² = 0.9 (Tilly et al., 2014) or that combine height and vegetation indices with r² = 0.84 (Bendig et al., 2015) and r² = 0.85 (Tilly et al., 2015). The results provide evidence that the proposed method can operate over a broader range of dry biomass (up to 20 t/ha in this study vs. ∼5t/ha Eitel et al., 2014;Tilly et al., 2015). Despite the considerable capacity of both 3DVI and, in particular, 3DPI, to quantify relative differences in above-ground biomass at full canopy closure, information about the phenological stage (pre and including anthesis or post anthesis) was still required to select the equations with the greatest prediction power.
The main limitation of both indices was their weak correlation with biomass at early growth stages. This problem is similar to the determination of GC using height. In both cases, a fraction of the canopy closest to the ground is ignored to avoid including rough terrain or furrows, created at sowing, thus contributing to an underestimation of above-ground biomass and GC at early growth stages. We feel that this limitation is minor, particularly as more meaningful physiological information from biomass comes from measurements taken from stem elongation to anthesis (e.g., Shearman et al., 2005). Destructive measurements taken at these stages are subject to many sources of error (e.g., the small subsection of the plot and the repeated handling of the samples during drying and weighing contribute to error) and cannot be taken at regular intervals; the LIDAR approaches overcome these issues, thereby providing greater confidence in the estimates obtained. It is possible however that an algorithm that separated plants from soil, by 3D reconstruction of the terrain for example, would potentially provide better estimations of above-ground biomass and GC at early growth stages to complement the promising results obtained here.
It is important to highlight that the LiDAR estimates of biomass are mainly driven by changes in the bio-volume of the canopy, as measured by the LiDAR sensor and then represented in the 3D point cloud. Despite the strong association of the proposed LiDAR indices with above-ground biomass, this method cannot explicitly account for changes in biomass resulting from remobilisation from the vegetative to the reproductive organs during grain-filling or even changes in tissue density. For example, a low final biomass resulting from biotic or abiotic stress during grain-filling may not be picked up by these indices, unless changes in the volume of the heads (i.e., an indirect measurement of grain size and grain number) become evident and detectable in the bio-volume estimates or the 3D profile. In addition, applying these methodologies in very dense canopies, where the upper layers intercept most of the points and prevent the laser beam from penetrating into the lower parts of the canopy, will lead to an underestimation of biomass and a poorer description of the canopy architecture from the LiDAR point cloud. These limitations may be overcome by using alternative sensor technologies that measure the water content or the canopy bulk density, which would help monitor the status of grain-filling or provide estimates of the density of plant organs. The potential synergies between LiDAR and hyperspectral imaging (Geipel et al., 2014; Bendig et al., 2015; Tilly et al., 2015) or microwave sensing could enable the development of new multi-sensor indices which would measure changes in canopy density as it develops or provide insight into the occluded layers in the canopy, providing more robust estimates of above-ground biomass.
CONCLUSIONS
We have demonstrated the capacity for non-destructive and accurate high-throughput measurement of canopy height, ground cover and above-ground biomass in the field using LiDAR. The Phenomobile Lite, presented herein, was designed for simple operation and cost-effective use on large field experiments. The main sensor is the LiDAR, but the Phenomobile Lite can accommodate additional instruments including a GreenSeeker, for NDVI, and an RGB digital camera; other sensors can be added in future. A custom web interface was developed for data processing by the non-technical user, enabling rapid and simple data extraction.
The deployment of the Phenomobile Lite within genetics, physiology, and agronomy studies, or plant breeding programs will enable the non-destructive measurement of canopy height, GC, and above-ground biomass on a larger scale than typically undertaken, by overcoming the resource-intensive nature of manually measuring these traits. Further, sampling will encompass the entire area of the plot, reducing sampling error and increasing precision compared with previous approaches that measure only portions of the canopy. The non-destructive nature of the measurements will allow crop growth to be monitored through time and the development of new dynamic traits from time series analysis that may provide a deeper understanding of phenotypic and genotypic variation for complex traits associated with growth and development.
AUTHOR CONTRIBUTIONS
JJ-B and DD: Designed and commissioned Phenomobile Lite; JJ-B: Designed the LiDAR processing algorithms with input from RF and XS; PR-L: Implemented the algorithms and processing pipeline with input from JJ-B; AC: Did the experimental design for the field experiments; JJ-B, WB, and GR: Did the statistical analysis; JJ-B and DD: Wrote the manuscript with contributions from GR, RF, XS, WB, and RJ.
Early implant‐associated osteomyelitis results in a peri‐implanted bacterial reservoir
Implant-associated osteomyelitis (IAO) is a common complication in orthopedic surgery. The aim of this study was to elucidate how deep IAO can go into the peri-implanted bone tissue within a week. The study was performed in a porcine model of IAO. A small steel implant and either 10^4 CFU/kg body weight of Staphylococcus aureus or saline was inserted into the right tibial bone of 12 pigs. The animals were consecutively killed on days 2, 4 and 6 following implantation. Bone tissue around the implant was histologically evaluated. Identification of S. aureus was performed immunohistochemically on tissue sections and with scanning electron microscopy and peptide nucleic acid in situ hybridization on implants. The distance of the peri-implanted pathological bone area (PIBA), measured perpendicular to the implant, was significantly larger in infected animals compared to controls (p = 0.0014). The largest differences were seen after 4 and 6 days of inoculation, where PIBA measurements of up to 6 mm were observed. Positive S. aureus bacteria were identified on implants and from 25 μm to 6 mm into PIBA. This is important knowledge for optimizing outcomes of surgical debridement in osteomyelitis.
Implant-associated osteomyelitis (IAO) is among the most severe orthopedic conditions (1). The absolute number of IAO cases is increasing, owing to the growing number of patients with bone implants (2). In the United States of America, the infection rate is 5-15% in fracture fixation devices and 0.3-5% in joint prostheses (3,4). Treatment of IAO can include surgical debridement, removal of implants and long-lasting antimicrobial therapy, and calls for a multidisciplinary approach (1). Nevertheless, treatment failure is common. Despite removal of the infected device and extensive debridement, there is a high risk of re-infection and prolonged use of postoperative antibiotics (5). In two different retrospective studies, treatment failure rates of IAO have been estimated at 41.8 and 58.2 percent, respectively (6,7). An explanation for the infections and re-infections of IAO has been suggested to be bacterial survival in the peri-implanted bone tissue (8).
Insertion of orthopedic implants is an equipment-intensive process that involves drilling and often the use of bone cement. A by-product of these procedures is the generation of heat resulting in osteonecrosis (9). The necrotic osteocytes lose their inhibitory effects on osteoclasts, leading to increased osteoclast activity and thereby bone resorption (10). Aside from resorption of dead bone, the process of osseointegration of an implant is especially dependent on the ingrowth of osteoblasts and mesenchymal stem cells (11). Along with osteonecrosis, bone resorption and osseointegration, a foreign body response also occurs around the implant (12). All these cellular changes make the peri-implanted bone tissue a perfect locus resistentiae minoris for bacterial infection (13).
The aim of this study was to answer the following question: how deep does a bacterial infection penetrate into the peri-implanted bone tissue, in a case of IAO, within a week? The question was addressed in a porcine model of IAO, in which evaluation of the entire infected bone is possible after a fixed period of time, in contrast to surgical biopsies from humans. Furthermore, it is highly relevant to use pigs for modeling of infectious diseases in humans, like IAO, as the porcine immune system shows a higher degree of similarity to the human immune system than that of rodents and rabbits (14).
Study design
The study is a descriptive study based mainly on microscopic observations in bone tissue and on orthopedic implants, obtained from a porcine model of IAO. The model is based on tibial insertion of a small steel implant combined with inoculation of Staphylococcus aureus bacteria or sterile saline. Twelve pigs of 30 kg body weight, obtained from a specific pathogen-free (SPF) herd, were divided into three groups based on their time of killing.
Experimental surgery and inoculum
Animals were anesthetized (15) and a tibial implant was inserted (K-wire, 2 × 20 mm) 1 cm below the growth plate of the right tibia. The procedure was recently described (16). The bacterial inoculum or saline was injected around the implant, before closure of the periost, subcutis and skin (16). The inoculating S. aureus strain was a pathogenic porcine strain, previously used in porcine models of osteomyelitis (17). The strain was prepared as described by Johansen et al. (18) and diluted with 0.9% sterile isotonic saline to obtain an inoculation dose of 10^4 colony forming units (CFU)/kg BW in a final volume of 10 µL.
Postoperative care of pigs
The pigs were monitored daily throughout the experiment by skilled personnel. A body temperature above 41°C, impaired ability to stand, and anorexia were set as humane endpoints. The pigs received intramuscular injections (0.1 mg/kg BW) of buprenorphine (Temgesic 0.3 mg/mL, Schering-Plough, Heist-op-den-Berg, Belgium) every 6-8 h. The pigs did not receive local or systemic antibiotic treatment, which is applied to human patients under therapy, as this would have hampered the focus of the study, that is, the ability of the inflammation and infection to spread within the bone.
Pathology
Following killing after 2, 4 and 6 days, the implant was removed from the implant cavity using a sterile lancet and collected for scanning electron microscopy (SEM) and peptide nucleic acid fluorescence in situ hybridization (PNA FISH). From all pigs, the right tibia was dissected free and decalcified. However, in group C animals, the right tibial bone was sagittally sectioned through the implant cavity before decalcification. Following decalcification, the proximal end of the right tibial bone, containing the implant cavity, was cut into five sagittal pieces of 3-4 mm each. Afterward, the bone pieces were processed routinely and embedded in paraffin wax. Sections (4-5 µm) were stained with hematoxylin and eosin (HE) and special-stained with phosphotungstic acid hematoxylin (PTAH) and Masson's trichrome for demonstration of fibrin and collagen, respectively. Bone tissue with pathological changes around the implant cavity was defined as the peri-implanted pathological bone area (PIBA). The largest size of PIBA was measured perpendicularly to the implant cavity on the most representative section. On the same section, the scoring system developed by Pandey et al. (19) was used to define whether an infection had occurred. Pandey found that the presence of 2+ or more (more than one neutrophil granulocyte per high power field (400×) on average after examination of at least 10 high power fields) in periprosthetic tissue distinguishes between septic and aseptic loosening of prostheses and bone implants in humans. The scores were as follows: 0 = absent; 1+ = less than 1 cell on average per high power field; 2+ = 1-5 cells on average per high power field; 3+ = >5 cells on average per high power field.
Microbiology
Cotton swabs were taken from the implant cavity after removal of the implant. Swabs were processed and characterized as previously described (20), and selected bacterial isolates were Spa-typed (21).
Immunohistochemistry of bone tissue
Tissue sections of 4 µm were prepared and processed for indirect in situ identification of S. aureus with immunohistochemistry.
Primary S. aureus-specific antibodies (ab37644; Abcam, Cambridge, UK, diluted 1:1000 in 5% swine serum) were used (18). The size of the largest observed S. aureus-positive aggregates and their distance to the implant cavity were estimated on each section, by measuring the length directly on the immunohistochemistry (IHC) sections.
Scanning electron microscopy and peptide nucleic acid fluorescence in situ hybridization of implants
Selected implants were examined with scanning electron microscopy (SEM) and peptide nucleic acid fluorescence in situ hybridization (PNA FISH) (Table 1). The implants were placed in 2.5% glutaraldehyde or formalin for SEM and PNA FISH, respectively. The samples for SEM were rinsed three times in 0.15 M sodium phosphate buffer (pH 7.4); specimens were postfixed in 1% OsO4 in 0.12 M sodium cacodylate buffer (pH 7.4) for 2 h. Following a rinse in distilled water, the specimens were dehydrated to 100% ethanol and critical point dried (Balzers CPD 030 instrument, Leica Microsystems, Wetzlar, Germany) using CO2. The specimens were subsequently mounted on stubs, using colloidal coal as an adhesive, and sputter coated with gold (Polaron SEM E5000 coating unit). Specimens were examined with a Philips FEG30 scanning electron microscope operated at an accelerating voltage of 2 kV. The samples for PNA FISH were carefully rinsed in sterile saline for 5 min. Samples were placed in a dish (depth 1 mm, diameter 10 mm) and covered with 100 µL of an S. aureus-specific PNA FISH probe (AdvanDx, Woburn, Massachusetts, USA). Samples were incubated at 55°C for 90 min. Subsequently, dishes were submersed for 30 min in 55°C warm wash buffer (4 mL of 60× wash buffer (AdvanDx) in 240 mL of milliQ water). Afterward, the samples were air-dried in the dark and then covered with 3 mM DAPI solution (Life Technologies, Carlsbad, California, USA) for 15 min at room temperature. Excess DAPI was removed by gently rinsing with PBS (the Substrate Department at the Panum Institute, Denmark). All observations were performed using a Zeiss LSM 710 confocal laser scanning microscope (Oberkochen, Germany).
Statistics
An unpaired t-test was used to analyze PIBA measurements between control and infected animals (GraphPad Software Inc., version 7, La Jolla, California, USA).
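As a minimal illustration of this analysis (not the authors' code, and using hypothetical placeholder values rather than the measured PIBA data), the comparison could be reproduced in Python as follows:

import numpy as np
from scipy import stats

# Hypothetical PIBA measurements in mm (placeholders only, not the study's data):
piba_control = np.array([0.4, 0.6, 0.9, 1.1, 0.8, 0.7])    # n = 6 control pigs
piba_infected = np.array([1.2, 4.8, 5.6, 6.0, 5.1, 6.2])   # n = 6 infected pigs

# Unpaired (two-sample) t-test between control and infected animals.
t_stat, p_value = stats.ttest_ind(piba_infected, piba_control)
print("t = %.2f, p = %.4f" % (t_stat, p_value))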
Clinical observations
All animals were healthy when entering the study, but became lame on the operated leg after surgery and up until killing. The degree of lameness was low, as the animals were able to use the leg and walk around freely. One of the infected Group B animals had intermittent elevated body temperature. All animals ate and drank normally before and during the experiment.
Macroscopic pathology
Thick purulent material was seen within the implant cavity in the infected animals following 4 and 6 days. In all control pigs, and pigs infected for 2 days, serohemorrhagic fluid was present. Following sagittal section of the inoculated tibial bone from the two infected Group C animals, signs of osteomyelitis were seen around the implants as purulent and sequestered trabecular bone tissue (Fig. 1).
Histopathology
The neutrophil granulocyte score counted inside PIBA and the size of PIBA are presented in Table 1 and Fig. 2A, respectively. The size of PIBA showed no marked difference between infected and control animals 2 days after surgery, with a mean difference of 0.7 mm (Fig. 2A). However, in Groups B and C, the mean differences between infected and control animals were 4.3 mm and 4.4 mm, respectively (Fig. 2A). The p-value of the difference in PIBA size between control (n = 6) and infected animals (n = 6) was 0.0014 (Fig. 2B). The largest registered PIBA value for day 4 (Group B) and day 6 (Group C) was 6.0 mm and 6.21 mm, respectively (Fig. 2A).
The following description of pathomorphological changes relates to the trabecular tissue only. In Group A, PIBA was a mix of erythrocytes, bone marrow leukocytes, necrotic trabecular tissue (with empty lacunae), neutrophils and fibrin exudation (Fig. 3A). However, more neutrophils and more extensive fibrin exudation were seen in the infected animals. At 4 and 6 days after inoculation (Groups B and C), a cellular layer of elongated fibroblasts, neutrophils, macrophages and giant cells was seen toward the implant cavity (Fig. 3B). This layer was surrounded by osteonecrotic trabecular bone intermingled with the same cell types. The most pronounced pathomorphological findings within PIBA of infected Group B and C animals were massive extension of the cellular layer and active osteoclasts seen in resorption lacunae of the necrotic bone trabeculae (Fig. 3C,D). In the periphery of PIBA, fibroblasts and associated new collagen could be seen, although this was more pronounced in infected animals compared to controls. Generally, almost no bone tissue was present within PIBA of infected Group C animals.
Localization of bacteria in bone tissue
Bacteria were not identified by IHC in cortical bone tissue of any animal. Immunopositive S. aureus bacteria were present in all infected pigs of Groups A, B and C, within both the exudate of the implant cavity and PIBA (Table 1 and Fig. 4A). Additionally, S. aureus bacteria were also detected within PIBA in one control animal of Group B (Table 1 and Fig. 4B). IHC-positive S. aureus bacteria were not identified in the control animals of Groups A and C. On the tissue section representing the center of PIBA, the size of the largest S. aureus-positive aggregate ranged between 25 and 84 µm, and aggregates were located from 25 to 2000 µm within PIBA regardless of the time of inoculation. Moreover, bacteria were found on two consecutive sections from distant bone pieces (2-3 mm each), not covering the implant cavity, in one infected animal of Group A and one of Group B. All bacterial aggregates inside PIBA were seen within the cellular zone or adjacent to osteonecrosis.
Microbiology
All swab and Spa-typing results are presented in Table 1.
Implants SEM and PNA FISH
By SEM, coccoid bacteria were observed attached to the surface of all implants from both control and infected animals in the form of biofilm. The areas of biofilm were confined to isolated islands on the implant surface. In general, the bacteria were found embedded in a biomass of extracellular matrix, leukocytes and erythrocytes (Fig. 4C).
Attached leukocytes appeared to be more dominant on implants of infected animals. By PNA FISH, S. aureus could be identified on one implant (Table 1 and Fig. 4D).
DISCUSSION
This study shows that pathomorphological changes and infecting bacteria of IAO can extend up to 6 mm into the surrounding bone tissue within a week in a porcine model (Fig. 2). The surrounding bone tissue was described as PIBA. Within PIBA, S. aureus bacteria could be identified both adjacent to and distant from the implant. The present observations of peri-implanted bacteria are supported by a study of subcutaneous implant-associated infections (IAI) in a mouse model, showing that bacteria (Staphylococcus epidermidis) could be found within the tissue at a certain distance from the implant (22,23). In orthopedic infections, bacteria are usually discussed in the context of biofilm formation directly on the implant (24). However, case reports of IAO have also confirmed that peri-implanted bone tissue may contain pathogenic bacteria (25,26). Recently, using the same method as in this study (27), the size of bacterial colonies within peri-implanted bone tissue from patients with osteomyelitis was estimated to range between 5 and 50 µm. In general, the maximal size of an in vivo biofilm has been estimated to be 200 µm (27). All measurements of S. aureus colonies within peri-implanted bone tissue of the present porcine model were below 84 µm. Therefore, the size of the observed bacterial colonies can be considered comparable to human cases. The Infectious Diseases Society of America (IDSA) has recently proposed five criteria for the diagnosis of periprosthetic infections (28). The infected pigs from this study fulfill criteria 2 (pus around the implant), 3 (histopathological evidence of inflammation) and 5 (positive intraoperative cultures). Additionally, the control animals also fulfilled criteria 3 and 5. However, Spa-typing results indicated that the control animals contaminated and infected themselves. The Spa types observed among isolates from control pigs are typically seen in commensal porcine S. aureus strains (29) and differed from the Spa type used for inoculation (17). Self-contamination of the control animals thus highlights the hypothesis of the peri-implanted area as a locus resistentiae minoris; that is, a minimal number of bacteria can colonize an implant and the surrounding tissue. As S. aureus is the most common pathogen associated with IAO, and an independent risk factor of treatment failure (6), this bacterium was used. The selected porcine S. aureus strain was chosen due to its ability to induce bone infection and inflammation (17,18). The dose of 10^4 CFU/kg BW was estimated to be among the lowest infective doses, based on a former dose-response study using the same porcine S. aureus strain (18).
All animals scored 2+ or 3+ in number of neutrophils. The scoring system for the number of neutrophils was originally used on tissue section from cases of chronic low-grade IAO in humans (19). Specific neutrophil number cutoffs for high-grade IAO, to which infected animals of this study may belong, has not been established (30).
Although the number of infected and control animals killed at each time point is limited, grouping the pigs as either infected (n = 6) or control (n = 6) animals allowed a statistical comparison of PIBA size (Fig. 2B). As significance was observed between these two groups, the group size was acceptable, that is, including more pigs would have been a waste of animals. It was not possible to perform statistical calculations between infected and control animals at each time point. However, the values from each time point can give an estimate of the variance and effect size for power/sample size calculations. This might be relevant for further experiments aiming to study effects of surgical or medical treatments in the porcine model. The histopathological examinations of all animals clearly demonstrated trabecular osteonecrosis around the implant cavity due to drilling of the bone prior to insertion of the implant. Additionally, the injection of S. aureus bacteria resulted in noticeable pathomorphological differences between infected and control animals, visualized by the size of PIBA (Fig. 2A). The most obvious changes within infected animals were bacterial aggregates and accumulations of inflammatory cells, giant cells, active osteoclasts and fibroblasts. Thus, the present injection of bacteria during insertion of the bone implants resulted in biofilm formation on the implants and a peri-implanted bacterial reservoir directing bone pathomorphology. A peri-implanted bacterial reservoir might hamper a fast diagnosis and therapeutic debridement in cases of IAO. Furthermore, when located in the peri-implanted bone tissue, S. aureus bacteria can form biofilm (24), form small colony variants (31) and be internalized by osteoblasts (31), three situations which favor persistence of the bacteria and infection.
The present porcine model of IAO showed that bacterial aggregation occurred on the surface of implants and within the surrounding bone tissue within 6 days of infection. Thus, it is important to note that peri-implanted bone tissue can serve as a reservoir for infecting bacteria shortly after implantation. This is important knowledge for optimizing outcomes of surgical debridement in IAO. In clinical situations of acute IAO, that is, infections lasting for less than 3-4 weeks, simple lavage has been recommended (28). However, as this study shows, this approach might be problematic if, already within one week, the bacteria can be located half a centimeter into the surrounding bone tissue.
New perspectives on the Erlang-A queue
Abstract The nonstationary Erlang-A queue is a fundamental queueing model that is used to describe the dynamic behavior of large-scale multiserver service systems that may experience customer abandonments, such as call centers, hospitals, and urban mobility systems. In this paper we develop novel approximations to all of its transient and steady state moments, the moment generating function, and the cumulant generating function. We also provide precise bounds for the difference of our approximations and the true model. More importantly, we show that our approximations have explicit stochastic representations as shifted Poisson random variables. Moreover, we are also able to show that our approximations and bounds also hold for nonstationary Erlang-B and Erlang-C queueing models under certain stability conditions.
Introduction
Markov processes are important modeling tools that help researchers describe real-world phenomena. Thus, it comes as no surprise that the Erlang-A model, which is a Markovian multi-server queueing model that incorporates customer abandonments, is an important modeling tool in a multitude of application settings. Some of the more prominent applications include telecommunications, healthcare, urban mobility and transportation, and more recently cloud computing. See for example the work by Mandelbaum et al. [12], Massey [13], Yom-Tov and Mandelbaum [26], Pender and Phung-Duc [24]. Despite its importance in many different applications, the Erlang-A queueing model has remained very difficult to analyze and understand. Even the analysis of the moments of the Erlang-A queue beyond the fourth moment has remained an important topic for additional study.
It is well known that the stationary setting of the Erlang-A is much easier to analyze than its non-stationary counterpart. Common approaches used to analyze non-stationary and state-dependent queueing models include asymptotic methods such as heavy traffic limit theory and strong approximations theory; see for example Halfin and Whitt [5], Mandelbaum et al. [11]. Uniform acceleration is extremely useful for approximating the transition probabilities and moments, such as the mean and variance, of Markov processes. Moreover, the strong approximation methods are useful for analyzing the sample path behavior of the Markov process by showing that the sample paths of properly rescaled queueing processes converge to deterministic dynamical systems and Gaussian process limits.
However, there are two main drawbacks of these asymptotic methods. The first is that the method is asymptotic as a function of the model parameters, and the results really only hold when the rates are large, nearly infinite. Thus, the quality of the approximations depends significantly on the size of the model parameters, and these asymptotic methods have been shown to be quite inaccurate for moderately sized model parameter settings; see for example Massey and Pender [14,15]. The second main drawback is that the asymptotic methods do not generate any important insights for the moments or cumulant moments beyond order two, since the limits are based on Brownian motion. Since Brownian motion has symmetry, its cumulants are all zero beyond the second order. Thus, Brownian approximations are limited in their power to capture asymmetries in higher moments or even the dynamics of the moment generating function, cumulant generating function, or Fourier transform. Moreover, it has been shown recently by Pender [19], Engblom and Pender [2] that the Erlang-A and its variants have non-trivial amounts of skewness and excess kurtosis, which implies that the Erlang-A is not nearly Gaussian for moderately sized queues. These results also demonstrate that it is important to capture the behavior of the Erlang-A model beyond its second moment, as this information can be used in staffing decisions, Massey and Pender [16].
One common approximation method that is used in the stochastic networks, queueing, and chemical reactions literature is a moment closure approximation. Moment closure approximations are used to approximate the moments of the queueing process with a surrogate distribution. It is often the case that the set of moment equations for a large number of queueing models is not closed; see for example Matis and Feldman [17], Pender [20]. Thus, the closure approximation helps approximate the moments with a closed system using the surrogate distribution. One such method, used by Pender [21], Pender and Ko [22], is to use Hermite polynomials for approximating the distribution of the queue length process. In fact, they show that using a quadratic polynomial works quite well. Since the Hermite polynomials are orthogonal with respect to the Gaussian distribution, which has support on the entire real line, these Hermite polynomial approximations do not take into account the discreteness of the queueing process and the fact that the queueing process is non-negative. However, they show that Hermite polynomials are natural to analyze since they are orthogonal with respect to the Gaussian distribution and the heavy traffic limits of multi-server queues are Gaussian.
In this paper, we perform an in-depth analysis of the moments and the moment generating function of the non-stationary Erlang-A queue. As the Erlang-B and Erlang-C queueing models are special cases of the Erlang-A model, we are able to obtain similar results for those models. Our approach is to use convexity and exploit Jensen's inequality and the FKG inequality to obtain bounds on the moments and moment generating function of the Erlang-A queue. What we find even more exciting is that we are able to provide a stochastic representation of our approximations and bounds as Poisson random variables with a constant shift. This shifted Poisson was observed in peer-to-peer networks by Ferragut and Paganini [3]; however, as we will show in the sequel, this novel representation will allow us to view our bounds and approximations in a new way.
Main Contributions of the Paper
The main contributions of this work can be summarized as follows: • We provide new approximations for the moments, moment generating function, and cumulant generating function for the nonstationary Erlang-A queue, exploiting the FKG and Jensen's inequalities.
• We derive a novel stochastic interpretation and representation of our approximations as shifted Poisson random variables or M/M/∞ queues, depending on the context. This sheds new light on the complexity of queues in heavy traffic or critically loaded regimes.
• We prove precise error bounds for our approximations and we also prove new upper and lower bounds for the nonstationary Erlang-A queue that become exact in certain parameter settings.
Organization of the Paper
The remainder of this paper is organized as follows. Section 2 introduces the nonstationary Erlang-A queueing model and its importance in stochastic network theory. In Section 3, we provide approximations for the moments of the Erlang-A system and use these to bound the true values. In Section 4 we derive approximations for the moment generating function and cumulant moment generating function of the Erlang-A queue. We again bound the true values by these approximations, and we also find a representation for our approximations in terms of Poisson random variables or M/M/∞ queues, depending on the context.
The Erlang-A Queueing Model
The Erlang-A queueing model is a fundamental queueing model in the stochastic processes literature. The work of Mandelbaum et al. [11] shows that the M(t)/M/c + M queueing system process Q ≡ {Q(t) | t ≥ 0} is represented by the following stochastic, time-changed integral equation:

Q(t) = Q(0) + Π_1(∫_0^t λ(s) ds) − Π_2(µ ∫_0^t (Q(s) ∧ c) ds) − Π_3(θ ∫_0^t (Q(s) − c)^+ ds),

where Π_i ≡ {Π_i(t) | t ≥ 0} for i = 1, 2, 3 are i.i.d. standard (rate 1) Poisson processes. Thus, we can write the sample path dynamics of the Erlang-A queueing process in terms of three independent unit rate Poisson processes. A deterministic time change for Π_1 transforms it into a non-homogeneous Poisson arrival process with rate λ(t) that counts the customer arrivals that occurred in the time interval [0, t). A random time change for the Poisson process Π_2 gives us a departure process that counts the number of serviced customers. We implicitly assume that the number of servers is c ∈ Z^+ and that each server works at rate µ. Finally, the random time change of Π_3 gives us a counting process for the number of customers that abandon service. We also assume that the abandonment distribution is exponential and that the rate of abandonment is equal to θ.
One of the main reasons that the Erlang-A queueing model has been studied so extensively is that several important queueing models are special cases of it. One special case is the infinite server queue. The infinite server queue can be derived from the Erlang-A queue in two ways. The first way is to set the number of servers to infinity. This precludes any abandonments since the abandonment rate θ · (Q(t) − c)^+ is always equal to zero when the number of servers is infinite. The second way to derive the infinite server queue is to set the service rate µ equal to the abandonment rate θ. When µ = θ, the sum of the service and abandonment departure rates is a linear function of the queue length, i.e. µ · (Q(t) ∧ c) + θ · (Q(t) − c)^+ = µ · Q(t) = θ · Q(t). Thus, the Erlang-A queueing model becomes an infinite server queue.
One of the main and important insights of Halfin and Whitt [5] is that for multi-server queueing systems it is natural to scale up the arrival rate and the number of servers simultaneously. This scaling, known as the Halfin-Whitt scaling, has been an important modeling technique for call centers in the queueing literature. Since the M(t)/M/c + M queueing process is a special case of a single node Markovian service network, we can also construct an associated, uniformly accelerated queueing process where both the new arrival rate η · λ(t) and the new number of servers η · c are scaled by the same factor η > 0. Thus, using the Halfin-Whitt scaling for the Erlang-A model, we arrive at the following sample path representation for the scaled queue length process:

Q^η(t) = Q^η(0) + Π_1(η ∫_0^t λ(s) ds) − Π_2(µ ∫_0^t (Q^η(s) ∧ ηc) ds) − Π_3(θ ∫_0^t (Q^η(s) − ηc)^+ ds).

The Halfin-Whitt scaling is defined by simultaneously scaling up the rate of customer demand (the arrival rate) with the number of servers. In the context of call centers this means scaling up the number of customers together with the number of agents answering the phones. In the context of hospitals or healthcare this might mean scaling up the number of patients with the number of beds or nurses. Taking the limit η → ∞ gives us the fluid model of Mandelbaum et al. [11], i.e.

lim_{η→∞} Q^η(t)/η = q(t)  almost surely,

where the deterministic process q(t), the fluid mean, is governed by the one-dimensional ordinary differential equation (ODE)

q̇(t) = λ(t) − µ · (q(t) ∧ c) − θ · (q(t) − c)^+.

Moreover, if one takes a diffusion limit, i.e. rescales the centered process as √η · (Q^η(t)/η − q(t)) and lets η → ∞, one obtains a diffusion process whose variance is given by a one-dimensional ODE driven by the fluid limit q(t).
Mean Field Approximation is Identical to the Fluid Limit
In addition to using strong approximations to analyze the queue length process, one can also use the functional Kolmogorov forward equations as outlined in Massey and Pender [15]. For all appropriate functions f, the functional forward equation for the Erlang-A model is

d/dt E[f(Q(t))] = λ(t) · E[f(Q(t) + 1) − f(Q(t))] + µ · E[(Q(t) ∧ c) · (f(Q(t) − 1) − f(Q(t)))] + θ · E[(Q(t) − c)^+ · (f(Q(t) − 1) − f(Q(t)))].

For the special case where f(x) = x, we can derive an ODE for the mean queue length process:

d/dt E[Q(t)] = λ(t) − µ · E[Q(t) ∧ c] − θ · E[(Q(t) − c)^+].    (2.7)

The first thing to note is that this equation is not autonomous and one needs to know the distribution of Q(t) a priori in order to compute the expectations on the right-hand side of Equation 2.7. To know the distribution a priori is impossible except in some special cases like the infinite server setting. However, it is easy to derive simple approximations for the mean queue length by making some assumptions on the queue length process. This is known as a closure approximation, and one common closure approximation method is to simply move the expectation from outside the function to inside the function, so that the expectation E[f(X)] becomes f(E[X]). This method is known as a mean field approximation in physics and is also known as the deterministic mean approximation of Massey and Pender [15]. By applying the mean field approximation to Equation 2.7, we can show that the resulting differential equation is given by the following autonomous ODE:

d/dt E[Q_f(t)] = λ(t) − µ · (E[Q_f(t)] ∧ c) − θ · (E[Q_f(t)] − c)^+.

By careful inspection, one can observe that the ODE given by the mean field approximation is identical to the fluid limit of Equation 2.2. Moreover, if one simulates the queueing process and compares it to the mean field limit, one notices an ordering property. For example, on the left of Figure 1, we simulate the Erlang-A queue and compare it to the fluid model. We observe that when θ < µ, the simulated mean is larger than the fluid mean. This is precisely what our results predict. Moreover, on the right of Figure 1, we simulate the Erlang-A queue and compare it to the fluid model when θ > µ and observe that the simulated queue length is smaller than the fluid limit. Our goal in this work is to explain the behavior that we observe in Figure 1, which we will do in the following section; a small simulation sketch illustrating this comparison is given at the end of this subsection. Before concluding our overview of the Erlang-A queueing model, we make a brief remark for notational clarity. Remark 2.1. Throughout the remainder of this work, we use Q(t) to represent the true queueing process and Q_f(t) to represent the fluid approximation of it. This fluid approximation is a stochastic process that will be fully described in this work. In fact, in Section 4 we characterize the fluid approximations and use insight from these representations to bound the true queue length from above and below.
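As a minimal sketch (not the authors' code), the comparison shown in Figure 1 can be reproduced by simulating the Erlang-A queue with a thinning scheme for the nonstationary arrivals and integrating the mean field/fluid ODE; the parameter values below mirror the numerical experiments described later in the paper (λ(t) = 10 + 2 sin(t), µ = 1, c = 10, θ = 0.5):

import numpy as np

mu, theta, c, T = 1.0, 0.5, 10, 20.0
lam = lambda t: 10.0 + 2.0 * np.sin(t)
lam_max = 12.0  # upper bound on lam(t) over [0, T], used for thinning

def simulate_path(rng, grid):
    t, q, i = 0.0, 0, 0
    out = np.zeros(len(grid))
    while t < T:
        # Total event rate is bounded by this constant until the next accepted jump.
        rate_bound = lam_max + mu * min(q, c) + theta * max(q - c, 0)
        t += rng.exponential(1.0 / rate_bound)
        while i < len(grid) and grid[i] <= t:
            out[i] = q
            i += 1
        u = rng.uniform() * rate_bound
        if u < lam(t):
            q += 1                          # arrival
        elif u < lam(t) + mu * min(q, c) + theta * max(q - c, 0):
            q -= 1                          # service completion or abandonment
        # otherwise: fictitious (thinned) event, state unchanged
    while i < len(grid):
        out[i] = q
        i += 1
    return out

grid = np.linspace(0.0, T, 400)
rng = np.random.default_rng(1)
mean_sim = np.mean([simulate_path(rng, grid) for _ in range(2000)], axis=0)

# Euler integration of the mean-field / fluid ODE from the same empty initial state.
q_fluid = np.zeros(len(grid))
dt = grid[1] - grid[0]
for k in range(1, len(grid)):
    q = q_fluid[k - 1]
    q_fluid[k] = q + dt * (lam(grid[k - 1]) - mu * min(q, c) - theta * max(q - c, 0))

# With theta < mu, the simulated mean should lie above the fluid mean (up to Monte Carlo error).
print(mean_sim[-1], q_fluid[-1])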
Inequalities for the Moments of the Erlang-A Queue
In this section, we prove when the true moments of the Erlang-A queue are either dominated or dominates their corresponding fluid limit. We find that the relationship between the ser-vice rate and the abandonment rate determines whether or not the moment is dominated by the fluid limit. This section is organized as follows. In Subsection 3.1, we derive inequalities for the true mean of the Erlang-A and its fluid approximation. In Subsection 3.2 we extend these inequalities to analogous results for the m th moment of the queueing system. Finally, in Subsection 3.3 we provide figures from numerical experiments that demonstrate these findings.
Inequalities for the Mean
We begin with analysis of the mean of the Erlang-A queue. Before we proceed, we first establish a lemma for comparisons of ordinary differential equations that will be fundamental to our approach to the results.
has a unique solution for the time interval [0, T], and the corresponding ordering of the solutions holds on [0, T]. Proof. The proof of this result is given in Hale and Lunel [4].
With this lemma in hand, we can now derive relationships for the fluid limit and the true mean. As seen in the proof, these results follow from the application of this differential equation comparison lemma and the convexity seen in the fluid approximation.
Theorem 3.2. If Q(0) = Q_f(0), then the true mean dominates the fluid limit when θ < µ, the fluid limit dominates the true mean when θ > µ, and the two means are equal when θ = µ.
Proof. Recall that the true mean satisfies the differential equation

d/dt E[Q(t)] = λ(t) − µ · E[Q(t) ∧ c] − θ · E[(Q(t) − c)^+],

and the fluid limit satisfies

d/dt E[Q_f(t)] = λ(t) − µ · (E[Q_f(t)] ∧ c) − θ · (E[Q_f(t)] − c)^+.

We can simplify both equations by observing that (X ∧ c) + (X − c)^+ = X for any random variable X. Thus, we have the following two equations for the true mean and the fluid limit:

d/dt E[Q(t)] = λ(t) − θ · E[Q(t)] + (θ − µ) · E[Q(t) ∧ c],
d/dt E[Q_f(t)] = λ(t) − θ · E[Q_f(t)] + (θ − µ) · (E[Q_f(t)] ∧ c).

If we take the difference of the two equations, we obtain

d/dt (E[Q(t)] − E[Q_f(t)]) = −θ · (E[Q(t)] − E[Q_f(t)]) + (θ − µ) · (E[Q(t) ∧ c] − (E[Q_f(t)] ∧ c)).

Now since the minimum function (Q ∧ c) is concave, Jensen's inequality gives E[Q ∧ c] ≤ E[Q] ∧ c for any random variable Q. Thus, by the comparison lemma, we have that E[Q(t)] ≥ E[Q_f(t)] for θ < µ (and the reverse inequality for θ > µ), since both differential equations are initialized with the same value and the origin is an equilibrium point for the difference. This completes the proof.
As discussed in Section 2, the Erlang-A model is quite versatile in its relation to other queueing systems of practical interest. In the two following corollaries, we find that Theorem 3.2 can be applied to the Erlang-B and Erlang-C models.
Proof. This is obvious after noticing that the Erlang-B queue is a limit of the Erlang-A queue by letting θ → ∞.
Proof. This is obvious after noticing that the Erlang-C queue is an Erlang-A queue with θ = 0. Since µ is assumed to be positive, then we fall into the case where θ < µ and this completes the proof.
Remark 3.5. Given that we use Jensen's inequality and the FKG inequality later on in the paper, we find it important to differentiate them. Here we give an example that sets the two apart. If we consider the function Q^n, then Jensen's inequality implies that E[Q^n] ≥ (E[Q])^n, whereas a single application of the FKG inequality only yields E[Q^n] ≥ E[Q] · E[Q^{n−1}]. We find it interesting that by iterating the FKG inequality n − 2 more times, it yields Jensen's inequality for the moments of random variables.
Inequalities for the m th Moment
In this subsection we will now extend the previous findings for the mean to higher moments of the queueing system. Like the result for the mean, this is again built through observation of the convexity in the differential equation of the fluid approximation.
Proof. We will use proof by induction. For the base case we can apply Theorem 3.2. Now, suppose that the statement holds for j ∈ {1, 2, . . . , m − 1}. Recall that the m th moment satisfies and the approximate autonomous version satisfies Now by taking the difference, we have that Because the minimum is a concave function, we have that for any X and Y with real means . Thus, we have that for θ > µ, f (t) = 0 since both differential equations are initialized with the same value, the origin is an equilibrium point for the difference, and all the lower-power terms in the differential equations follow this structure, which we know from the inductive hypothesis. Therefore we see this holds for m, which completes the proof.
Again as we have seen for the mean, we can exploit the versatility of the Erlang-A queue to extend these insights to the Erlang-B and Erlang-C models as well.
for all t ≥ 0 and m ∈ Z + . Proof. This is obvious after noticing that the Erlang-B queue is a limit of the Erlang-A queue by letting θ → ∞.
Proof. This is obvious after noticing that the Erlang-C queue is an Erlang-A queue with θ = 0. Since µ is assumed to be positive, then we fall into the case where θ < µ and this completes the proof.
Numerical Results
In this section we describe numerical results for approximating the moments of the Erlang-A queue and examine them relative to our findings. In Figures 2 and 3, we show the first four moments of the Erlang-A queue and their respective fluid approximations for the cases θ < µ and θ > µ, respectively. In these plots, we take the arrival rate at time t ≥ 0 to be λ(t) = 10 + 2 sin(t). We initialize the queue as empty, and we assume that the queueing system has c = 10 servers each with exponential service rate µ = 1. We test two different cases for the abandonment rate: θ = 0.5 and θ = 2. In these settings, we observe that when θ < µ the fluid approximations are below their corresponding simulated stochastic values and that when θ > µ the fluid values are greater than the simulations, and this matches the statements of Theorems 3.2 and 3.6.
We observe the same relationships in
Inequalities and Characterizations for Generating Functions of the Erlang-A Queue
Building on what we have found for the moments of the Erlang-A, we can provide similar inequalities for the moment generating function and the cumulant generating function again through convexity in the differential equations for the fluid approximations. We provide these inequalities in Subsections 4.1 and 4.2, respectively. In doing so, we find forms for the fluid approximations that we can interpret in terms of expectations of other random quantities. Through these recognitions, we characterize the fluid approximations. We describe these representations for systems in steady-state in Subsection 4.3 and for nonstationary systems in Subsection 4.4. We conclude this section with a variety of demonstrations of these results through empirical experiments in Subsection 4.5.
An Inequality for the Moment Generating Function of the Erlang-A Queue
Using the functional forward equations of Massey and Pender [15], we can show that the moment generating function for the Erlang-A queue satisfies the partial differential equation (4.14). Just like the non-autonomous differential equation for the mean in Equation 2.7, we cannot directly compute the moment generating function since we do not know the distribution of the queue length a priori. This is also true for numerical purposes. Unless we can compute the expectation that includes the minimum function, it is impossible to know the moment generating function, except in special cases such as the infinite server queue and some cases of the Erlang-B queue. Thus, it is useful to obtain approximations that are explicit upper or lower bounds for the moment generating function. By using Jensen's inequality for concave functions, we can approximate the moment generating function with an analogous partial differential equation for the fluid approximation. The following theorem determines exactly when E[e^{α·Q_f(t)}] is a lower or upper bound for the exact moment generating function of the Erlang-A queue.
Proof. If we take the difference of the two partial differential equations, we obtain the following Now by exploiting the positive scalability property and the concavity of the minimum function, we have by Jensen's inequality that Thus, we have when θ < µ that and finally when θ = µ, since they solve the same partial differential equation. This completes our proof.
As with the moments, we can observe these relationships occurring in numerical experiments. We provide figures demonstrating this in Subsection 4.5.
An Inequality for the Cumulant Moment Generating Function of the Erlang-A Queue
As a consequence of the findings for the moment generating function, we can also provide similar inequalities for the cumulant moment generating function, which we obtain from Equation 4.11. As for the MGF, we note that we cannot compute the cumulant moment generating function directly without knowing the distribution of the queue length. So, by again applying Jensen's inequality, we can describe the fluid approximation as follows.
Using this observation and our approach in finding the inequalities for the moment generating function, we find the equivalent inequalities for the cumulant moment generating function in the following corollary.
Proof. The proof follows from the same argument that was given in Theorem 4.1 and the fact that the log function is strictly increasing.
Characterization of the Moment Generating Function in Steady-State
From what we have observed for the moment generating function, we can derive an exact representation for the fluid approximation of the moment generating function in steady-state. We assume a stationary arrival rate λ > 0. We will investigate the stationary fluid approximation differential equations in a casewise manner based on the relationship of λ and the system's service parameters. To do so, we begin with a lemma bounding the fluid approximation of the mean.
Proof. We will prove this by contradiction. For the first part, we assume that E[Q f (∞)] ≥ c. Now by using the differential equation for the mean in steady state, we have that Since we assumed that E[Q f (∞)] ≥ c, then this yields the following inequality λ ≥ cµ, which yields a contradiction. For the second case, where we assume that λ ≥ cµ and E[Q f (∞)] < c, then by the same differential equation we have that which yields another contradiction.
We now begin characterizing the fluid approximations with our first case, λ ≥ cµ, in the following proposition.
which yields a solution in closed form. Proof. To find the partial differential equation, we use the functional cumulant bound for any non-decreasing function h(·) (which can be seen as a form of the FKG inequality), where here we have used the identity e^x = (e^x − 1)/(1 − e^{−x}), which can be observed by multiplying each side of the equation by 1 − e^{−x}. Because the MGF is equal to 1 when α = 0, we also have that G_f(0) = 0. Using this initial condition and integrating the left- and right-hand sides of Equation 4.28 with respect to α, we find the stated expression for G_f(α), and since M_f(∞, α) = e^{G_f(α)}, we attain the stated result.
We can now observe that the fluid approximation is equivalent in distribution to a Poisson random variable shifted by γ ≡ c(θ−µ)/θ, as the moment generating function for the Poisson distribution is e^{β(e^α − 1)}, where β is the rate parameter and α is the space parameter of the MGF. This gives rise to the following.
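To make the shifted-Poisson reading explicit (this is only a restatement of Proposition 4.4 and the observation above, with Γ and γ as defined in the theorem that follows), the steady-state fluid MGF factorizes as
\[
M_f(\infty,\alpha) = \mathbb{E}\big[e^{\alpha(\Gamma+\gamma)}\big] = e^{\alpha\gamma}\,\exp\!\Big(\tfrac{\lambda}{\theta}\,(e^{\alpha}-1)\Big), \qquad \Gamma \sim \mathrm{Pois}(\lambda/\theta), \quad \gamma = \tfrac{c(\theta-\mu)}{\theta}.
\]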
Proof. From Proposition 4.4, we have that the fluid approximation of the MGF in steady-state is that of Γ + γ, where Γ ∼ Pois(λ/θ) and γ = c(θ−µ)/θ. From the uniqueness of MGFs, we have that E[(Q_f(∞))^m] = E[(Γ + γ)^m] for all m ∈ Z^+. Now, recall that for an M/M/∞ queue with arrival rate λ and service rate θ, the stationary distribution is that of a Poisson random variable with rate parameter λ/θ. So, we can think of Γ as representing the steady-state distribution of an infinite server queue with Poisson arrival rate λ and exponential service rate θ.
Suppose now that θ > µ. Then, by Theorem 3.6 and our preceding observation, we have that E[(Q(∞))^m] ≤ E[(Γ + γ)^m]. Additionally, by comparing the steady-state infinite server queue representation of Γ to Q(∞), we can further observe that E[(Q(∞))^m] ≥ E[Γ^m], as for any state j the service rate in Q(∞) is no more than the service rate in the same state in the Γ queueing system. Thus we have that E[Γ^m] ≤ E[(Q(∞))^m] ≤ E[(Γ + γ)^m] for all m ∈ Z^+ whenever θ > µ. By symmetric arguments, we can also find that if µ > θ then E[(Γ + γ)^m] ≤ E[(Q(∞))^m] ≤ E[Γ^m] for all m ∈ Z^+, as in this case γ = c(θ−µ)/θ < 0.
Remark 4.6. Note that in Theorem 3.6, we require that Q(0) = Q f (0) but in this case we have not assumed such a condition. This is because the inequalities in Theorem 3.6 hold for all time, and we simply need the relationship to hold in steady-state, which can be seen to occur regardless of initial conditions. By knowing the fluid form of moment generating function explicitly as a Poisson distribution, we can also provide exact expressions for the fluid moments and the fluid cumulant moments. These are given in the two following corollaries.
Corollary 4.7. If λ ≥ cµ, then in steady-state the first n moments of the fluid approximation have the expressions E[(Q_f(∞))^m] = E[(Γ + γ)^m] = Σ_{k=0}^{m} (m choose k) γ^{m−k} P_k(λ/θ) for m = 1, …, n, where P_m(λ/θ) is the m-th Touchard polynomial with parameter λ/θ.
Proof. This can be seen by direct use of the Poisson form of the fluid MGF. Let Γ ∼ Pois(λ/θ) and let γ = c(θ−µ)/θ. Then, expanding (Γ + γ)^m with the binomial theorem and using E[Γ^k] = P_k(λ/θ) yields the stated expression. Corollary 4.8. If λ ≥ cµ, then in steady-state the cumulant moments of Q_f(∞) are those of the shifted Poisson random variable Γ + γ: the shift γ enters only the first cumulant, and all higher cumulants equal λ/θ. We now consider the second case, which is λ < cµe^{−α}. Note that this now also requires a relationship involving the space parameter of the moment generating function, α. This is less general than the first case, but it allows us to derive Lemma 4.9.
Proof. To begin, suppose that α). Using this information in conjunction with the steady-state form of the partial differential equation for the fluid MGF given in Equation 4.16, we have that Using our assumption, we see that and this yields that λ < cµe −α , which shows one direction.
We now move to showing the opposite direction and instead assume that α). In this case, Equation 4.16 is given by and this simplifies to Again by use of this case's assumption, we have and this now yields thus completing the proof.
We can now use this lemma to find an explicit form for the fluid approximation of the steady-state moment generating function when λ < cµe^{−α}, for α ∈ R.
Proof. By Lemma 4.9 and our assumption that λ < cµe −α , we know that α). Thus, by observing this in the steady-state MGF equation, we easily obtain the result in Equation 4.32. Moreover, the solution to Equation 4.32 can be easily seen by inserting our proposed solution in and noting that it satisfies our differential equation. Moreover, the solution is unique by the properties of linear ordinary differential equation theory.
Remark 4.11. We now pause to note that the λ ≥ cµe^{−α} case of Lemma 4.9 implies Proposition 4.4 (and its following consequences) with a weaker assumption. However, because the condition λ ≥ cµ does not depend on the choice of α it is more general, and thus we leave those results as stated with that assumption instead of λ ≥ cµe^{−α}.
Here we observe that Equation 4.33 is equivalent to the moment generating function of a Poisson random variable with parameter λ/µ. Now, by recalling again that the steady-state distribution of an M/M/∞ queue is a Poisson distribution with parameter equal to the arrival rate over the service rate, we find the following inequalities.
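Written out (a restatement of the observation above, with no additional content), the steady-state fluid MGF in this regime is the unshifted Poisson MGF with rate λ/µ:
\[
M_f(\infty,\alpha) = \exp\!\Big(\tfrac{\lambda}{\mu}\,(e^{\alpha}-1)\Big),
\]
i.e., the MGF of Γ_µ ∼ Pois(λ/µ), which is the stationary distribution of an M/M/∞ queue with arrival rate λ and service rate µ.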
Theorem 4.12. Let λ < cµ and m ∈ Z^+. Then, if θ > µ, E[Γ_θ^m] ≤ E[(Q(∞))^m] ≤ E[Γ_µ^m], and if θ < µ, E[Γ_µ^m] ≤ E[(Q(∞))^m] ≤ E[Γ_θ^m], where Γ_x ∼ Pois(λ/x) for x > 0. Proof. In each case, the inequality involving Γ_µ ∼ Pois(λ/µ) follows directly from Proposition 4.10 and Theorem 3.6 via the observation that the fluid form of the moment generating function is equivalent in distribution to that of Γ_µ. Here we are using Proposition 4.10 with α = 0, and by continuity we know this holds for some ball around 0. This validates the use of the derivatives of the steady-state MGF with respect to α evaluated at α = 0 in finding the moments for the fluid approximation. Thus, we are left to prove the inequalities for Γ_θ ∼ Pois(λ/θ).
To do so, let us first note that the stationary distribution of an M/M/∞ queue with arrival rate λ and service rate θ is equivalent to that of Γ_θ. Suppose now that θ > µ. Then, any state of such an M/M/∞ queue has a larger rate of departure than the same state in the Erlang-A system. Thus, we have that E[Γ_θ^m] ≤ E[(Q(∞))^m] for all m ∈ Z^+. By symmetric arguments in the θ < µ case, we complete the proof.
As we did for the case when λ ≥ cµ, we can use these findings to give explicit expressions for the fluid approximations of the moments and the cumulant moments.
Corollary 4.13. If λ < cµ, then in steady-state we have that E[(Q_f(∞))^m] = P_m(λ/µ) for m ∈ Z^+ and C^(n)[Q_f(∞)] = λ/µ for n ∈ Z^+, where C^(n)[Q_f(∞)] is defined as the n-th cumulant moment of Q_f(∞) and P_m(λ/µ) is the m-th Touchard polynomial with parameter λ/µ.
Characterization of the Nonstationary Moment Generating Function
Many scenarios that feature customer abandonments may also feature an arrival process that is nonstationary. To incorporate this, we now introduce a point process that can be used to approximate any periodic mean arrival pattern, as discussed in Eick et al. [1]. Specifically, we define λ(t) by a Fourier series: let λ_0 and {(a_k, b_k), k ∈ Z^+} be such that λ(t) = λ_0 + Σ_{k=1}^∞ (a_k sin(kt) + b_k cos(kt)). (4.39) We now take λ(t) as the rate of arrivals at time t in the Erlang-A model. Under this setting, we derive the following expression for the cumulant moment generating function of the fluid approximation and its corresponding partial differential equation whenever the arrival rate is sufficiently large. We do so through a series of technical lemmas. First, we bound the fluid mean when the arrival rate and initial value are sufficiently large.
Lemma 4.14. Suppose that λ ≡ inf_{t≥0} λ(t) > cµ and that E[Q_f(0)] > c. Then E[Q_f(t)] > c for all t ≥ 0. Proof. We have seen that E[Q_f(t)] evolves according to the fluid ODE at all times t. Now, suppose that t* > 0 is a time such that E[Q_f(t*)] = c + ε for some ε > 0. Then, if ε < (λ − cµ)/θ, we have that the derivative of E[Q_f(t)] at t* is strictly positive. By the continuity of the fluid mean and the fact that E[Q_f(0)] = q(0) > c, we see that the fluid mean remains above c for all time. With this in hand, we now also provide the moment generating function for an M/M/∞ queue with nonstationary arrival rate λ(t), which we will use for comparison later in this section.
Proof. To start, we write down the time derivative of the MGF, where λ(t) is as defined previously: λ(t) = λ_0 + Σ_{k=1}^∞ (a_k sin(kt) + b_k cos(kt)).
This differential equation can be view as a partial differential equation when expressed as where M (α, t) is the moment generating function at time t and space parameter α. To simplify our effort, we instead consider the differential equation for the cumulant MGF, which is G(α, t) = log(M (α, t)). This PDE is with the initial condition that G(α, 0) = log E e αQ∞(0) = log (e αq 0 ) = αq 0 .
We can first see that the ODE's for α and t solve to α(r, s) = log(e c 1 (r)+µs + 1) −→ α(r, s) = log ((e r − 1)e µs + 1) t(r, s) = s + c 2 (r) −→ t(r, s) = s and so we can now use these to solve the remaining ODE. After substituting we have dg ds (r, s) = λ(s)(e r − 1)e µs which gives a solution of So, using s = t and r = log (e −µt (e α − 1) + 1), we have that and therefore by solving for M (α, t) = e G(α,t) we attain the stated result.
Now that we have established these lemmas we proceed with the analysis of the nonstationary Erlang-A. In the next theorem we give explicit forms for the fluid form of the cumulant MGF and its corresponding partial differential equation.
Theorem 4.16. If inf_{t≤∞} λ(t) ≡ λ > cµ and q(0) > c, then for all t ≥ 0 the fluid cumulant generating function satisfies the partial differential equation given in Equation 4.40 below, which admits an explicit solution for all t ≥ 0 and all α ∈ R.
Proof. From Equation 4.24, we have that the PDE for the fluid approximation's cumulant moment generating function is . Using the FKG inequality and our observation from Lemma 4.14 that E [Q f (t)] > c, we have that and so ∂G f (t,α) ∂α ∧ c = c. Thus, we have the PDE given in Equation 4.40 and so now we seek to find it's solution. We approach this via the method of characteristics. Because G f (0, α) = log(E e αQ f (0) ) = αq(0), we see that we seek to solve the following system Introducing characteristic variables r and s, we have the characteristic ODE's as with initial conditions α(r, 0) = r, t(r, 0) = t, and g(r, 0) = rq(0). Then, we can solve the first two ODE's to see that α(r, s) = log((e r − 1)e θs + 1) t(r, s) = s and so we can use these to solve the remaining equation. Substituting in, we have the ODE as dg ds (r, s) = λ(s)e θs (e r − 1) + c(θ − µ) e θs (e r − 1) e θs (e r − 1) + 1 and this now solves to g(r, s) = (e r − 1) λ 0 θ (e θs − 1) + ∞ k=1 (a k θ + b k k) sin(ks)e θs + (b k θ − a k k)(cos(ks)e θs − 1) Now, we can rearrange our solutions to find s = t and r = log((e α − 1)e −θt + 1). Then, we have that 1)e −θt + 1) + log((e α − 1)e −θt + 1)q(0) and this simplifies to the stated result.
As in our investigation of the steady-state scenario, we can now observe that the fluid approximation is equivalent in distribution to an infinite server queue shifted by γ ≡ c(θ−µ)/θ. This gives rise to the following. Theorem 4.17. For the Erlang-A queue with nonstationary arrival rate λ(t) such that λ ≡ inf_{t≥0} λ(t) > cµ and initial value q(0) > c, the fluid approximation of the MGF is equivalent to that of a shifted M/M/∞ queue with arrival rate λ(t), service rate θ, initial value q(0) − c(θ−µ)/θ, and linear shift c(θ−µ)/θ.
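In symbols (a restatement of Theorem 4.17, writing γ = c(θ−µ)/θ and letting M_∞(t, α) denote the MGF from Lemma 4.15 of an M/M/∞ queue with arrival rate λ(t), service rate θ and initial value q(0) − γ), the characterization reads
\[
M_f(t,\alpha) \;=\; e^{\alpha\gamma}\, M_{\infty}(t,\alpha), \qquad t \ge 0,\ \alpha \in \mathbb{R}.
\]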
Proof. Observe from Theorem 4.16 that the fluid MGF for the Erlang-A under these conditions is of the form given there, with the Fourier coefficients entering through terms of the form (a_kθ + b_k k) sin(kt) + (b_kθ − a_k k)(cos(kt) − e^{−θt}), which is a form that we can recognize. Comparing it to Lemma 4.15, we can see that Q_f is of the form of a shifted M/M/∞ queue with arrival rate λ(t), service rate θ, initial value q(0) − c(θ−µ)/θ, and linear shift c(θ−µ)/θ, thus enforcing that the fluid model does start at q(0). This representation of the fluid approximation allows us to now provide upper and lower bounds for the moments of the Erlang-A system. Corollary 4.18. Let Q(t) represent the Erlang-A queue with nonstationary arrival rate λ(t) such that λ ≡ inf_{t≥0} λ(t) > cµ and initial value q(0) > c, and let Q_f(t) represent the corresponding fluid approximation. Then, if θ > µ, the corresponding moment bounds hold for all time t > 0 and all m ∈ Z^+, where γ = c(θ−µ)/θ.
Proof. In each case, the bound involving the fluid approximation of the moment is a direct consequence of Theorem 3.6 and so only the other two bounds remain to be shown. We now note that since we have characterized the fluid approximation as a shifted M/M/∞ queue, the remaining bounds are from the unshifted version of this system and, by following the same arguments as in Theorems 4.5 and 4.12 regarding the rates of departure in the corresponding states of the Erlang-A queue and the M/M/∞ queue, this completes the proof.
Numerical Results
In this subsection we describe various numerical experiments demonstrating these findings. We first have Figures 6, 7, 8, and 9, which compare simulated values of the moment generating function to their fluid approximations. In the first two figures, the arrival intensity is λ(t) = 5 + sin(t), the service rate is µ = 1, and the number of servers is c = 5. The abandonment rates are the differing component of these plots, with θ = 0.5 and θ = 2 as the two respective values. These same comparisons are made in the latter two figures; however, in this case the arrival rate is instead λ(t) = 10 + 2 sin(t) and the number of servers is c = 10.
Through these plots one can observe that the true MGF dominates the fluid approximation when θ < µ and that the fluid dominates the stochastic value when θ > µ. This is of course stated with the understanding that for small values of α or for times near 0 the values of the MGF and the approximation are quite close, and so with numerical error the surfaces may overlap. In Figure 10 we plot the limiting distribution for the steady-state Erlang-A. For these plots we take λ = 20 and µ = 1, and then vary θ and c. For the three plots on the left we take the abandonment rate to be θ = 0.5 and for those on the right we set θ = 2. For the top two plots we set the number of servers as c = 15, in the middle two c = 20, and in the bottom two we make c = 25. We observe that the approximate distribution is quite close when λ is not near cµ, but the approximation is less accurate when λ = cµ. This finding is consistent with much of the literature that focuses on finding novel approximations for queueing networks and optimal control of these networks; see for example Hampshire and Massey [6], Hampshire et al. [7,8], Pender and Ko [22], Niyirora and Pender [18], Qin and Pender [25]. We note here that these approximations are not all of the same form: recall that when λ ≥ cµ the fluid approximation is equivalent in distribution to a shifted Poisson random variable with parameter λ/θ, but when λ < cµ it is equivalent to a Poisson distribution with parameter λ/µ. In Figure 11 we examine the limiting distributions for the single server case. In these plots we set µ = 1 and then vary the arrival rate and the abandonment rate. On all plots on the left we set θ = 0.5 and on the right θ = 2. Further, in the top pair we make λ = 0.8, in the middle we let λ = 1, and in the bottom pair λ = 1.2. As in Figure 10, Figure 11 shows that our approximations are quite good. Thus, we are able to capture single server dynamics as well as large-scale multi-server dynamics even though they are quite different. This is even more useful as our approximations are non-asymptotic and do not rely on scaling the number of servers. In Figures 12, 13, and 14, we take the arrival rate as λ(t) = 6.5 + sin(t), the service rate as µ = 1, and the number of servers as c = 5. Because inf_{t≥0} λ(t) > cµ, we use the characterization of the fluid approximation as a shifted M/M/∞ queue and compare the simulated system, the fluid approximation, and the unshifted M/M/∞. In the first figure we consider the mean for θ = 1.1 and θ = 0.9 and find that while the fluid approximation is quite close, the unshifted system is not near the Erlang-A system, even for these relatively similar rates of service and abandonment. We find the same for the latter two figures, in which we plot the moment generating function for θ = 1.1 and θ = 0.9, respectively.
Conclusion
In this paper we have investigated the Erlang-A queueing system through comparison to the fluid approximations of its moments and moment generating function as well as of its cumulants and cumulant moment generating function. Through recognizing the convexity in the differential equations describing these approximations, we have found fundamental relationships between the values of these quantities and their fluid counterparts: when the rate of abandonment is less than the rate of service the true value dominates the approximation, when the service rate is larger the approximation dominates the true value, and when the rates of abandonment and service are equal, the two are equivalent.
In forming these inequalities, we have found explicit representations of the fluid approximations through equivalences in distribution with Poisson random variables and infinite server queues, in the stationary and non-stationary cases, respectively. These characterizations both give insight into the approximations themselves and yield natural inequalities that complement those from the approximations. We have demonstrated the performance of these bounds through simulations. Through consideration of both these findings and the empirical experiments, we can identify interesting directions of future work.
For example, it would be of great interest to gain more explicit insights into the gap between the fluid approximations and the true values. This is a non-trivial endeavor, which stems from the non-differentiability and non-closure of the differential equations for the true expectations. The numerical experiments in this work indicate that the fluid approximations may often be quite close but not exact, and additional understanding would be useful in practice. Moreover, extending our results to more complicated queueing systems in which the arrival and service processes follow phase type distributions is of interest given the new work of Pender and Ko [22], Ko and Pender [9,10].
Additionally, it would be even more useful to gain a better understanding of the limiting distribution of the Erlang-A queue. As we discuss in the paper, the empirical experiments in Subsection 4.5 indicate that the true limiting distributions closely resemble the shifted Poisson distributions that we have found as characterizations of our fluid approximations. In particular, the approximations seem quite close when λ is not near cµ. As a simple extension of this work, it can be observed that some combination of the approximation when λ < cµ and of the approximation when λ > cµ could make a good choice of approximation for the distribution when λ = cµ. In some sense, it is not surprising that these approximations are similar to the true limiting distribution, as the Erlang-A appears to be an M/M/∞ queue with service rate µ (the approximation when λ < cµ) when only considering the states up to c, and it also resembles some sort of shifted M/M/∞ queue with service rate θ (which also describes the approximation when λ ≥ cµ) for states c + 1 and beyond. Finally, it would be interesting to extend this work to networks of Erlang-A queues as in Pender and Massey [23]; however, we would have to track the routing probabilities carefully in order to preserve the convexity/concavity of the rate functions.
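As a rough illustration of the closed-form approximations discussed here, the sketch below builds the two candidate limiting distributions. The Poisson(λ/µ) form for λ < cµ is as stated above, whereas the shift c(1 − µ/θ) used in the λ ≥ cµ case is my own inference from the fluid fixed point c + (λ − cµ)/θ and should be treated as an assumption rather than the paper's exact statement.

```python
import numpy as np
from scipy.stats import poisson

def approx_pmf(lam, mu, theta, c, n_max=60):
    """Approximate steady-state pmf of the Erlang-A, per the two regimes above."""
    ks = np.arange(n_max)
    if lam < c * mu:
        return ks, poisson.pmf(ks, lam / mu)
    shift = c * (1.0 - mu / theta)          # assumed shift (see note in the lead-in)
    support = shift + ks                    # mass sits on the lattice {shift, shift+1, ...}
    return support, poisson.pmf(ks, lam / theta)

support, pmf = approx_pmf(lam=20.0, mu=1.0, theta=2.0, c=15)
print("approximate mean:", (support * pmf).sum())   # matches the fluid fixed point 17.5
```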
|
2018-10-14T12:38:14.029Z
|
2017-12-22T00:00:00.000
|
{
"year": 2019,
"sha1": "5ac3f99075d4cdbcf7aed22eeb5d061191d58a09",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1712.08445",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "02ce7e1f17e2003c8c916050a9ceecf893fe29c3",
"s2fieldsofstudy": [
"Mathematics",
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
57013143
|
pes2o/s2orc
|
v3-fos-license
|
Swall-E: A robotic in-vitro simulation of human swallowing
Swallowing is a complex physiological function that can be studied through medical imagery techniques such as videofluoroscopy (VFS), dynamic magnetic resonance imagery (MRI) and fiberoptic endoscopic evaluation of swallowing (FEES). VFS is the gold standard although it exposes the subjects to radiations. In-vitro modeling of human swallowing has been conducted with limited results so far. Some experiments were reported on robotic reproduction of oral and esophageal phases of swallowing, but high fidelity reproduction of pharyngeal phase of swallowing has not been reported yet. To that end, we designed and developed a robotic simulator of the pharyngeal phase of human swallowing named Swall-E. 17 actuators integrated in the robot enable the mimicking of important physiological mechanisms occurring during the pharyngeal swallowing, such as the vocal fold closure, laryngeal elevation or epiglottis tilt. Moreover, the associated computer interface allows a control of the actuation of these mechanisms at a spatio-temporal accuracy of 0.025 mm and 20 ms. In this study preliminary experiments of normal pharyngeal swallowing simulated on Swall-E are presented. These experiments show that a 10 ml thick bolus can be swallowed by the robot in less than 1 s without any aspiration of bolus material into the synthetic anatomical laryngo-tracheal conduit.
Swallowing
Swallowing is a fundamental physiological function whereby food and liquids are transported in a synchronized and sequential manner from the oral cavity to the esophagus, passing through the pharynx, also known as the aerodigestive crossroads. Swallowing can be divided into (I) the preparatory phase, (II) the oral phase, (III) the pharyngeal phase and (IV) the esophageal phase [1]. Phases II, III and IV are schematically represented on Fig 1 for better comprehension. A bolus, i.e. foods and/or liquids mixed with saliva, is masticated and shaped in the oral cavity during the preparatory phase. The bolus is then propelled by the tongue from the oral cavity to the pharynx during the oral phase. During the pharyngeal phase, the bolus transits the pharyngeal cavity and is transported towards the esophagus. Bolus normally flows only from the oropharynx to the hypopharynx, while the lower airways, i.e. the larynx, the trachea and the lungs, are protected during the pharyngeal phase by a number of protective reflex mechanisms [1], mainly the laryngeal elevation and the sequential closure of laryngeal anatomical structures: vocal folds, ventricular folds, aryepiglottic folds and epiglottic fold [2]. Once the swallowed bolus safely reaches the esophagus, by passing through the open upper esophageal sphincter (UES), the esophageal phase of swallowing commences, whereby the bolus is conveyed downwards (down to the stomach) thanks to the peristaltic motion of the esophagus. While normal (healthy) swallowing ensures a safe transport of foods and liquids by preventing them from entering the airways during the pharyngeal phase, in case of disrupted (pathological) swallowing, which is called dysphagia, part or all of the bolus may accidentally enter the trachea, e.g. due to disrupted timings of events (desynchronizations or delays of mechanisms) and/or impaired/insufficient protective mechanisms.
Robotic simulation of swallowing
Robotic systems for partially simulating different phases of swallowing were previously developed. Woda et al [3] developed a mastication simulator to observe food bolus formation during mastication. Other in-vitro devices were developed by Doyennette et al [4,5]. The swallowing robot from Noh et al [10] includes an artificial head composed of mandible, tongue, pharynx, larynx, epiglottis and trachea, which is able to mimic swallowing motions based on VFS data. The main drawback of this system is its absence of actuation of the epiglottis and the pharynx, making it more suitable to study the oral phase of swallowing than the pharyngeal phase.
Regarding the pharyngeal phase, specifically, Stading and Qazi are currently developing a mechanical in-vitro apparatus named the 'Gothenburg Throat' [11] which aims at investigating the rheology of bolus during the pharyngeal phase. This apparatus consists of a duct assembly of simplified rigid geometries representing the tongue, pharynx, larynx, trachea, epiglottis and esophagus. Tested boli are injected by a motorized syringe inside the duct. Furthermore, this apparatus is equipped with ultrasonic velocimetry and pressure sensors positioned at relevant locations of the duct to allow precise rheological characterization of the tested bolus flow in a controlled way. The main limitation of such a device is the rigidity of its anatomical structures which cannot deform.
Aim of this study
Our aim was to develop an in-vitro mechatronic system that can realistically simulate the pharyngeal phase of human swallowing, in terms of both physiology and anatomy. This system should therefore be able to swallow an injected bolus from the pharyngeal inlet down to the upper esophagus inlet (i.e. UES) in timings similar to those of a real pharyngeal swallowing, i.e. in a time interval of less than 1 s [1]. Moreover, due to the importance of the interaction between swallowing and respiration with regards to the swallowing efficiency [2], which is still far from being clearly understood [12], and the lack of such an important physiological function in other existing swallowing robots, our system is designed to simulate a respiratory airflow. To that end, we created a robotic apparatus, named Swall-E, based on a realistic model of a human pharyngo-laryngeal tract in which the reproduced anatomical structures and mechanisms involved in the pharyngeal swallowing phase can be precisely actuated and controlled.
Swall-E robot
Design and specifications. The Swall-E swallowing robot, illustrated in Fig 2, was designed with the main objective of mimicking the anatomy and physiology of the pharyngeal phase of adult human swallowing, as faithfully as possible, in terms of dynamics, timings and dimensions. Thus, the main starting specifications were as follows: real scale model of adult human anatomical tract in optimally chosen soft material; reproduction of anatomical structures directly involved in the bolus transport process occurring in the pharyngeal phase; reproduction of critical physiological mechanisms in a precise and adjustable manner, both spatially and temporally. Such reproduced mechanisms are: bolus injection by base of tongue movement; vocal fold opening/closure; laryngeal elevation; pharyngeal contraction; epiglottic closure of larynx; UES opening/closure; respiration. The overall architecture of Swall-E is centered around a synthetic anatomical conduit, which is set in motion by an actuation system. The overall assembly is mounted on an aluminum frame of dimensions 850 × 850 mm. A power supply provides electricity to all the electromechanical components.
Anatomical conduit. The synthetic anatomical conduit (Fig 3) is derived from a computer aided design (CAD) geometry generated from a computerized tomography (CT) scan of a human aerodigestive tract (healthy male subject between 20 and 25 years of age). The following anatomical structures are included in this conduit: base of tongue; oropharynx; hypopharynx; epiglottis; larynx and upper part of trachea (30 mm); upper part of esophagus (30 mm) including the UES. In this first version, we decided not to incorporate a velopharyngeal sphincter and not to reproduce the velopharyngeal closure mechanism, as their contribution to laryngo-tracheal aspiration mechanisms occurring in the laryngeal and hypopharyngeal regions is limited. Thus, the anatomical conduit is closed at the velopharyngeal region.
Materials. For the anatomical conduit, efforts were made to select optimal soft materials with mechanical properties as similar as possible to those of the real anatomical structures and tissues. In order to simplify the manufacturing process of the conduit, we decided to use only a single soft material. The different hardnesses of the various anatomical structures were taken into account by locally varying the thickness of the material according to the local anatomical region (tongue, pharynx, larynx, esophagus). Silicone, due to its mechanical behavior suitable for this type of biomechanical application [13,14] and translucency allowing some internal visualization, was selected. The hardness of the selected silicone was determined by having several silicone samples of different hardness levels independently evaluated by seven experienced ENT surgeons. Of all the evaluated samples, the 13-Shore A hardness sample was unanimously deemed the most suitable. The manufacturing process of the silicone anatomical conduit involved silicone injection molding in a 3D-printed mold created from the aero-digestive CAD model.
Actuation system. The sequenced and synchronized motions of the reproduced mechanisms are achieved by an actuation system (of spatial accuracy 0.025 mm and temporal accuracy 20 ms) based on metal wires directly deforming the silicone conduit (Fig 4). This type of actuation system was chosen over other systems (e.g. hydraulic based or mechanical roller based systems) owing to its robustness, precision and relative technical ease of manufacturing. 17 actuators pull and push the metal wires. Each actuator is made of an electrical motor (ref. Maxon DC-MAX26S EB KL 12V) with an embedded 32-increment optical coder (ref. Maxon ENX16 EASY 32IMP) attached to a screw-nut system of length 50 mm and screw thread of 0.8 mm (hence the spatial accuracy of 0.025 mm = 0.8/32). Each nut is attached by a pressure spindle to a 1 mm-diameter galvanized steel wire (ref. 1 mm diameter STANDERS lifting cable). This type of wire was selected due to its suitable flexibility for this application. Each wire is covered by a plastic sheath of inner diameter 1.1 mm to ensure a reversible movement of the wire regardless of a tensile or compressive stress exerted on it. The sheath prevents excessive bending of the wire. Sheath proximal ends (close to the anatomical conduit) are supported by spatially adjustable metal rods screwed on the aluminum frame. Wires are guided by 3D-printed transparent pads glued on the silicone conduit. The wires and pads are positioned so as to mimic the muscular insertions of head and neck muscles involved in the pharyngeal swallowing process. Symmetrical paired muscular insertions are mimicked by curving a wire of which both ends are symmetrically attached to one actuator.
Tongue, pharynx and larynx movements. In physiological swallowing, the bolus is propelled by the base of tongue against the posterior pharyngeal wall at the beginning of the pharyngeal phase [1]. Then, the pharyngeal posterior muscles which are mainly the three pharyngeal constrictor muscles (upper, middle and lower) exert a continuous and smooth constricting movement to the bolus in order to progressively transport it towards the esophagus [1,15]. Simultaneously, the larynx elevates in the anterior and superior directions to protect the airways. On Swall-E, only the base of tongue is reproduced (i.e. not the complete tongue) as it is the most important part of tongue involved in pharyngeal swallowing and it enables increased amplitude of backward propelling movement. The propulsion movement of this synthetic base of tongue is achieved by three wires passing through pads positioned on the curved anterior side of the base of tongue. The pharyngeal contraction movement is induced by seven wires passing through pads positioned on the curved posterior side of the pharynx. Laryngeal elevation movement is achieved by two slides mounted perpendicularly in the horizontal (X) and vertical (Y) directions (Fig 5), which are rigidly attached to a rigid 3D-printed piece acting both as a cricoid cartilage and a hyoid bone. The X and Y directions correspond respectively to the physiological anterior-posterior and inferior-superior directions. The two slides are also attached to two wires and corresponding actuators. The so-called cricoid piece itself is attached to the laryngo-tracheal duct to pull it in the X and Y directions during the laryngeal elevation.
Anatomical folds and sphincters. The actuated anatomical folds and sphincters are the vocal folds, the epiglottic fold and the UES. Vocal folds are already shaped in the silicone anatomical conduit (Fig 6). The vocal fold abduction/adduction (or opening/closure) mechanism is handled by a clamp pinching the laryngo-tracheal duct (Fig 5) at the vocal fold plane (or glottic plane). The jaws of the clamp open and close the two synthetic vocal folds thanks to a horizontal pulling/pushing wire. The epiglottis is modeled by a silicone flap also already shaped in the silicone conduit (Fig 6), partially overmolded on a guiding metal stem (the tip of the synthetic epiglottis remains entirely flexible and free to move, though). While in physiological reality the epiglottic tilt movement is passive (i.e. non activated by muscles), induced by the coordination of laryngeal elevation and approximation of hyoid bone and thyroid cartilage [16], in Swall-E it is actively actuated (Fig 5) in order to have more control over it in case of pathological swallowing modeling. The epiglottic actuation is mechanically performed by connecting the metal stem to a shaft, the latter being attached to an electrical motor through two universal joints and two symmetrical transmission belts. Such a mechanism allows three degrees of freedom: one rotation in the (XY) plane, and two translations in the X and Y directions. As for the UES, its opening/closing pattern, having also critical physiological impact on the swallowing function efficiency [17,18], is created on Swall-E by having the synthetic cricoid cartilage (on its posterior surface) glued to it (Fig 5). By pulling anteriorly the synthetic cricoid cartilage during laryngeal elevation movement, the UES opens accordingly. At rest, the UES remains closed like in physiological reality [18].
Control and instrumentation
Control system. The Swall-E actuation system is piloted by an electromechanical control system. The rotational position of each motor, and thus the linear position (i.e. displacement) of each actuator, is updated by a microcontroller every 20 ms, hence the previously mentioned temporal accuracy of 20 ms. This temporal spacing is due to the limitations of the microcontroller. Programed chronological displacement sequences of the actuators are referred to thereafter as 'chronograms'. A LabVIEW (National Instruments, Inc.) graphical user interface (GUI) was developed to allow precise control and settings of all the actuators and other electromechanical components of the robot. The GUI imports text files containing preprogramed 500-point chronograms and sends them to the microcontroller, to be played by the actuators. It is therefore possible to reproduce the complex coordinated pattern of mechanisms occurring in a pharyngeal swallowing process by preparing chronogram text files beforehand. This preparation can be either done manually on the GUI (through manually editable displacement graphs) or more automatically using a programming tool that can produce output text files, e.g. MATLAB (Mathworks Inc.).
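As an illustration of how such chronograms might be prepared programmatically, the following is a hypothetical Python sketch (the authors used MATLAB). The 500-point, 20 ms grid matches the description above, but the column layout, units and displacement profiles are invented for illustration and do not represent the robot's actual file format.

```python
import numpy as np

N_POINTS = 500                              # one displacement sample per 20 ms control tick
DT_MS = 20
t = np.arange(N_POINTS) * DT_MS / 1000.0    # time axis in seconds (0 to ~10 s)

def ramp_hold_return(t, start, rise, hold, amplitude_mm):
    """Smooth rise to `amplitude_mm`, hold, then return to rest (illustrative profile)."""
    y = np.zeros_like(t)
    up = (t >= start) & (t < start + rise)
    y[up] = amplitude_mm * 0.5 * (1 - np.cos(np.pi * (t[up] - start) / rise))
    flat = (t >= start + rise) & (t < start + rise + hold)
    y[flat] = amplitude_mm
    down = t >= start + rise + hold
    y[down] = amplitude_mm * np.exp(-(t[down] - start - rise - hold) / 0.2)
    return y

# Example displacement profiles (mm) for three of the reproduced mechanisms;
# the timings below are placeholders, not the in-vivo values used in the paper.
tongue   = ramp_hold_return(t, start=0.10, rise=0.15, hold=0.10, amplitude_mm=10.0)
larynx_y = ramp_hold_return(t, start=0.20, rise=0.20, hold=0.30, amplitude_mm=15.0)
pharynx  = ramp_hold_return(t, start=0.30, rise=0.25, hold=0.15, amplitude_mm=8.0)

chronogram = np.column_stack([tongue, larynx_y, pharynx])
np.savetxt("chronogram_example.txt", chronogram, fmt="%.3f", delimiter="\t")
```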
Control of pressure. Another important feature is the dynamic control of pressure (i.e. pressure generation) inside the laryngo-tracheal duct and inside the esophageal duct. This feature is carried out by two identical pressure generators, each one consisting of a piston mounted in a plexiglas cylinder of length 15 cm and diameter 13 cm, associated with a pressure transducer (ref. NovaSensor NPC-100) which monitors the pressure at the outlet of the cylinder. The outlet pressure dynamically generated by the piston is piloted by a servo control which, according to the set pressure chronogram, dynamically adjusts the piston position. A solenoid valve (ref. Bürkert 6213 EV) is mounted downstream of the piston outlet to either completely shut off or allow the airflow generated from the piston. A vent tube is connected to the conduit in the velopharyngeal region at one end and to a solenoid valve (ref. SMC VDW 10AA) at the other end leading to ambient air, to act as a respiratory 'nose'. The capability of pressure generation in the laryngo-tracheal duct enables a controlled respiratory airflow, useful for studying, for instance, the coordination between swallowing and respiration, which is critical to the swallowing function outcome [2].
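The sketch below illustrates, in deliberately simplified form, the kind of set-point tracking loop described for the pressure generators. The PI structure, the gains and the toy cylinder model are assumptions made for illustration and do not represent the robot's actual control firmware.

```python
DT = 0.02                      # 20 ms control tick, as in the actuation system

def pressure_servo(setpoint, read_pressure, move_piston, kp=0.5, ki=0.1):
    """One-channel PI tracking loop over a list of set-point pressures."""
    integral = 0.0
    for target in setpoint:
        error = target - read_pressure()
        integral += error * DT
        move_piston(kp * error + ki * integral)   # commanded piston position increment

# Toy plant standing in for the cylinder + transducer, for demonstration only.
class ToyCylinder:
    def __init__(self):
        self.pressure = 0.0
    def read(self):
        return self.pressure
    def move(self, dx):
        self.pressure += 0.8 * dx                 # crude linear response

plant = ToyCylinder()
setpoint = [0.0] * 10 + [2.0] * 40                # step to 2 (arbitrary units)
pressure_servo(setpoint, plant.read, plant.move)
print("final pressure:", round(plant.pressure, 3))
```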
Bolus injection. The bolus injection mechanism into the anatomical conduit is carried out by a 60 ml motorized syringe. The bolus fluid material is stored inside a tank installed upstream of the motorized syringe through a T-shaped tube. Two pinch solenoid valves (ref. Fluid Concept S126) are positioned in the injection circuit to precisely control the bolus injection sequence, which occurs as follows: (1) the syringe aspirates a set volume of bolus from the bolus tank; (2) the syringe propels the set volume of bolus into the conduit at a predefined injection timing through a nozzle air-tightly connected to the conduit (Fig 5); (3) the syringe returns to its initial position. This system is able to achieve an injection of 10 ml of liquid or thick bolus in 100 ms, i.e. an injection flowrate of 0.1 l/s.

Instrumentation. In terms of instrumentation, Swall-E is equipped with various cameras for visual monitoring and video recording of the simulated swallowing events occurring inside the translucent silicone anatomical conduit. Two high-speed cameras (ref. iDS UI-3160CP-C-HQ Rev.2), allowing synchronized video recordings at 100-300 frames per second (FPS), are positioned 30 cm from the anatomical conduit on its lateral and rear sides. Two generic USB endoscopic cameras of diameter 5.5 mm can be inserted in the laryngo-tracheal duct and the esophageal duct to enable endoscopic visualization below the vocal folds and the UES. A third endoscopic 5.5 mm camera is mounted on the posterior pharyngeal wall to allow endoscopic visualization of the pharyngeal cavity.
Simulated swallowing experiments
Simulated swallowing experiments were conducted to demonstrate the fundamental capabilities of the Swall-E robot. We simulated a normal (i.e. non pathological) pharyngeal swallowing cycle based on in-vivo timings reported in normal swallowing studies. Our aim was to achieve a transit of injected bolus through the pharynx in less than 1 s, without any aspiration of bolus material into the laryngo-tracheal duct. For reference, Fig 7 shows some successive VFS images of a normal pharyngeal swallowing recorded from a subject around 50 years of age, swallowing a 3 ml thick bolus of custard-type texture. These data were retrospectively obtained from previously-collected data, and were anonymized prior to the current study. Proper formal written consent from the subject to utilize these data for education or research purposes was obtained prior to the data collection.
Bolus material
A bolus material of thick consistency was prepared prior to the experiments, by pouring and mixing one spoonful of thickening powder (ref. Nutricia Power, Nutilis) into 200 ml of still water, and adding 15 droplets of red food coloring to improve visibility of the bolus. This type of thickening alimentary powder is commonly utilized by dysphagic patients to help them swallow liquids more easily by increasing the viscosity of the swallowed liquids. We used the IDDSI (International Dysphagia Diet Standardisation Initiative [19]) protocol to characterize the texture of the bolus in a standardized way. According to this protocol, the tested bolus was a class 3 texture i.e. a moderately thick texture (0 corresponding to thin textures and 4 to purees).
Experimental protocol
A 10 ml volume of the prepared bolus material was stored by the motorized syringe to be injected in the anatomical conduit at a flow rate of 0.1 l/s. In these experiments no respiration was simulated in order to focus solely on the pharyngeal swallowing process. One single cycle of pharyngeal swallowing was simulated and simultaneously filmed by the rear and the lateral high-speed cameras at 105 FPS. After the actual swallowing cycle, all the actuators and the conduit were automatically reset to their original positions. The actuation chronograms utilized for these experiments are depicted on Fig 8. These chronograms were generated in MATLAB by retrieving and adapting in-vivo timings reported in [20] and [18] studies. Displacement amplitudes of the actuators were gradually set until a qualitatively satisfactory configuration was achieved (by comparison with normal swallowing VFS recordings).
Evaluation of experiments
In the present study, our objective was to visually demonstrate that Swall-E robot is capable of functionally mimicking a normal (healthy) pharyngeal phase of swallowing in less than 1 s. By healthy swallowing we mean that no injected bolus material penetrates into the 'wrong' pipe i.e. the laryngotracheal duct (the 'right' pipe being the esophageal duct). For this visual demonstration, the rear and lateral high-speed cameras were utilized to record the videos of the conducted experiments and the obtained videos were evaluated thereafter. The evaluation of the videos was carried out by utilizing two qualitative clinical indicators which assess the safety level of a swallowing process in dysphagic patients, detailed below.
• Visual observation of the absence or presence of aspiration in the trachea. For a normal swallowing sequence, we expect to observe a transit of injected bolus from the oropharyngeal cavity down to the esophagus without any tracheal aspiration.

• Scoring of post-swallowing pharyngeal residues (PRS). For a normal swallowing sequence, we expect residues at worst only in the valleculae (i.e. PRS = 2), as observed by Omari et al. in healthy subjects [21].

Swall-E swallowing robot video files are provided as supporting information (S1 and S2 Videos). We were able to achieve a pharyngeal transit of injected bolus from the injection nozzle outlet located upstream from the oropharyngeal cavity down to the upper esophagus in less than 1 s. No tracheal aspiration of the injected bolus was observed during the whole sequence and afterwards, for at least 2 minutes. However, some residues of bolus material could be observed after the passage of the bolus, on the pharyngeal wall, in the valleculae and seemingly in the piriform sinus, i.e. a PRS score of 6. These traces are probably due to too weak bolus propulsion forces [22] and/or the high stickiness of the prepared bolus material, as well as a certain lack of lubrication of the anatomical silicone conduit, compared to real pharyngeal walls which are normally lubricated by saliva and mucus. Despite a likely insufficient bolus propulsion, a safe swallowing process without any tracheal aspiration was consistently achieved. Other types of bolus materials and textures will be investigated in future experiments.
Discussion
The performed normal swallowing sequences demonstrate the ability of our system to translate the chronological order of biomechanical movements from clinical observations into a model where the movement of the food bolus can be physically mimicked and monitored. The availability of such a tool would be beneficial in several respects. i) The contribution to a potential reduction of animal experiments, as there are no relevant animal models for monitoring of swallowing. ii) Obtaining clinically relevant data without patient participation: the current methods of clinical testing are uncomfortable for patients with swallowing disorders. iii) In-depth studies of specific swallowing disorders without extensive testing with patients: the available chronograms can be fine-tuned and can be run many times for elucidating the mechanisms of specific swallowing disorders. One main limitation of the system is the limited information on the food bolus interaction with the internal surface of the tissue model, which we aim to overcome in the future by incorporating specific sensors. The fouling of the internal surface due to food remnants, which can have effects on subsequent tests, will be tackled by application of coatings that will enable the elimination of the food residues without affecting the swallowing sequences.
Summary
In this study, we presented a robotic device that can mimic the pharyngeal phase of physiological human swallowing. The potential outlook for such a device is a means to generate big data related to normal and pathological swallowing, which can bridge gaps in the current level of fundamental knowledge, as the only means of obtaining such information are generally invasive medical procedures. Swall-E has the potential to provide new insights for clinicians, the food industry and speech language therapists.
Perspectives
Our goals with the future iterations of Swall-E are: i) The development and validation of disease-specific chronograms and demonstration of aspiration mechanisms similar to those observed clinically, for testing potential therapeutic solutions (such as implants) or studying disease mechanisms (for example, a more precise sub-classification of swallowing disorders based not only on the causative disease, such as stroke, but also on the final aspiration mechanism). ii) Testing of modified food used by dysphagia patients and determination of the most suitable modified food for a given condition based on swallowing tests in Swall-E. iii) A supporting tool for the rheological studies of food, and particularly modified food, in a physiologically relevant model.
|
2019-01-22T22:20:06.163Z
|
2018-12-19T00:00:00.000
|
{
"year": 2018,
"sha1": "96b74a4c25d1b48fc810ecb4ea48ddfbf4483f5a",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1371/journal.pone.0208193",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "96b74a4c25d1b48fc810ecb4ea48ddfbf4483f5a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
251431761
|
pes2o/s2orc
|
v3-fos-license
|
Genesis of the Halılar Metasediment-Hosted Cu-Pb ( ± Zn) Mineralization, NW Turkey: Evidence from Mineralogy, Alteration, and Sulfur Isotope Geochemistry
This study contributes to our understanding of the evolution of Halılar Cu-Pb (±Zn) mineralization (NW Turkey) based on mineralogical and geochemical results and sulfur isotope data. The study area represents local Cu-Pb with some Zn brecciated-stockwork vein type mineralization along the NE–SW fault gouge zone at the lower boundary of the Sakarkaya and Düztarla granitoid rocks. Two main zones, consisting of sericite–quartz–chlorite ± kaolinite ± pyrite (i.e., zone-1) and calcite–epidote–albite ± chlorite ± sericite (i.e., zone-2), were observed within the central ore mineral zone at the mining site. Different mineralization assemblages were recorded; the main ore mineral assemblage contains chalcopyrite, galena, pyrite, and sphalerite within alteration zone-1, and the oxidation/supergene mineralization includes covellite and goethite. The mass balance calculations show that the samples of zone-1 show an increase in SiO2, Fe2O3, K2O, and LOI along with Ag, As, Cu, Mo, Pb, S, Sb, and Zn, reflecting high pyritization with sericitization and silicification. On the other hand, the samples from zone-2 are rich in CaO, Na2O, P2O5, TiO2, LOI, and carbon, reflecting calcite, epidote, and albite alterations. A uniform magmatic sulfur source of Halılar sulfides is suggested by their mean δ34S value of −1.62‰. Furthermore, the primary metal source is the metasediments and the intrusive Düztarla granitoid magmatism. These observations suggest that the Halılar metasediment-hosted Cu-Pb (±Zn) mineralization was formed by epigenetic hydrothermal processes after sedimentation/diagenesis and metamorphism.
Introduction
Studies of hydrothermal alteration are important in the exploration of copper deposits in order to determine the processes of ore formation, as well as to identify potential ore zones [1].Spectroscopic methods, geophysics, or multispectral remote sensing techniques are used in mapping alteration zones, as well as in identifying their mineral assemblages [1][2][3][4][5][6][7][8][9].In addition, the geochemical changes from host rock to alteration zones provide alteration type and its degree, as well as the genesis and evolution of the hydrothermal system [5,[10][11][12][13][14][15][16][17][18][19].Hydrothermal alteration processes are responsible for mineralogical and chemical changes in the rock-forming minerals as a result of interactions between the hydrothermal fluids and host rocks along fracture zones and grain boundaries [1,2,20,21].Schwartz [22] stated that the alteration generally depends on: (1) temperature, pressure, and chemical composition of the fluid; (2) the chemical and physical nature of the wall rocks; and (3) the water-rock ratio.The mechanism and types of mineral deposits are assigned by the nature of the alteration assemblages and the different hydrothermal systems.In addition, the mineral assemblages of the altered rocks are important to help identify the alteration types (e.g., phyllic alteration refers to assemblages of quartz + sericite + pyrite minerals; potassic alteration: orthoclase + biotite + sericite; propylitic alteration: epidote + chlorite + albite) [23].Gifkins et al. [24] defined different types of mineral deposits by their alteration type and mineralogy, such as porphyry Cu deposits having potassic, phyllic, argillic, and propylitic alterations, while the low-sulfidation, epithermal, geothermal, VHMS, and sediment-hosted massive sulphide deposits having sericitic (or phyllic) and propylitic (or saussuritization) alterations.
In Turkey, mineralization in the structural zone of the Anatolian tectonic belt represents part of the Tethyan-Eurasian metallogenic belt (TEMB), which formed during the Mesozoic and Early Cenozoic [25]. This mineralization was controlled by extensional events that formed after the Neo-Tethys closure. It is associated with calc-alkaline magmatic activity during the Oligocene-Miocene/Pliocene within the post-collision continent-continent environments and led to the formation of Pb-Zn, Sb, As, and Au-Cu deposits [25].
The study area (Halılar area) is located about 25-30 km northeast of Edremit in Balıkesir Province (Biga Peninsula, Turkey) (Figure 1). Halılar Cu-Pb (±Zn) mineralization occurs in a vein-type deposit that formed in the volcanogenic metasediments of the Sakarkaya Formation. It is associated with the NE-SW fault gouge zone along with the lower boundary of the Bağcağız Formation and the Düztarla granitoid intrusion.
Although geological and geochemical studies of the Halılar area have been published [26], the genesis of base-metal Cu-Pb (±Zn) mineralization in this area remains ambiguous, as it has not been studied in detail. Therefore, this study focuses on mineralization in the Halılar area by reporting new data obtained from mineralogical, petrographical, and geochemical investigations of the mineralization and altered host rock. Using mass balance calculations, enrichment and/or depletion in the chemical components of the different alteration zones associated with this mineralization was calculated on the basis of their mass/volume changes (gain and loss). Sulfur isotope data from the sulfide minerals, including pyrite, chalcopyrite, and galena, were collected to understand the sulfur source(s), as well as to determine the δ34S H2S values of the hydrothermal fluid that caused the Halılar Cu-Pb (±Zn) mineralization.
Geological Setting
The Halılar area contains two groups: the clastic Halılar Group, which is slightly metamorphosed and overlain by the pre-Late Triassic age or Permian limestone [28], and the Bilecik Group. These two groups are in contact with the intrusive rocks to the N and NW of Halılar village (Figure 2). The Halılar Group consists of two formations: the Bağcağız and Sakarkaya Formations; the Bilecik Group is represented by two formations: the Taşçıbayırı Formation and Günören Limestone (Figure 2). The granitoid rocks intruded the Sakarkaya and Bağcağız Formations of the Halılar Group in the study area (Figure 2).

The Halılar Group was classified by Krushensky, Akcay, and Karaege [28] into the Bağcağız Formation (sandstone and shale) and the Sakarkaya Formation (sandstone and conglomerate). The Bağcağız Formation (sample IDs: H63 and H64) was intruded by the Düztarla granitoid at its lower boundary (Figure 2). It has dark siltstone at its upper boundary, which is overlain by the sandstone of the Sakarkaya Formation. This formation also has sandstone and siltstone alternations from bottom to top, consisting of dark-grayish-colored siltstones and silty shales with yellowish-colored, medium-bedded sandstones from the Lower Triassic to Middle Jurassic. The Bağcağız Formation is represented by carbonaceous dark metasiltstone and rhyolitic metatuffs (Figure 3a). The rhyolitic metatuffs are fine-grained light gray to yellowish rocks (Figure 3a) microscopically consisting of microperthite and quartz crystals embedded in a finer-grained tuffaceous matrix of kaolinitized and carbonatized feldspar, quartz, and Fe-oxide (Figure 3b).
The Sakarkaya Formation (sample IDs: H05, H07, H09, H14, H15, H18, H20, H22a, H55, and H60) outcrops approximately 500 m south of Sakarkaya Hill and 1.5-2 km north and northeast of Halılar village (Figure 2). It is represented by fine-grained, yellowish-colored metasandstone (Figure 3c). It has a sharp contact with the dark metasiltstones of the Bağcağız Formation. The unit rests with a distinct contact on the Bositra-bearing dark silty shale of the Bağcağız Formation [26]. The metasandstone ranges from subarkosic to wacke in composition and consists of poorly sorted quartz, sericitized and kaolinitized feldspar, and mica grains cemented by iron oxide (Figure 3d,e). These components are embedded in altered feldspar and a silicified fine-grained matrix (Figure 3d,e). The upper portion of the formation is represented by cross-stratified beds.
The Bilecik Group is part of the Callovian-Hauterivian (Middle Jurassic-Lower Cretaceous) stratigraphy in NW Anatolia known as the Bilecik Limestone (Figure 3f-h). It has been divided into two formations: the Taşçıbayırı and Günören Limestone formations. The Taşçıbayırı Formation (sample IDs: H56, H57, and H58) underlies the Günören Limestone (sample ID: H59); they contain sandy limestone and dolomitic limestone, respectively. The sandy limestone of the Taşçıbayırı Formation is composed of calcite with feldspar, mica, and volcanic rock fragments (Figure 3g). The volcanic rock fragments are composed of broken and/or eroded volcanic rocks consisting of quartz and feldspar (Figure 3g), while the Günören dolomitic limestone consists of calcite and dolomite with Fe-oxide minerals (Figure 3h).
The Halılar area has a well-described Upper Triassic-Liassic continuous succession (Figures 1 and 2). The tectonic sedimentary rocks formed at the Sakarya divergent margin, which evolved in the Late Triassic-Aptian interval [29,30]. As a result of the diachronic closure of the Tethys basin in western Anatolia, the Upper Triassic black shales were deposited in the Lias in the Karakaya euxinic basin throughout the Edremit region. This shale and the Hettangian arkosic sandstones were later intruded by the Düztarla granodioritic-granitic body due to the southward subduction of the Paleo-Tethys [29].
Sampling and Analytical Methods
A total of 45 host rock, altered rock, and mineralized samples were collected from the study area. Thin sections and a subset of polished sections were examined optically using transmitted and reflected light microscopes. Whole-rock major, trace, and rare earth element analyses were conducted at the Geochemistry Research Laboratories of Istanbul Technical University (ITU/JAL). The samples were ground using a tungsten carbide milling device. Major elements were analyzed using a BRUKER S8 TIGER model X-ray fluorescence spectrometer (XRF) (Östliche Rheinbrückenstraße 49, 76187 Karlsruhe, Germany) with a wavelength range from 0.01-12 nm. Trace elements were analyzed by inductively coupled plasma-mass spectrometry (ICP-MS) using an ELAN DRC-e Perkin Elmer model (PerkinElmer, Waltham, MA, USA). Approximately 100 mg of powdered sample was digested in two steps. The first step was completed with 6 mL of 37% HCl, 2 mL of 65% HNO3, and 1 mL of 38%-40% HF in a pressure- and temperature-controlled Teflon beaker using a Berghoff Microwave™ at an average temperature of 180 °C. The second step was completed with the addition of 6 mL of 5% boric acid solution. The remaining solution was analyzed by ICP-MS. The altered rocks were also analyzed for mineralogy using a BRUKER X-ray diffractometer (XRD) (Östliche Rheinbrückenstraße 49, 76187 Karlsruhe, Germany). Calculation of the normative mineral abundances from the major element analyses and the rare earth element diagrams were created using Igpet 2.3 [31]. The GEOISO-Windows software of Coelho [32] was used to determine the absolute mobility of the elements using equations from Gresens [33] and isocon diagrams from Grant [34,35].
Sulfide minerals for sulfur isotope analysis were separated from slightly crushed (200 mesh) lode samples (>95% pure pyrite, chalcopyrite, and galena). They were washed and handpicked under a binocular microscope. These analyses were carried out at the Geochron Laboratory (USA) using EA-IRMS (Elemental Analysis-Isotope Ratio Mass Spectrometry) techniques. All stable isotope data are reported in the delta (δ) notation, relative to Vienna-Canyon Diablo Troilite (V-CDT) for sulfur isotopes, with 0.5‰ (1σ) analytical uncertainty.
Halılar Cu-Pb (±Zn) Mineralization
The Halılar base metal mineralization represents Cu-Pb with some Zn brecciated-stockwork-veining-type mineralization. The mineralization is restricted to a fault gouge zone directed NE-SW, as well as along the lower boundary of the Sakarkaya and Düztarla granitoid rocks (Figure 2). It is also closely associated with intense hydrothermal alteration within the breccia and quartz stockwork veining (Figure 4a-d). Based on the field investigation and petrographic and mineralogical (XRD) data, the mineralized quartz veins and brecciated ore bodies are accompanied by two types of hydrothermal alteration zone with gradational boundaries: zone-1 (sericite-quartz-chlorite ± kaolinite ± pyrite) and zone-2 (calcite-epidote-albite ± chlorite ± sericite). The ore zone is represented by mineralized and brecciated quartz stockwork veining (Figure 4a-c). It has high amounts of Cu (9.9%), Pb (11.3%), and Zn (0.29%) mineralization, with high amounts of chalcopyrite and galena with sphalerite and pyrite (Figure 4a-c). It contains quartz with a subordinate amount of wollastonite, kaolinite, andradite, and calcite (Figures 4e-h and 5 and Appendix A). These calc-silicate assemblages refer to the skarn that resulted from the metasomatism of sandy limestone in the Taşçıbayırı Formation in association with andradite (Figures 4e-h and 5 and Appendix A). The XRD data show quartz (low), wollastonite (1A, manganoan), kaolinite (1A), microcline, calcite, chalcopyrite, andradite, anglesite, and cubanite (high) with smaller amounts of pyrite, sphalerite (ferrous), galena, and quartz (high) (Figure 5 and Appendix A).
Alteration zone-1 (sericite-quartz-chlorite ± kaolinite ± pyrite) forms the main alteration zone developed outwards from the ore zone and has high amounts of sericite and quartz, with lesser amounts of chlorite, kaolinite, and pyrite (Figures 4i-l and 6 and Appendix B). It is characterized by the preferential replacement of the original K-feldspar and/or plagioclase-biotite by sericite/muscovite-kaolinite. XRD studies reveal a paragenesis of quartz (low), kaolinite (1A), clinochlore (1MIa), and sericite (2M1) with a subordinate amount of chamosite (1MIIb), pyrite, and chalcopyrite (Figure 6, Appendix B).
Ore Mineralogy
The ore mineral assemblage includes chalcopyrite, galena, pyrite, and sphalerite with covellite and goethite in abundant gangue minerals such as quartz, sericite, chlorite, and calcite, forming along the quartz stockwork veins as well as in the brecciated ore zones (Figures 4-7 and Appendices A-C). Chalcopyrite and galena are the most common sulfide minerals in the ore bodies, occurring as yellow and whitish gray in color and with a subhedral granular texture (up to 2 mm), respectively (Figure 8a,b). Pyrite is either associated with or occurs as inclusions in chalcopyrite (Figure 8c,d). Sphalerite is characterized by dark gray coloring associated with chalcopyrite and pyrite, forming exsolution textures produced by chalcopyrite (Figure 8b,c,e). These minerals were developed in the main ore mineralization phase (Figure 9). On the other hand, the oxidation and supergene mineralization events represent the second phase of mineralization, including covellite and goethite formed after chalcopyrite and pyrite, respectively (Figures 8 and 9).
Geochemical Characteristics

Geochemistry of the Least-Altered Metasediments
Ten representative samples collected from the least-altered metasediments of the Sakarkaya Formation were analyzed for major, trace, and rare-earth element contents (Table 1). Samples from the metasandstones are classified as mainly wackes and, rarely, Fe-sand and Fe-shale based on the geochemical classification of the terrigenous sandstones and shales by Herron [36] (Figure 10a). The samples have SiO2/Al2O3 ratios ranging from 2.7 to 5.5 with an average of 4.3, which are similar to upper continental crust (UCC) [37] (~4.3 SiO2/Al2O3 ratio), suggesting that they were sourced from crustal felsic rocks. It also appears in Figure 10b,c that the Sakarkaya metasediments have acidic/intermediate characteristics, lying mostly in the field of the metavolcanic tuffs, metagreywackes, and arkosic sands [38] according to their low K/Rb ratios (mean = 312.8). In the F1-F2 classification diagram (Figure 10d), the metasediments are mostly comparable with the compositional characteristics of the P4-quartzose sedimentary provenance that forms within passive and active continental margins (Figure 10e) due to recycling from old sedimentary rocks derived from highly weathered felsic terrains. The metasandstones have low total rare earth element contents (∑REE) (up to 145.14 ppm with an average of 88.96 ppm), ∑REE/∑HREE = 6.59-10.43, (La/Yb)N = 5.38-14.29, and a positive Eu anomaly (Eu/Eu* = 0.68-1.27) that are similar to the upper continental crust (UCC) of Taylor and McLennan [37] (Figure 10f).

Figure 10 caption (partial): fields in (b,c) after [38]; unmetamorphosed arkosic sands after van de Kamp et al. [39], low-grade metagreywackes after Condie et al. [40] and Caby et al. [41], and higher-grade metavolcanic tuffs after van de Kamp [42]; (d) plot of samples in discriminant functions F1 vs. F2 (provenance fields after Roser and Korsch [43]); (e) plot of discriminant scores along Function 1 vs. 2 after Bhatia [44]; (f) upper continental crust (UCC)-normalized REE patterns [37].
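For readers who want to reproduce the normalized REE parameters quoted above, a minimal sketch is given below. The UCC reference values are approximate placeholders (substitute the Taylor and McLennan [37] values actually used), and Eu/Eu* is computed with the common Eu_N/√(Sm_N·Gd_N) convention, which may differ in detail from the authors' calculation.

```python
import math

UCC = {"La": 30.0, "Sm": 4.5, "Eu": 0.88, "Gd": 3.8, "Yb": 2.2}     # ppm, placeholder reference
sample = {"La": 25.0, "Sm": 4.0, "Eu": 0.90, "Gd": 3.5, "Yb": 1.8}  # ppm, illustrative analysis

norm = {el: sample[el] / UCC[el] for el in UCC}                # normalized concentrations
la_yb_n = norm["La"] / norm["Yb"]                              # (La/Yb)_N
eu_anomaly = norm["Eu"] / math.sqrt(norm["Sm"] * norm["Gd"])   # Eu/Eu*

print(f"(La/Yb)_N = {la_yb_n:.2f}, Eu/Eu* = {eu_anomaly:.2f}")
```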
Alteration Geochemistry
Two main alteration zones surround the Cu-Pb±Zn-bearing ore mineralization in the Halılar area. These are represented by zone-1 (sericite-quartz-chlorite ± kaolinite ± pyrite) and zone-2 (calcite-epidote-albite ± chlorite ± sericite), and they were analyzed for major, trace, and REEs (Table 2). Based on the alteration index (AI) [45] and advanced argillic alteration index (AAAI) of Williams and Davidson [46], samples from each zone show opposite alteration trends (Figure 11a). The ore zone and alteration zone-1 fall along the trend of silicification/potassic alteration, while alteration zone-2 falls along the carbonation/chloritization alteration trend (Figure 11a). Based on the alteration boxplot relationship between the chlorite-carbonate-pyrite index (CCPI) of Large et al. [47] and the AI of Ishikawa et al. [45], the samples of the ore zone and zone-1 are clustered in the field of strongly altered rock, having chlorite-sericite-pyrite alteration, while the ore zone is affected by extensive pyritization (Figure 11b). On the other hand, zone-2, within the carbonate-altered host rock field, shows Mn-carbonate-sericite-chlorite alteration (Figure 11b).

Figure 11 caption (partial): (a) AI [45] vs. AAAI [46]; (b) AI [45] vs. CCPI [47] of the studied alteration samples from the Halılar area.
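The two indices plotted in Figure 11 are commonly computed as in the sketch below, which uses the standard published forms of AI (Ishikawa et al. [45]) and CCPI (Large et al. [47]) with illustrative whole-rock values; the exact variants used by the authors should be checked against those references.

```python
def alteration_index(K2O, MgO, Na2O, CaO):
    """Ishikawa alteration index: AI = 100*(K2O+MgO)/(K2O+MgO+Na2O+CaO)."""
    return 100.0 * (K2O + MgO) / (K2O + MgO + Na2O + CaO)

def ccpi(MgO, FeO, Na2O, K2O):
    """Chlorite-carbonate-pyrite index: CCPI = 100*(MgO+FeO)/(MgO+FeO+Na2O+K2O)."""
    return 100.0 * (MgO + FeO) / (MgO + FeO + Na2O + K2O)

# example whole-rock values in wt% (illustrative only, not from Table 2)
print("AI   =", round(alteration_index(K2O=3.1, MgO=1.2, Na2O=0.4, CaO=0.6), 1))
print("CCPI =", round(ccpi(MgO=1.2, FeO=5.0, Na2O=0.4, K2O=3.1), 1))
```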
Mass Balance Calculations
The behavior of different elements, excluding immobile ones, is changeable during hydrothermal alteration processes depending on their volume changes and their mass transfer [48,49]. Gresens [33] and Grant [34,35] used mass-balance calculations to quantify hydrothermal alteration effects on the host rock within the mineralized regions and to determine the relative gain and loss of the various major and trace elements during hydrothermal alteration.
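A minimal sketch of the Grant-type isocon calculation is given below, assuming Al2O3 as the immobile monitor; the concentrations are illustrative placeholders rather than the analyses of Table 2, and volume change additionally requires rock densities, which are omitted here.

```python
def isocon_gains_losses(fresh, altered, immobile="Al2O3"):
    """Grant-style mass balance: mass change (%) and per-element gain/loss per unit of fresh rock."""
    f = fresh[immobile] / altered[immobile]          # mass of altered rock per unit of fresh rock
    mass_change_pct = 100.0 * (f - 1.0)
    changes = {el: f * altered[el] - fresh[el] for el in fresh}
    return mass_change_pct, changes

fresh   = {"SiO2": 70.0, "Al2O3": 13.0, "Fe2O3": 3.0, "K2O": 2.5, "Cu_ppm": 30.0}
altered = {"SiO2": 60.0, "Al2O3": 10.0, "Fe2O3": 8.0, "K2O": 4.0, "Cu_ppm": 3000.0}

mc, dC = isocon_gains_losses(fresh, altered)
print(f"mass change = {mc:.1f} %")
for el, d in dC.items():
    print(f"{el:7s} gain/loss = {d:+.2f}")
```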
Based on the trace element geochemical analyses, the ore zone and alteration zone-1 have high amounts of Cu and Pb, with an average of 9.9% and 11.3%, respectively, for the ore zone, and 0.32% and 0.12%, respectively, for zone-1 (Table 2). They are classified as a Cu-Pb type (Figure 12), which refers to the high concentrations of chalcopyrite and galena. Alteration zone-2 represents the Cu-Pb-Zn type (Figure 12), having low Cu, Pb, and Zn contents, with averages of 28.18 ppm, 47.95 ppm, and 98.07 ppm, respectively.
Al2O3 and TiO2 are immobile in all alteration zones during hydrothermal alteration; therefore, they were selected to assess the chemical changes due to the process of hydrothermal alteration by using the GEOISO-Windows software developed by Coelho [32]. The results of these calculations are illustrated through the isocon diagrams of Grant [34] and show the different patterns of major and trace element gains and losses (Figures 13 and 14 and Table 3). The samples from zone-1 are rich in SiO2, Fe2O3, K2O, and LOI, with lesser increases in the amount of CaO, P2O5, and MnO (Figure 14a). Gains in Ag, As, Cu, Mo, Pb, S, Sb, and Zn are also recognized within this alteration zone (Figure 14b). This zone is characterized by higher amounts of sulfur and iron, with variable copper, lead, and zinc contents reflecting high pyritization, with the main base metals providing higher mass (MC = 170.42) and volume change (VC = 182.1) (Table 3). SiO2 and K2O increases reflect high silicification and sericitization, which are comparable with the petrographic and mineralogical (XRD) data. In zone-2, CaO, Na2O, P2O5, TiO2, LOI, and carbon are enriched, reflecting calcite, epidote, and albite alterations (Figure 14c). The loss of Cu, Pb, and Zn is observed in this zone, providing lower MC (−3.18) and VC (−1.80) values (Figure 14d and Table 3).
Based on the trace element geochemical analyses, the ore zone and alteration zone-1 have high amounts of Cu and Pb, with an average of 9.9% and 11.3%, respectively, for the ore zone, and 0.32% and 0.12%, respectively, for zone-1 (Table 2).They are classified as a Cu-Pb type (Figure 12), which refers to the high concentrations of chalcopyrite and galena.Alteration zone-2 represents the Cu-Pb-Zn type (Figure 12), having low Cu, Pb, and Zn contents, with averages of 28.18ppm, 47.95ppm, and 98.07ppm, respectively.Al 2 O 3 and TiO 2 are immobile in all alteration zones during hydrothermal alteration; therefore, they were selected to assess the chemical changes due to the process of hydrothermal alteration by using the GEOISO-Windows software developed by Coelho [32].The results of these calculations are illustrated through the isocon diagrams of Grant [34] and show the different patterns of major and trace element gains and losses (Figures 13 and 14 and Table 3).The samples from zone-1 are rich in SiO 2 , Fe 2 O 3 , K 2 O, and LOI, with lesser increases in the amount of CaO, P 2 O 5 , and MnO (Figure 14a).Gains in Ag, As, Cu, Mo, Pb, S, Sb, and Zn are also recognized within this alteration zone (Figure 14b).This zone is characterized by higher amounts of sulfur and iron, with variable copper, lead, and zinc contents reflecting high pyritization, with the main base metals providing higher mass (MC = 170.42)and volume change (VC = 182.1)(Table 3).SiO 2 and K 2 O increases reflect high silicification and sericitization, which are comparable with the petrographic and mineralogical (XRD) data.In zone-2, CaO, Na 2 O, P 2 O 5 , TiO 2 , LOI, and carbon are enriched, reflecting calcite, epidote, and albite alterations (Figure 14c).The loss of Cu, Pb, and Zn is observed in this zone, providing lower MC (−3.18) and VC (−1.80) values (Figure 14d and Table 3).δ 34 S isotopic data from the sulfide-bearing ore deposits were obtained to determine the source of the sulfur and the origin of the sulfur-bearing fluids [51].The δ 34 S isotope values of ten pyrite, chalcopyrite, and galena samples collected from the highly altered and mineralized altered metasediments host rocks are in the range of −1.1 to −0.1‰V CDT (n = 3), −2.7 to −0.5 ‰V CDT (n = 3), and −3.5 to −2.1‰V CDT (n = 4), respectively (Table 4).Pyrites from a quartz vein have an average δ 34 S of 0.4‰VCDT (Table 4 and Figure 15a).By assuming the H 2 S as the sulfur species in solution, and based on the fractionation equations of Czamanske and Rye [52] and Ohmoto and Rye [51], the δ 34 S H2S values of the fluid have a narrow range of −2.54 to −0.08 ‰ V CDT (Table 4 and Figure 15b). 
δ 34 S isotopic data from the sulfide-bearing ore deposits were obtained to determine the source of the sulfur and the origin of the sulfur-bearing fluids [51]. The δ 34 S isotope values of ten pyrite, chalcopyrite, and galena samples collected from the highly altered and mineralized metasediment host rocks are in the range of −1.1 to −0.1‰ VCDT (n = 3), −2.7 to −0.5‰ VCDT (n = 3), and −3.5 to −2.1‰ VCDT (n = 4), respectively (Table 4). Pyrites from a quartz vein have an average δ 34 S of 0.4‰ VCDT (Table 4 and Figure 15a). By assuming H2S as the sulfur species in solution, and based on the fractionation equations of Czamanske and Rye [52] and Ohmoto and Rye [51], the δ 34 S H2S values of the fluid have a narrow range of −2.54 to −0.08‰ VCDT (Table 4 and Figure 15b).
Notes to Table 4: (1) based on the fluid inclusion thermometry (unpublished data); (2) calculated by using the sulfur isotope fractionation equations in Czamanske and Rye [52] and Ohmoto and Rye [51].
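The conversion from a measured sulfide δ 34 S value to the δ 34 S of H2S in the equilibrium fluid amounts to subtracting a temperature-dependent mineral-H2S fractionation. A rough sketch is given below; the fractionation coefficients are the commonly quoted values associated with refs. [51,52], quoted here from memory and to be verified against the originals, and the temperature is a placeholder rather than the fluid-inclusion value used for Table 4.

```python
# Illustrative sketch: δ34S(H2S) of the fluid from mineral δ34S via
# Δ(mineral - H2S) = A * 10^6 / T^2 (T in kelvin). Coefficients A are
# approximate, commonly quoted values; check refs. [51,52] before use.

A_COEFF = {"pyrite": 0.40, "chalcopyrite": -0.05, "galena": -0.63}  # x10^6/T^2

def d34s_h2s(d34s_mineral, mineral, temp_c):
    t_k = temp_c + 273.15
    delta_frac = A_COEFF[mineral] * 1e6 / t_k**2   # Δ(mineral - H2S), in per mil
    return d34s_mineral - delta_frac

# Placeholder temperature of 250 °C (not the unpublished thermometry value).
print(round(d34s_h2s(-1.1, "pyrite", 250.0), 2))
print(round(d34s_h2s(-3.5, "galena", 250.0), 2))
```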
Sources of Sulfur
There are many sulfur sources with distinct δ 34 S values: (1) the mantle source has a 0 ± 3‰ δ 34 S value [53]; (2) the magmatic source, in which the sulfur resulted from desulfidation and/or dissolution or from magmatic sulfides, has 0 to +9‰ δ 34 S [54]; (3) the seawater sources have a mean value of +20 ‰ δ 34 S; and (4) the strongly reduced sulfur source in the sedimentary rocks has very negative δ 34 S values [55].
In the Halılar area, the mean δ 34 S value of the sulfides is close to −1.62‰, suggesting a uniform magmatic sulfur source in which the sulfur originates either from the leaching and remobilization of the old magmatic sulfide or from the mantle source (Figure 16). Furthermore, the δ 34 S values of the studied sulfide minerals decrease from pyrite (−1.1 to 0.4‰ VCDT) and chalcopyrite (−2.7 to −0.5‰ VCDT) to galena (−3.5 to −2.1‰ VCDT) (Figure 17), which is compatible with the suggested trend of differentiation of Ohmoto and Rye [51]. Thus, the ore-bearing fluid appears to have a magmatic (mantle) source [51] with magmatic-hydrothermal signatures [56] (Figure 18).
Metal Source
The metasediments of the Sakarkaya Formation that host the Halılar Cu-Pb (±Zn) mineralization are slightly enriched in metallic elements (averages of Ag = 6.7 ppm, As = 101.9 ppm, Au = 0.04 ppm, Cu = 53.8 ppm, Mo = 1.8 ppm, Pb = 274.7 ppm, S = 389.0 ppm, Sb = 2.0 ppm, and Zn = 371.3 ppm) relative to the average UCC (Table 5). Moreover, the contents of the metallic elements in the Düztarla granitoid rocks also show higher values than typical UCC (mean values of Ag = 1.14 ppm, As = 84.05 ppm, Au = 0.36 ppm, Cu = 368.91 ppm, Mo = 324.68 ppm, Pb = 49.52 ppm, S = 1396.7 ppm, Sb = 2.34 ppm, and Zn = 414.69 ppm). When the sulfur is normalized to the UCC of Rudnick et al. [65], it is highly enriched in the granitoid rocks, but not in the metasediments (Figure 19a,b). Thus, the primary metal suppliers appear to be the metasediments and the intrusive Düztarla granitoid magmatism; together, they account for the metals in the Halılar brecciated-stockwork-type mineralization. Based on the geologic features and mode of occurrence, the Halılar metasediment-hosted Cu-Pb (±Zn) mineralization appears to have formed by epigenetic hydrothermal processes after sedimentation/diagenesis and metamorphism.
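As a small illustration of the normalization behind Figure 19, the sketch below divides the mean metal contents quoted above by UCC abundances to obtain enrichment factors. The UCC values used here are placeholders and should be replaced with those of Rudnick et al. [65].

```python
# Illustrative sketch: enrichment factors relative to the upper continental
# crust (UCC). UCC values below are placeholders; substitute the values of
# Rudnick et al. [65]. Sample means are the metasediment averages quoted above.

ucc_ppm    = {"Cu": 28.0, "Pb": 17.0, "Zn": 67.0}     # placeholder UCC values
sample_ppm = {"Cu": 53.8, "Pb": 274.7, "Zn": 371.3}   # Sakarkaya metasediments

for element in sample_ppm:
    ef = sample_ppm[element] / ucc_ppm[element]       # enrichment factor
    print(f"{element}: x{ef:.1f} UCC")
```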
Conclusions
The Halılar area contains two groups: the clastic Halılar Group that overlies the metamorphics of the pre-Late Triassic age or Permian limestones, and the Bilecik Group. The Halılar Group consists of the Bağcağız and Sakarkaya Formations, and the Bilecik Group is represented by two formations, including the Taşçıbayırı Formation and the Günören Limestone. The Sakarkaya and Bağcağız Formations were later intruded by Oligo-Miocene Düztarla granitoid rocks.
The Halılar base metal mineralization consists mainly of Cu-Pb sulfides with some Zn sulfide in the brecciated stockworks and veins. This type of vein mineralization is restricted to a fault gouge zone directed NE-SW and along the lower contact of the Sakarkaya and Düztarla granitic rocks. Two types of hydrothermal alteration zones with gradual boundaries can be observed in the main ore zone. These include zone-1 (sericite-quartz-chlorite ± kaolinite ± pyrite) and zone-2 (calcite-epidote-albite ± chlorite ± sericite). The main ore mineral assemblage consists of chalcopyrite, galena, pyrite, and sphalerite in an abundant amount of gangue minerals such as quartz, sericite, chlorite, and calcite forming along the quartz stockwork veins, as well as in the brecciated ore zones. The other oxidation and supergene mineralization includes covellite and goethite formed after chalcopyrite and pyrite, respectively.
The least-altered Sakarkaya metasediments are classified mainly as wackes and, rarely, Fe-sand and Fe-shale, which are relatively similar in chemical composition to the upper continental crust (UCC). They are sourced from the crustal felsic rocks and a quartzose sedimentary provenance formed within the passive and active continental margins. Mass-balance calculations reveal that the samples of zone-1 are enriched in SiO2, Fe2O3, K2O, and LOI, together with Ag, As, Cu, Mo, Pb, S, Sb, and Zn, reflecting a high degree of pyritization with sericitization and silicification. On the other hand, the samples of zone-2 show an increase in CaO, Na2O, P2O5, TiO2, LOI, and carbon, reflecting calcite, epidote, and albite alterations.
The mean δ 34 S value of the sulfides in the Halılar area is close to −1.62‰, suggesting a uniform magmatic sulfur source in which the sulfur originates either from leaching and remobilization of the old magmatic sulfide or from the mantle source. The sulfide δ 34 S values also show a differentiation trend from pyrite to galena. The ore-bearing fluid has δ 34 S H2S values ranging from −2.54 to −0.08‰, typical of a magmatic-hydrothermal signature [47].
Based on the normalization of the metallic elements in the Sakarkaya metasediments and Düztarla granitoid rocks to the UCC [38] and [65], these metasediments and granitoid rocks represent the primary source of the metals forming the Halılar brecciated-stockwork-veining-type mineralization. Overall, the geologic features and the mode of occurrence of the Halılar metasediment-hosted Cu-Pb (±Zn) mineralization suggest that it was formed by epigenetic hydrothermal processes after sedimentation/diagenesis and metamorphism.
Acknowledgments: …chemistry Research Laboratories at Istanbul Technical University (Turkey). Great appreciation goes to M. Kumral (ITU, Turkey) and A. Abdelnasser (Benha University) for their support during all stages of work on the article. Z. Doner (ITU, Turkey) and A. Unal (ITU, Turkey) are also thanked for their help during fieldwork. The help of F. Yavuz (ITU, Turkey) and G. Ustunisik (South Dakota School of Mines, USA) was highly appreciated during the review of the article. The editor and two anonymous reviewers are thanked for carefully reading the manuscript and for their constructive comments.
Conflicts of Interest:
The author declares no conflict of interest.
Figure 3 .
Figure 3. (a) Rhyolitic metatuffs of the Bağcağız Formation; (b) XPL photomicrograph of the mineral composition of rhyolitic metatuffs; (c) yellowish-colored metasandstone of the Sakarkaya Formation; (d,e) XPL photomicrograph of the poorly sorted quartz, feldspar, and mica grains bounded by iron oxide in the subarkosic-to-quartz arenitic of the metasandstone; (f) general view of the Bilecik Limestone; (g) XPL photomicrograph of the calcite with feldspar, mica, and volcanic rock fragments in the sandy limestone of the Taşçıbayırı Formation; (h) XPL photomicrograph of the calcite and dolomite with Fe-oxide minerals in Günören dolomitic limestone; (i) granodiorite of Düztarla intrusive rocks; (j) XPL photomicrograph of the oligoclase, quartz, and microperthite with a subordinate amount of biotite and Fe-oxide minerals in granodiorite; (k) granite from the Düztarla intrusion invaded into the Bağcağız Formation; (l) XPL photomicrograph of the mineral composition of the Düztarla granite intrusion. Abbreviations: biotite (bt), calcite (cal), dolomite (dol), K-feldspar (kfs), kaolinite (kln), muscovite (ms), opaque (opq), plagioclase (pl), quartz (qz), and sericite (ser).
Figure 9 .
Figure 9. Paragenetic sequence of mineralization phases in the Halılar area.
Figure 14 .
Figure 14. Gain/loss of major oxides (wt.%) (a,c) and trace elements (ppm) (b,d) in the alteration zones during hydrothermal alteration, based on the mean data of the representative least-altered samples as a reference for calculations.
Figure 15 .
Figure 15. Histograms of (a) the δ 34 S isotopic compositions for sulfide minerals (pyrite, chalcopyrite, and galena) and (b) the δ 34 S H2S of the fluid that formed the sulfides in the Halılar area.
Figure 17 .
Figure 17. Distribution of the δ 34 S values in the studied sulfide minerals from the Halılar area.
Table 1 .
The major oxides and trace and rare-earth elements (REE) of metasediments in the Sakarkaya Formation.
Table 2 .
Major oxides and trace and rare-earth elements (REE) of the ore zone and alteration zones 1 and 2 in the Halılar area.
Table 3 .
Element/oxide mass changes in relation to the original whole-rock mass ((Mfi-Moi)/Mo) and in relation to the original element/oxide mass in the original rock ((Mfi-Moi)/Moi).
Table 4 .
Sulfur isotope values of sulfides from the Halılar area.
Table A1 .
XRD analyses of representative samples from the ore zone.
Assessment of the levels of termination of the conus medullaris and thecal sac in the pediatric population
Purpose This study assessed the position of the termination of the conus medullaris (the point where the spinal cord tapers to an end) and thecal sac (the sheath of dura mater that surrounds the spinal cord and caudal nerve roots) in a large pediatric population, to characterise the nature of the pediatric Gaussian distribution and assess whether age affected the distribution. The study further aimed to assess the effect of gender on termination positions. Methods A total of 520 MRI spine studies of children aged between 1 month and 19 years old were collected from two pediatric tertiary referral centres in the UK and Italy. Studies with pathological findings were excluded, and normal scans were found using keyword search algorithms on a database of radiologists’ reports. The reported scans were individually assessed and reviewed by two experienced neuroradiologists. The termination points of the conus medullaris and thecal sac were determined for each study. Local IRB approvals were sought. Results The results showcased a Gaussian distribution in both conus medullaris (r=0.8997) and thecal sac termination levels (r=0.9639). No statistically significant results were noted with increasing age for the termination positions of the conus medullaris or thecal sac (p = 0.154, 0.063). No statistical significance was observed with gender variation with either anatomical landmark. A weak positive correlation was observed between the termination levels of the conus medullaris and the thecal sac (r=0.2567) Conclusion Termination levels across all pediatric age range followed a Gaussian distribution. Knowledge of normal termination levels has relevant clinical implications, including the assessment of patients with suspected spinal dysraphism. Supplementary Information The online version contains supplementary material available at 10.1007/s00234-022-03111-8.
Introduction
Embryologically, the spinal cord forms mostly from the neural plate generated during primary neurulation, with only its caudal metameres (i.e., S3-S5 and coccygeal levels) and filum terminale deriving from the processes of junctional and secondary neurulation [1]. The conus medullaris (CM) is the point where the spinal cord tapers and comes to an end. During fetal development, the CM progressively "ascends" along the vertebral column as a combined result of the phenomenon of retrogressive differentiation of the secondary neural tube and the differential longitudinal growth rates of the vertebral column and spinal cord. The CM eventually occupies its final, "adult" position shortly after birth, but studies have differed on the precise time at which this is reached, i.e., whether the conus settles in its final position at 2 months or at 5 months of age [2]. In fact, some of the literature has suggested that no further ascent occurs after birth and that the ascent entirely occurs within gestation, especially between weeks 9 and 16 [3]. Thus, this study sought to further clarify the matter and to provide statistically significant conclusions on the timeframe when the final position is achieved.
Although the variation of its position is found among individuals, its peak incidence in the adult population has been reported at the level of the L1 vertebra [4], with a mean variation ranging from the T12 to the upper L3 vertebrae. Similarly, slight variation is seen in the adult position of the thecal sac terminus (TS); however, TS is generally expected to terminate at the level of the S2 vertebra, with mean variation ranging from the lower L5 to the lower S3 vertebrae [4,5].
Several studies have long established a Gaussian distribution in the termination levels of the conus medullaris and the thecal sac in the adult population. However, there lies existing variation in the literature as to what age the adult termination position is achieved. Thus, this study aimed to characterise the pediatric Gaussian distribution further and assess whether any changes existed across the age range.
Few studies have been conducted solely in the pediatric age groups, and there is yet to be sufficient literature ascertaining whether changes in the Gaussian distribution occur across increasing pediatric age and gender. This study, with its large sample size (n=520) aimed to reinforce this distribution amongst pediatric populations.
Prior studies [6] have employed similar methodologies to assess at which age in particular the adult termination level is attained. This study sought to further quantitatively outline the nature of the Gaussian distribution itself, and to ascertain whether age or gender had an important role to play in this respect. Previous literature [7] has assessed the effect of gender on the distribution; however, it was limited by an upper age of 6 months, as opposed to the entire pediatric age range. In addition, that study only considered ultrasound as the imaging modality, which has its own limitations.
Methods and materials
This study was a retrospective observational study conducted in two tertiary referral centres in the UK and Italy with local IRB approvals. The selected age range of subjects was 1 month to 19 years (mean age 7.78 years). The inclusion criteria included subjects with normal MRI whole spine. We included patients with brain tumours and cranial trauma with normal spinal imaging. Patients with spinal dysraphism, vertebral segmentation anomalies, those with previous spinal cord disease or injury, or, those who had undergone previous spinal surgery for whatever reason were excluded.
The patient databases were narrowed down using these inclusion and exclusion criteria, and the relevant MRI studies were reviewed. Each data point was subsequently manually reported by two experienced neuroradiologists. The sequences were acquired using 1.5T scanners as part of routine clinical protocols which included T1, T2 axial, and sagittal sequences and/or Short-TI Inversion Recovery (STIR). The images were taken in the supine position as per standard clinical practice. Termination level was selected at the level where the tapering of the conus and thecal sac could be seen most clearly, with all slices considered before the optimal slice selection.
The selection process included reviewing patient databases on tens of thousands of imaging reports and restricting our search to only those where an experienced neuroradiologist had previously reported that the spine appeared normal, or similar words to that effect. This technique excluded several thousand cases, with the remaining cases assessed chronologically, until an adequate sample size was obtained. Studies with unclear quality were excluded, due to the possibility of bias.
There was no assessment of interrater agreement for the measurements; however, in equivocal cases, a joint opinion between two experienced neuroradiologists was agreed upon. The images were analysed in the sagittal plane, and the same technique was used to measure the level of both terminations of the CM and TS. Lines were drawn to triangulate the exact point at which each structure terminated so that the corresponding vertebral level could be ascertained by drawing a perpendicular line from the termination point. The vertebral level was found by counting from the top downwards and upwards from the last lumbar vertebra, which was identified as the last well-formed vertebral body above the sacrum. Fig. 1 shows the approach to assessing the level of termination.
In accordance with techniques adopted in similar previous studies, the specific vertebra where the structures terminated was then divided into thirds (upper, middle, and lower) so that the level of termination could be ascribed to a more specific part of the vertebra. In cases where they terminated at an intervertebral disc, they were labelled as being at the level of the appropriate disc. Finally, to conduct statistical analysis, each vertebral level was allocated a numerical value. For the termination of the conus medullaris, it ranged from 1 (lower third of T11) to 16 (middle third of L3), and for thecal sac termination from 1 (L5-S1 disc) to 13 (S3-S4 disc).
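The ordinal coding can be made explicit as a lookup table. The enumeration below is a hypothetical reconstruction, not taken from the paper: it counts each vertebral third and each intervening disc as one step, which reproduces the stated endpoints and ranges (1 = lower third of T11 to 16 = middle third of L3 for the conus; 1 = L5-S1 disc to 13 = S3-S4 disc for the thecal sac), but the authors' exact scheme may differ in detail.

```python
# Hypothetical reconstruction of the ordinal coding described above; endpoints
# match the text, the precise scheme used in the study may differ.

def ordinal_scale(levels):
    """Assign consecutive integer codes (starting at 1) to ordered level labels."""
    return {label: code for code, label in enumerate(levels, start=1)}

conus_levels = [
    "lower T11", "T11-T12 disc",
    "upper T12", "mid T12", "lower T12", "T12-L1 disc",
    "upper L1", "mid L1", "lower L1", "L1-L2 disc",
    "upper L2", "mid L2", "lower L2", "L2-L3 disc",
    "upper L3", "mid L3",
]
thecal_levels = [
    "L5-S1 disc",
    "upper S1", "mid S1", "lower S1", "S1-S2 disc",
    "upper S2", "mid S2", "lower S2", "S2-S3 disc",
    "upper S3", "mid S3", "lower S3", "S3-S4 disc",
]

conus_code = ordinal_scale(conus_levels)    # 16 codes
thecal_code = ordinal_scale(thecal_levels)  # 13 codes
print(conus_code["mid L1"], thecal_code["mid S2"])  # 8, 7 (near the reported means)
```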
Statistical analysis
The Product Moment Correlation Coefficient was used to describe the normality of the data, whereby our obtained dataset was contrasted to a perfectly generated counterpart using its key parameters. This technique has been employed in previous studies. We further calculated kurtosis and skewness values, to reinforce the Gaussian distribution.
The effect of gender on the level of termination was assessed using the Wilcoxon signed rank test. The effect of age on the termination levels was assessed using the Kruskal-Wallis ANOVA test. The Product Moment Correlation Coefficient was used to assess for any correlation between the termination levels of the conus medullaris and the thecal sac.
The statistical analysis was conducted and cross-checked internally, by researchers with backgrounds in statistics.
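A minimal sketch of this analysis pipeline is shown below; it is not the authors' code. The coded termination levels and age bins are placeholder arrays, the normality check is implemented as a probability-plot correlation (one way of reading the "perfectly generated counterpart" comparison described above), and the gender comparison uses the two-sided rank-sum form reported in the Results.

```python
# Illustrative sketch of the described statistics, using numpy/scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
conus = rng.normal(7.9, 1.6, size=520).round()   # placeholder coded levels
male, female = conus[:270], conus[270:]          # placeholder gender split
age_groups = np.array_split(conus, 5)            # placeholder age bins

# Normality via correlation with a "perfect" Gaussian counterpart built from
# the sample's own mean and SD (probability-plot correlation).
quantiles = stats.norm.ppf((np.arange(1, len(conus) + 1) - 0.5) / len(conus),
                           loc=conus.mean(), scale=conus.std(ddof=1))
r_normality, _ = stats.pearsonr(np.sort(conus), quantiles)

gender_p = stats.ranksums(male, female).pvalue   # two-sided Wilcoxon rank-sum
age_p = stats.kruskal(*age_groups).pvalue        # Kruskal-Wallis ANOVA

print(round(r_normality, 4), round(gender_p, 4), round(age_p, 4))
```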
Results
A total of 520 MRI studies were assessed from the two centres (250 females and 270 males).
Level of termination of the conus medullaris
A Gaussian distribution to the level of the conus termination was found, with the mean at the level of lower L1. When the data was contrasted to that of its perfectly normally distributed counterpart data, an extremely strong positive correlation (r=0.8997) was produced (Fig. 2). The population mean was calculated to be 7.904 (95% CI 7.71 to 8.09). The termination levels ranged from upper T12 to the L2/L3 disc.
Effect of gender on termination level of conus medullaris
The study showed Gaussian distributions in both female and male subset populations. In males, the data for the conus medullaris showed a strong positive correlation against its normally distributed dataset (r=0.9717). Within the female pediatric population, the data was also shown to be normally distributed for the conus medullaris (r=0.8942). The mean level of termination of the conus medullaris was 7.88 in males (95% CI 7.74 to 8.04) and 7.93 (95% CI 7.78 to 8.11) in females, both corresponding to the mid-L1 level. The termination level of the conus ranged from upper T12 to the L2-L3 disc in males, whereas it ranged from upper T12 to the mid-L3 vertebra in females. This is all strongly in keeping with Gaussian distributions across both genders.
Level of termination of the thecal sac
A similar Gaussian distribution was noted for the level of the tip of the thecal sac (r=0.9639). Fig. 3 showcases the results, with a Gaussian distribution clearly visualised. The mean level of termination was 7.128 (95% CI (6.95, 7.31)), corresponding to mid-S2. The data ranged from the L5-S1 disc to the S3-S4 disc.
Effect of gender on termination levels of conus medullaris and thecal sac
The thecal sac data, classified by gender, also showcased a strong Gaussian distribution. The male data was shown to have a strong positive correlation (r= 0.9397). The mean level of termination of the thecal sac was 7.28 (95% CI 7.15, 7.43) in males and 6.94 (95% CI 6.78, 7.10) in females, both at the level of lower S2 (Figs. 4 and 5). The data ranged from the L5-S1 disc to the level of lower S3 in males, and the L5-S1 disc to the S3-S4 disc in females.
A 2-tailed Wilcoxon rank sum test was performed and did not allow rejection of the null hypothesis at the 5% significance level (p=0.9955). This showed no statistically significant difference between the distribution of termination positions in the male and female populations. This strengthens the previously published view that gender has no noteworthy impact on the positions of termination.
Correlation between the position of conus medullaris termination and thecal sac termination
A positive correlation (r=0.2567) was noted between the levels of termination of the conus medullaris and thecal sac in each patient (Fig. 6). This is in line with the published data in adults [5].
Effect of age on the levels of termination of the conus medullaris and the thecal sac
The data across pediatric age ranges was compared for any trends. Firstly, the subset of the first 6 months of life was considered in isolation to further elucidate whether a clear ascent was noted within this time frame (Fig. 7). The graph shows no noteworthy changes in termination level across age in the first year of life. A Kruskal-Wallis ANOVA test was performed to assess whether age had a significant impact on the level of termination of the conus medullaris in the first 6 months of life. A non-significant p value of 0.1218 was obtained, suggesting no ascent occurs in the first 6 months.
The entire pediatric age range was subsequently considered and is shown in supplementary figure 1. The graph shows no noteworthy changes in termination levels across age in the pediatric population. A Kruskal-Wallis ANOVA test was performed to assess whether age had a significant impact on the level of termination of the conus medullaris in the age range of 0-19 years. A non-significant p value of 0.1543 was obtained, suggesting that age has no significant influence on the termination level of the conus medullaris.
Similar methods were used to assess the termination positions of the thecal sac in the first year of life and across the pediatric age range respectively. The Kruskal-Wallis analysis obtained p values of 0.2859 and 0.063 respectively, suggesting that age has no significant impact on the termination level of the thecal sac. Fig. 8 shows the change in the termination positions of the thecal sac in the first year of life.
Discussion
This study has several important clinical implications. Irradiation for spinal tumours is traditionally performed by placing the caudal border of the spinal field at S2/S3 intervertebral space [8]. However, such a placement has been shown to miss 8.7% of thecal sacs [8], risking a failure of successful treatment of the underlying tumour. Whilst contouring and placement are continually improving, significant variability is still being observed amongst high-volume practitioners [9]. The clinical risk of underestimating the contour may lead to greater rates of central neuropathy [9]. This serves to denote the extreme clinical importance of a correct understanding of the distribution of the thecal sac and likely termination position in children.
Level of termination of conus medullaris
The mean level of termination of the conus in populations of adults was noted to be at upper L1 [4,10]. This study found a mean pediatric termination around mid-L1 level. This may suggest the conus medullaris is more low-lying in pediatric populations; however, no statistical significance was noted.
This study has concluded that the absolute lower limit for a normal conus medullaris is at the level of the L2-L3 disc. This is in line with previous literature [4]. The current ISPN position is that any conus below mid-L2 should be considered tethered until proven otherwise [11]. This may suggest that a new definition of the absolute lower limit of the CM is warranted.
Level of termination of the thecal sac
Once again, these results support the previously published literature on the level of termination of the tip of the thecal sac in adult populations [4]. Similar mean positions were noted, with the tip lying around the level of mid-S2. Previous literature in pediatric populations has concluded an average position of upper S2 [8], although amongst a small sample size (n=23). The results are clinically valuable as they may suggest no specific age adjustment needs to be made with respect to the positioning of spinal fields in irradiation procedures, as there is no increased risk of overestimating or underestimating contours.
Correlation between the position of conus medullaris termination and thecal sac termination
The resultant correlation (r=0.2567) is in line with previous studies, which have shown the correlation in adult populations to be r=0.309 and 0.32, respectively [4,12]. This allows extrapolation that a similar correlation exists and is maintained throughout all ages. This correlation is unlikely to be the result of error, due to both its replicability from previous studies [4] and the relatively large sample sizes in the studies.
Accurate knowledge of the relationship between the CM and TS terminuses can be very important in the assessment of patients with suspected cord tethering who have a conus in a normal position. That is, the overall position of the conus may still be "normal" in general terms (e.g., L2), but the conus is actually tethered and stretched. In this case, there could be a loss of proportionality between the CM and TS. In the reverse cases, i.e. in patients with caudal regression syndrome type I, both the cord terminus and TS are typically higher than normal. Thus, this reinforces the need to assess for proportionality between the CM and TS positions, that is, assisting in the evaluation of the aforementioned subtle cases.
Effect of gender on position of conus medullaris termination and thecal sac termination
In both male and female children, the mean levels of the conus medullaris and thecal sac terminations were mid-L1 and S2, respectively. This suggests gender has a limited effect on the positions of the conus medullaris and thecal sac. However, the mean conus level across both genders was extremely similar, which disagrees with previous studies in adults [4] which found that gender has a small but significant impact on termination levels of the conus medullaris. However, studies amongst children [3] have equally demonstrated no significant difference across genders. This may suggest that gender may only become an influencing factor once adulthood is reached. This notion is strengthened in a prior study where women, particularly elderly ones, had a lower-lying conus medullaris [13]. This may suggest that a more complex relationship between gender, age, and termination positions exists.
Effect of age on the levels of termination of the conus medullaris and the thecal sac
Prior studies in children aged 1-6 months have found a mean termination position of upper L2 [3]. This study obtained a mean position of lower L1 (8.56) for the termination level of the CM in the first 6 months, suggesting variability exists. Furthermore, researchers have found that in 15-20-year-olds, the mean termination level was the middle third of L1 [3]. These studies implied that the conus may be more low-lying in pediatric populations and may, in fact, continue to ascend gradually across increasing childhood age ranges and further into adulthood.
No statistically significant differences were observed with increasing age on the final termination position of the conus medullaris within the first 6 months of life. This seems incongruent with the notion that the conus medullaris reaches its final adult position at 2 or 5 months old. Non-significant results were additionally obtained for increased age on the termination level of the conus medullaris across the entire pediatric population. This supports the idea that no further ascent occurs following childbirth and supports the notion that ascent is entirely in utero, between weeks 9 and 16 [14].
No statistically significant difference was noted amongst age ranges and the termination level of the thecal sac. These results are in agreement with the literature [4,15] that increasing age has no impact on the thecal sac level and this suggests no ascent occurs following childbirth.
Limitations
The relative paucity of studies amongst the age group of 0-1 years of life is a limitation of this study. However, this is to be expected given routine spinal MRI in an otherwise asymptomatic infant is uncommon.
The assessment method employed in the study may have introduced observer bias between neuroradiologists, and this was minimised by consensus reads in equivocal cases. Confounding factors such as child height were difficult to control, especially in tertiary referral centres catering to diverse communities where significant changes in pediatric height were observed.
Conclusion
This study reaffirmed the fact that a Gaussian distribution exists with respect to the levels of termination of both the CM and TS within the pediatric population. Gender was shown to have a limited effect, and there was no general relationship between increasing age and termination levels, suggesting that the final level is obtained at birth.
Future work should be conducted to elucidate the relationship between increasing age on termination positions. This study highlighted the critical importance of imaging investigations in light of this Gaussian distribution and the knock-on effects they play in patient care.
Funding
No funding was needed for this work.
Conflicts of interest/Competing interests None of the authors have any conflicts of interest to disclose.
Ethics approval The centres in Italy and UK had respective IRBs. Formal ethics was not needed as the analysis was done on a retrospective case notes review basis.
Informed consent Not required for this study.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
TransRegex: Multi-modal Regular Expression Synthesis by Generate-and-Repair
Since regular expressions (abbrev. regexes) are difficult to understand and compose, automatically generating regexes has been an important research problem. This paper introduces TransRegex, for automatically constructing regexes from both natural language descriptions and examples. To the best of our knowledge, TransRegex is the first to treat the NLP-and-example-based regex synthesis problem as the problem of NLP-based synthesis with regex repair. For this purpose, we present novel algorithms for both NLP-based synthesis and regex repair. We evaluate TransRegex with ten relevant state-of-the-art tools on three publicly available datasets. The evaluation results demonstrate that the accuracy of our TransRegex is 17.4%, 35.8% and 38.9% higher than that of NLP-based approaches on the three datasets, respectively. Furthermore, TransRegex can achieve higher accuracy than the state-of-the-art multi-modal techniques with 10% to 30% higher accuracy on all three datasets. The evaluation results also indicate TransRegex utilizing natural language and examples in a more effective way.
Abstract-Since regular expressions (abbrev. regexes) are difficult to understand and compose, automatically generating regexes has been an important research problem. This paper introduces TRANSREGEX, for automatically constructing regexes from both natural language descriptions and examples. To the best of our knowledge, TRANSREGEX is the first to treat the NLP-and-example-based regex synthesis problem as the problem of NLP-based synthesis with regex repair. For this purpose, we present novel algorithms for both NLP-based synthesis and regex repair. We evaluate TRANSREGEX with ten relevant state-of-theart tools on three publicly available datasets. The evaluation results demonstrate that the accuracy of our TRANSREGEX is 17.4%, 35.8% and 38.9% higher than that of NLP-based approaches on the three datasets, respectively. Furthermore, TRANSREGEX can achieve higher accuracy than the stateof-the-art multi-modal techniques with 10% to 30% higher accuracy on all three datasets. The evaluation results also indicate TRANSREGEX utilizing natural language and examples in a more effective way.
Index Terms-regex synthesis, regex repair, programming by natural languages, programming by example
I. INTRODUCTION
As a versatile mechanism for pattern matching and searching, regular expressions (abbrev. regexes) have been widely used in different fields of computer science such as programming languages, natural language processing (NLP) and databases due to the high effectiveness and accuracy [1]- [6]. Unfortunately, despite their popularity, regexes can be difficult to understand and compose even for experienced programmers [1], [3], [7]- [9].
To alleviate this problem, prior research has proposed techniques to automatically generate regexes. For example, several techniques generate regexes from natural language (NL) descriptions [10]- [13], while others synthesize regexes from examples [14]- [20]. Though these techniques help lessen the difficulties of automatic regex synthesis, they have obvious drawbacks as follows.
Existing NLP-based techniques can only generate regexes similar in shape to the training data and have relatively low accuracy on simple benchmark datasets (e.g., SOFT-REGEX [13] achieved only 62.8% accuracy on benchmark NL-RX-Turk [11]). Furthermore, NLP-based techniques are impeded by the ambiguity and imprecision of NL even for stylized English [13], [21], [22]. For example, according to [13], among 921 incorrectly predicted regexes, over 38.4% are caused by ambiguity of NL descriptions and 27.8% are from imprecision. Additionally, Zhong et al. [22] found that NLP-based techniques may not make correct predictions if words in these NL descriptions are not covered by the training data.
On the other hand, the example-based synthesis approaches rely on high quality examples provided by users. The synthesized regexes may be under-fitting or over-fitting when the given examples do not meet the implicit quality requirements (e.g, insufficient or not characteristic enough). However, examples with high quality are often unavailable in practice. It poses difficulties in applying purely example-based approaches. In addition, these approaches have severe restrictions on the kinds of regexes that they can synthesize (e.g., absence of Kleene star [14], [15], limited character occurrences [16]- [18], [20] or constraining to binary alphabet [19]). Therefore, to better synthesize regexes it would be ideal to take advantage of both NL and examples (called NLP-andexample-based synthesis or multi-modal synthesis): the use of advanced NLP-based techniques can reduce the amount of required (characteristic) examples meanwhile alleviating the amount of effort from users; while the use of examples can effectively disambiguate or correct errors in the descriptions. Further, a survey on posts on regex synthesis shows that many programmers actually use NL descriptions as a major resource, and leverage some example(s) to resolve the ambiguities of NL [23]. On the other hand, insufficient (characteristic) examples limit the generalization ability of example-based approaches, while incorporating NL can improve the generalization ability, and help to drastically narrow down the search space [23]. Actually there have been recent attempts in this direction [21], [24], in which they first translated the NL description into a sketch 1 , then searched the regex space defined by the sketch guided by the given examples. However, the forms of translated sketches are restricted. This prevents regexes from being synthesized correctly when the generated sketches are inappropriate (e.g., logically-incorrect). In such a case, the incorrectness will be inherited from sketches to the subsequent regex. Moreover, while these works [21], [24] have achieved relatively high accuracy on simple datasets, they did not perform well on complex and realistic datasets. In latter datasets, the NL descriptions are longer, more complicated, and describe the regexes which are more complex in terms of length and tree-depth [22], [25].
We observe that most of the incorrect regexes generated by NLP-based techniques are very similar to the target regexes with subtle differences, and can be made equivalent 2 to the target regexes with only minor modifications (e.g., reordering/revising characters or quantifiers). This motivates us to view the NLP-and-example-based regex synthesis problem as the problem of NLP-based synthesis with regex repair, and develop the first framework, TRANSREGEX, to leverage both NL and examples for regex synthesis by using NLP-based and regex repair techniques. TRANSREGEX uses an NLP-based synthesizer to convert the description into a regex. If the synthesized regex is inconsistent with the given examples, it then leverages a regex repairer to modify the synthesized regex guided by the examples, and returns the revised regex.
Considering the lack of focus on validity in previous NLP-based works (e.g., on the dataset StructuredRegex, less than half of the regexes predicted by DEEP-REGEX (Locascio et al.) [11] are valid, see Section IV-C), we propose a two-phase NLP-based synthesis model S 2 RE to solve this problem as follows. By rewarding the S 2 RE model with validity, in addition to the semantic correctness reward used in previous works [10], [13], our model tends to generate more valid regexes than previous models. If the generated regexes are still invalid after that, the invalid2valid model, wherein the structure of S 2 RE is reused, is used to transform them into valid ones to further guarantee the validity of regexes.
There are various drawbacks or limitations in existing repair techniques, such as only supporting positive or negative examples [26], [27], not supporting regexes with the conjunction (&) operator, or having problems of under-fitting/over-fitting [20], [28]. To overcome these drawbacks or limitations, we present a novel and efficient algorithm, SYNCORR, based on Neighborhood Search (NS), to repair an incorrect regex so that the repaired regex is consistent with the examples. In particular, SYNCORR alleviates the under-fitting/over-fitting problem by preserving the integrity of the small sub-regexes.
TRANSREGEX can greatly reduce the aforementioned errors caused by ambiguity, imprecision or unknown words in NL descriptions, by using the example-guided regex repairer. In comparison with existing multi-modal works [21], [24], TRANSREGEX avoids their limitations by not restricting the sketches of the generated regex. Furthermore, TRANSREGEX modularizes the synthesis problem into NLP-based regex synthesis and example-guided regex repair, allowing one to use their own algorithms or any other new algorithms instead. We evaluate TRANSREGEX by comparing it against ten state-of-the-art tools on three publicly available datasets. Our evaluation demonstrates that the accuracy of TRANSREGEX is 17.4%, 35.8% and 38.9% higher than that of NLP-based works on the three datasets, respectively. Further, TRANSREGEX achieves higher accuracy than the state-of-the-art multi-modal works [21], [24], with 10% to 30% higher accuracy on all three datasets. Our evaluation also reveals that our NLP-based model S 2 RE can generate 100% valid regexes on a complex dataset, whereas other NLP-based tools synthesize 49.6% to 90.6% valid ones. Finally, the evaluation results on regex repair also show that our SYNCORR has better capability than existing repair tools.
1 A sketch is an incomplete regex containing holes to denote missing components. 2 Two regexes are equal iff their corresponding languages are equivalent.
The contributions of this paper are listed as follows.
• We propose TRANSREGEX, an automatic framework which can synthesize regular expressions from both NL descriptions and examples. To the best of our knowledge, TRANSREGEX is the first to treat the NLP-and-example-based regex synthesis problem as the problem of NLP-based synthesis with regex repair.
• We introduce a two-phase algorithm S 2 RE for regex synthesis from NL. By rewarding the S 2 RE model with validity and using the invalid2valid model, S 2 RE generates more valid regexes while having similar or higher accuracy than the state-of-the-art NLP-based models.
• We present a novel algorithm SYNCORR for regex repair that (i) leverages Neighborhood Search (NS) algorithms to guide the search for a better regex which is consistent with the given examples from the neighborhoods of the incorrect regex, and (ii) utilizes some rewriting rules for sub-regex abstraction to preserve the integrity of some small sub-regexes, thereby alleviating under-fitting/over-fitting and efficiently reducing the search space.
• We conduct a series of comprehensive experiments comparing TRANSREGEX with ten state-of-the-art synthesis tools. The evaluation results demonstrate that the accuracy of our TRANSREGEX is 17.4%, 35.8% and 38.9% higher than that of NLP-based approaches on the three datasets, respectively, while TRANSREGEX can achieve higher accuracy than the state-of-the-art multi-modal techniques with 10% to 30% higher accuracy on all three datasets. The evaluation results also indicate TRANSREGEX utilizing natural language and examples in a more effective way.
II. OVERVIEW
In this section, we present an overview of TRANSREGEX. As illustrated in Fig. 1, TRANSREGEX consists of two steps, namely, the NLP-based regex synthesis (Section III-C) and the example-guided regex repair (Section III-D). In the first step, NLP-based regex synthesis takes the given NL description as input and tries to synthesize a regex from the NL description.
First, TRANSREGEX utilizes our NLP-based synthesizer S 2 RE to translate the NL description NL into a regex. As shown in Fig. 3, the encoder of S 2 RE generates latent vectors from the given description NL, and the decoder of S 2 RE synthesizes the corresponding regex. After that, TRANSREGEX employs the algorithm SYNCORR, which is based on Neighborhood Search (NS) and given in Fig. 4, to repair the incorrect regex. As we have mentioned in Section I, the incorrect regexes generated by S 2 RE may be very similar to the target regexes with subtle differences. Therefore, SYNCORR first converts the above incorrect regex to the abstract regex r = (<VOW><S><NUM><S>)<Q 7, > with symbolic symbols obtained by executing the function preprocess, so that the integrity of small sub-regexes (e.g., [AEIOUaeiou] and [0-9]) can be retained as much as possible in the subsequent steps. Then SYNCORR calls the function transformations to get the neighbours of r, i.e., some abstract regexes (e.g., <VOW><S>(<NUM>)<Q 7, ><S>) that are similar to r, through a series of subtle transformations 3 (e.g., quantifier adjustment or element replacement, etc.). Next, SYNCORR maps these abstract regexes into corresponding concrete regexes using the function unpreprocess, and calculates the f value 4 of each regex. Finally, SYNCORR returns the regex [AEIOUaeiou].*[0-9]{7,}.* with an f value of 1. Leveraging NLP-based synthesis with regex repair, our TRANSREGEX is able to synthesize the correct regex mentioned above. This success case shows that TRANSREGEX can deal well with the ambiguity of NL and the invalidity of regexes produced by some NLP-based synthesizers.
In addition, it is worth noting that on the same example and the incorrect regex mentioned above, the repair tool RFIXER [28] produces the incorrect regex ([AEeiO01234U56789].*[0-9].*){2,}. Specifically, RFIXER cannot generalize to some unseen examples (e.g., a positive example a1234567); this results in the regex produced by RFIXER omitting some characters, such as "a", i.e., the generated regexes will be over-fitting. In contrast, SYNCORR avoids over-fitting by preserving the integrity of the small sub-regexes.
However, considering that SYNCORR is an algorithm based on NS and may thus become trapped in a local optimum, we continue with RFIXER if SYNCORR fails to repair. As demonstrated in TABLE III, SYNCORR+RFIXER achieves a more than 10% higher success rate of repair than SYNCORR alone on the experimental datasets. Further, if a repair tool more powerful than RFIXER becomes available, combining it with SYNCORR could achieve an even higher success rate; we leave this as future work.
III. REGEX SYNTHESIS ALGORITHM
In this section, we present the details of our synthesis algorithm TRANSREGEX. Before that, we first provide the background.
A. Background
Let Σ be a finite alphabet of symbols. The set of all words over Σ is denoted by Σ * . The empty word and the empty set are denoted by ε and ∅, respectively.
Regular Expression (Regex). Expressions ε, ∅, and a ∈ Σ are regular expressions; larger regular expressions are also formed from smaller ones r1 and r2 using the operators of union (r1|r2), concatenation (r1r2), conjunction (r1&r2), and counted repetition (r{m,n}, with m ≤ n). Besides, r?, r*, r+ and r{i} where i ∈ N are abbreviations of r{0,1}, r{0,∞}, r{1,∞} and r{i,i}, respectively. r{m,∞} is often simplified as r{m,}. The language L(r) of a regular expression r is defined inductively as follows: L(∅) = ∅, L(ε) = {ε}, L(a) = {a}, L(r1|r2) = L(r1) ∪ L(r2), L(r1r2) = L(r1)·L(r2), L(r1&r2) = L(r1) ∩ L(r2), and L(r{m,n}) = ∪m≤i≤n L(r)^i.
Fig. 3. The process of the algorithm S 2 RE.
B. The Main Algorithm
Our synthesis algorithm is shown in Algorithm 1, which aims to synthesize regexes from NL descriptions and examples. In detail, our algorithm first employs a NLP-based synthesizer S 2 RE to generate a regex r from the given NL description N L (line 1), which is introduced in Section III-C. Then, if r is consistent with the given positive and negative examples, TRANSREGEX outputs r (line 2). Otherwise, TRANSREGEX leverages two example-guided repairers to fix the incorrect regex r based on the provided positive and negative examples, which are described in Section III-D, and returns the repaired regex (lines 3-6).
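A minimal sketch of this control flow is given below; it is not the authors' code. The names synthesize_from_nl, syncorr_repair, and rfixer_repair are placeholder callables standing in for the S 2 RE model and the two example-guided repairers, and Python's re dialect is used as a stand-in for the regex language of Section III-A (so, e.g., the conjunction operator is not covered).

```python
# Illustrative sketch of the top-level synthesize-then-repair loop.
import re

def consistent(regex, positives, negatives):
    """True iff regex accepts every positive and rejects every negative example."""
    try:
        compiled = re.compile(regex)
    except re.error:
        return False
    ok_pos = all(compiled.fullmatch(p) for p in positives)
    ok_neg = all(compiled.fullmatch(n) is None for n in negatives)
    return ok_pos and ok_neg

def transregex(nl_description, positives, negatives,
               synthesize_from_nl, syncorr_repair, rfixer_repair):
    r = synthesize_from_nl(nl_description)               # step 1: NLP-based synthesis
    if consistent(r, positives, negatives):
        return r
    repaired = syncorr_repair(r, positives, negatives)   # step 2a: NS-based repair
    if repaired is not None and consistent(repaired, positives, negatives):
        return repaired
    return rfixer_repair(r, positives, negatives)        # step 2b: fallback repairer
```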
C. Regex Synthesis from Natural Language Descriptions
To synthesize a regex from the NL description, we build a seq2seq model with attention mechanism as our regex synthesis model S 2 RE. It consists of an encoder and a decoder. The encoder initializes the words in the NL sequences as vectors, then it encodes the vectors as hidden states, which represents the semantic representations of the NL sequences. The decoder generates the corresponding regex according to the latent representations from the encoder.
The training of our neural S 2 RE model consists of two stages, using two different strategies.
Maximum Likelihood Estimation (MLE): In the first stage, we use MLE to maximize the likelihood of mapping the NL description to corresponding regex: where D is the training set, p(r|N L) is the probability that the seq2seq model generates a regex r from a NL description N L. Policy Gradient: MLE may fail to consider the semantic equivalence of the regexes that might be different in syntax and the validity of the regexes (especially for complex and realistic datasets). Therefore, in the second stage, we gradually train our S 2 RE model via policy gradient [29] by rewarding the model according to the following two indicators.
• Semantic Correctness: following Park et al.'s work [13], we reward the S 2 RE model if it generates a regex that is semantically equivalent to the ground truth; that is, the semantic reward R C (r) is 1 if the regex r is semantically equal to the ground truth and 0 otherwise;
• Syntactic Validity: we also reward the S 2 RE model if it generates regexes that are valid 5 ; that is, the syntactic reward R V (r) is 1 if the regex r is valid and 0 otherwise.
Finally, the objective of the second stage is to maximize the expected weighted reward α R C (r) + β R V (r), where α and β are hyper-parameters. The S 2 RE model generates more correct and valid regexes than the previous models. However, it still cannot guarantee valid regexes for all input NL descriptions. To solve this problem, we reuse the structure of the S 2 RE model to build our invalid2valid model (wherein only R V is used). Specifically, the invalid2valid model is trained on 5,000 invalid/valid regex pairs, which are collected as follows:
• Get a valid regex randomly from the dataset StructuredRegex [25];
• Make it invalid by performing some minor changes on it: adding, deleting, or modifying 1-5 positions at random;
• If the changed regex is still valid, discard it.
The whole process of generating regexes from NL descriptions is shown in Fig. 3. The algorithm first uses our S 2 RE model to generate a regex. If the generated regex is invalid, it then quickly transforms this regex into a valid one using the pretrained invalid2valid model. The experiments show that, after the S 2 RE model and the invalid2valid model, we obtain 100% syntactically valid regexes.
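The two reward signals can be pictured with a small sketch like the one below. Here semantically_equivalent and is_valid_regex stand in for a DFA-equivalence check and a regex-syntax parser, respectively, and the weights alpha and beta are the hyper-parameters mentioned above with illustrative values only; the exact objective used in training is an assumption of this sketch.

```python
def policy_gradient_reward(generated, ground_truth,
                           semantically_equivalent, is_valid_regex,
                           alpha=0.5, beta=0.5):
    """Combined reward for the second training stage (illustrative sketch).

    R_C is 1 iff the generated regex is semantically equal to the ground truth;
    R_V is 1 iff the generated regex is syntactically valid.
    """
    r_c = 1.0 if semantically_equivalent(generated, ground_truth) else 0.0
    r_v = 1.0 if is_valid_regex(generated) else 0.0
    return alpha * r_c + beta * r_v
```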
D. Regex Repair from Examples
The regexes generated by S 2 RE may be incorrect yet very similar to the target regexes, differing only in subtle ways. In this section, we present our algorithm SYNCORR to repair these incorrect regexes. The key idea is to search, within the neighborhood of the input regex, for a better regex that accepts more positive examples and rejects more negative examples.
To start with, we define the neighborhood of a regex r and an evaluation criterion for it. Given a regex r, we define its neighborhood, denoted N (r), as the set of regexes that can be obtained by applying a transformation to r (the transformations are given later). In order to select a regex among a set of candidates, we define a measure f on r with respect to the positive examples P and the negative examples N as the fraction of the given examples that r classifies correctly (positive examples accepted plus negative examples rejected, over |P| + |N|); thus f = 1 exactly when r is consistent with all given examples.

The algorithm SYNCORR is shown in Algorithm 2. In order to facilitate neighborhood search, SYNCORR lets the highest level of rewriting rules l max range from 2 down to 0 (line 1). First, SYNCORR preprocesses the given regex r 0 with the given highest level l max and keeps the resulting regex in r (line 2). Next, it applies the transformations to r to obtain the neighborhood N (r), and unpreprocesses r and N (r) (lines 4-6). Then SYNCORR selects the regex with the maximum f value, denoted r m , from the neighborhood of r (line 7). After that, it compares the f values of the current regex r and the selected regex r m (line 8). If the selected regex r m has a higher f value, SYNCORR checks whether that value equals 1; if so, SYNCORR returns r m (line 9). Otherwise, to start the next iteration, the regex r m is preprocessed and assigned to r (line 10). If the selected regex r m does not have a higher f value, then r m may be a local maximum, so SYNCORR fails to repair the regex r 0 and breaks out of the loop (line 11). For each l max , the processing above runs until the stop conditions are met.

Rewriting rules. In order to preserve the integrity of the small sub-regexes (probably the correct parts) and reduce the search space, we define rewriting rules that abstract small sub-regexes; the resulting regexes are called the abstract forms of the original regexes. Specifically, each rule rewrites a special small regex into a unique symbolic node, where const denotes a string (in regex) consisting of characters in Σ (e.g., "abc"), r is [C] or a const, and a suffix indicates the level l of the rule. The rewriting rules are applied greedily and according to their level. In our implementation, we use meaningful names for some special regexes (e.g., <NUM>, <LET>, <CAP>, and <VOW> for [0-9], [A-Za-z], [A-Z], and [AEIOUaeiou], respectively) and keep the rewriting mappings in dictionaries (i.e., Dict l for level l).
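A compact sketch of the fitness measure and the neighborhood-search skeleton follows. It assumes f is the fraction of given examples the candidate classifies correctly, and it treats neighborhood, preprocess, and unpreprocess as caller-supplied placeholders for the operations described in this section; the iteration budget is an illustrative stop condition.

```python
import re

def fitness(regex, positives, negatives):
    """Fraction of examples that `regex` classifies correctly (1.0 = fully consistent)."""
    try:
        compiled = re.compile(regex)
    except re.error:
        return 0.0
    correct = sum(1 for p in positives if compiled.fullmatch(p))
    correct += sum(1 for n in negatives if not compiled.fullmatch(n))
    return correct / (len(positives) + len(negatives))

def syncorr(regex0, positives, negatives, neighborhood, preprocess, unpreprocess,
            max_iterations=50):
    """Neighborhood-search repair loop in the spirit of SynCorr (sketch)."""
    for l_max in (2, 1, 0):                       # highest rewriting level, from 2 down to 0
        current = preprocess(regex0, l_max)
        for _ in range(max_iterations):           # stop condition: iteration budget
            candidates = [unpreprocess(c) for c in neighborhood(current)]
            best = max(candidates, key=lambda c: fitness(c, positives, negatives))
            f_best = fitness(best, positives, negatives)
            if f_best <= fitness(unpreprocess(current), positives, negatives):
                break                             # local maximum: give up on this level
            if f_best == 1.0:
                return best                       # fully consistent with the examples
            current = preprocess(best, l_max)
    return None                                   # repair failed; caller may fall back to RFixer
```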
Preprocess and unpreprocess. Given a regex r and the highest level l max , preprocessing abstracts r (from left to right) according to the rewriting rules whose level l ranges from l max down to 0; rules with a higher level are applied first. Take r 0 in Fig. 4 as an example. If l max = 2, the rules with level l = 2 are applied first, but they leave r 0 unchanged because no rules with level l = 2 are applicable. Then the rules with level l = 1 are applied, yielding r 1 =(<SRVOW><SRNUM>){7,}. Finally, the rules with level l = 0 are applied, yielding the final preprocessed regex r =(<SRVOW><SRNUM>)<Q 7, >. If l max = 1, the same preprocessed regex r is returned. If l max = 0, the rewriting rules with level l = 0 are applied directly, yielding the final preprocessed regex (<VOW><S><NUM><S>)<Q 7, >. Unpreprocessing is the reverse of preprocessing.
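The abstraction step can be pictured as a greedy, dictionary-backed string rewrite. The rule set below is only an illustrative subset (the <NUM>, <LET>, <CAP>, and <VOW> symbols mentioned above), not the paper's full rule table, and the example regex is simplified relative to the one in Fig. 4.

```python
# Illustrative subset of level-0 rewriting rules: small character classes become symbolic nodes.
LEVEL0_RULES = {
    "[0-9]": "<NUM>",
    "[A-Za-z]": "<LET>",
    "[A-Z]": "<CAP>",
    "[AEIOUaeiou]": "<VOW>",
}

def preprocess_level0(regex):
    """Greedily abstract known sub-regexes and remember the mapping for unpreprocessing."""
    mapping = {}
    abstracted = regex
    for concrete, symbol in LEVEL0_RULES.items():
        if concrete in abstracted:
            abstracted = abstracted.replace(concrete, symbol)
            mapping[symbol] = concrete
    return abstracted, mapping

def unpreprocess_level0(abstracted, mapping):
    """Reverse of preprocessing: expand every symbolic node back to its sub-regex."""
    concrete = abstracted
    for symbol, original in mapping.items():
        concrete = concrete.replace(symbol, original)
    return concrete

# Example: ([AEIOUaeiou][0-9]){7,}  ->  (<VOW><NUM>){7,}  and back.
abstract_form, dictionary = preprocess_level0("([AEIOUaeiou][0-9]){7,}")
assert unpreprocess_level0(abstract_form, dictionary) == "([AEIOUaeiou][0-9]){7,}"
```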
Stop conditions. The maximum number of iterations and the maximum running time of the program can be used, either independently or in combination, as stop conditions.
Transformation. We observe that most of the incorrect regexes generated by S 2 RE can be made equivalent to the target regexes with only minor modifications. To that end, we design a series of transformations on regexes, listed below, where elements denotes the small regexes to which the rewriting rules can apply, and generalized elements denotes the regexes containing at least one pair of brackets.
• Binary Element Insertion: this transformation inserts a binary element (i.e., a disjunction, concatenation, or conjunction) from a candidate set into the current regex; a sketch of this transformation is given after this list. In our implementation, we take the elements collected in the dictionaries as the candidate set. To search the neighborhood quickly, we perform the transformations in parallel, as there are no data races between them.
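As a simplified illustration of how such a transformation enumerates neighbors, the sketch below combines the current regex with each candidate element through a binary operator at the root, treating regexes as plain strings. The real transformation works on the regex's syntax tree and can insert the binary node at any position; the conjunction operator "&" is written here in the paper's regex dialect and is not supported by Python's re module.

```python
def binary_insertions_at_root(regex, candidate_elements):
    """Neighbors obtained by combining the regex with a candidate element via a binary operator.

    Only root-level combinations are shown; the actual transformation can insert
    the binary node anywhere in the regex's syntax tree.
    """
    neighbors = []
    for element in candidate_elements:
        neighbors.append(f"({regex})({element})")   # concatenation: element appended
        neighbors.append(f"({element})({regex})")   # concatenation: element prepended
        neighbors.append(f"({regex})|({element})")  # disjunction
        neighbors.append(f"({regex})&({element})")  # conjunction (intersection)
    return neighbors

# Example: combining "(<VOW>)*" with the candidate element "<NUM>" yields 4 neighbor strings.
print(binary_insertions_at_root("(<VOW>)*", ["<NUM>"]))
```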
IV. EVALUATION
We implemented TRANSREGEX in Python, and conducted experiments on a machine with a 16-core Intel Xeon CPU E5620 @ 2.40GHz with 12MB cache and 24GB RAM, running the Windows 10 operating system. Under these experimental settings, we designed our experiments to answer the following research questions:
• RQ1: Can the S 2 RE model generate correct and valid regexes from natural language descriptions? ( §IV-C)
• RQ2: Can SYNCORR repair incorrect regexes from examples? ( §IV-D)
• RQ3: Can TRANSREGEX synthesize regexes accurately? ( §IV-E)
• RQ4: Can TRANSREGEX synthesize regexes efficiently? ( §IV-F)
A. Datasets
In the experiments, we evaluate TRANSREGEX on three public datasets: KB13 [10], NL-RX-Turk [11], and StructuredRegex [25]. Among them, KB13 consists of 824 pairs of NL descriptions and corresponding regexes constructed by regex experts. NL-RX-Turk includes 10,000 pairs of NL descriptions and regexes collected through crowdsourcing. StructuredRegex comprises 3,520 long English descriptions, 2.9 to 4.0 times longer than those in the first two datasets, each paired with a complex regex and 6 positive/6 negative examples collected through crowdsourcing. As our approach requires examples, which are absent from the first two datasets, we adopt the corresponding 10 positive and 10 negative examples for the first two datasets provided by Ye et al. [24]. Specifically, the 10 positive examples are enumerated by randomly traversing the deterministic finite automaton (DFA) of the given regex, and the 10 negative examples are synthesized by stochastically traversing the DFA of the negation of the given regex.
C. RQ1: Effectiveness of S 2 RE
To answer the first question, we compare our algorithm S 2 RE with existing NLP-based models. The evaluation results on validity are summarized in TABLE II. The validity of synthesized regexes is crucial: it guarantees the quality of regex synthesis from NL and, in our approach, it ensures that the regex synthesized from NL is a valid input to example-guided regex repair. The results show that the validity of existing tools (e.g., DEEP-REGEX (Locascio et al.) and SOFTREGEX) is unsatisfactory on the last dataset, StructuredRegex, which is much more complex than the first two. As we can see, less than half of the regexes generated by DEEP-REGEX (Locascio et al.) are valid, and the most advanced NLP-based model, SOFTREGEX, achieves 90.6%. By contrast, our model S 2 RE achieves a 100% validity ratio, because it utilizes both the syntactic validity reward and the invalid2valid model.

Summary to RQ1: S 2 RE achieves similar or better accuracy than the state-of-the-art NLP-based models. Meanwhile, S 2 RE synthesizes more valid regexes. The advantage of the high validity of S 2 RE becomes more obvious on complex datasets.
D. RQ2: Effectiveness of SynCorr

We further demonstrate the advantages and disadvantages of RFIXER and SYNCORR through a few cases.
E. RQ3: Effectiveness of TransRegex
To answer this question, we compared TRANSREGEX with six NLP-based baselines and four multi-modal baselines. We can see from TABLE I that, on average, the NLP-based works performed worse than the multi-modal works on all three datasets. In general, the accuracy on the first dataset, KB13, is much higher than that on the other two datasets for the NLP-based methods, and the accuracy achieved by the NLP-based methods is up to 30% lower than that achieved by the multi-modal methods on average. In particular, on the first two datasets, the accuracy of the NLP-based works ranges from 38.6% to 78.2%, compared with 68.9% to 86.4% achieved by the existing multi-modal works. In comparison, TRANSREGEX achieved approximately 10% higher accuracy than these baselines, reaching 90.3% to 98.6% on the first two datasets. The superiority of our work is more obvious on the last dataset, which is much more complex than the first two, where TRANSREGEX achieves 67.4% accuracy. Note that the existing multi-modal works follow a two-step, sketch-based paradigm in which errors in the sketches generated in the first step propagate to the final regexes; in other words, the incorrectness of the sketches is inherited by the regexes generated in the next step. TRANSREGEX does not have this drawback: although TRANSREGEX is also a two-step algorithm, its second step is not only unaffected by the errors of the first step, but also specifically corrects those errors, guided by the given examples.
Summary to RQ3: TRANSREGEX achieves 17.4%, 35.8%, and 38.9% higher accuracy than the NLP-based works on the three datasets, respectively, and 10% to 30% higher accuracy than the state-of-the-art multi-modal works on all three datasets. The experimental results also indicate that TRANSREGEX utilizes natural language and examples in a more effective way than the other multi-modal works.

F. RQ4: Efficiency of TransRegex

The running-time results are summarized in Table VI. In general, TRANSREGEX takes longer to synthesize regexes on the last dataset than on the first two, and the NLP-based baselines take less time than TRANSREGEX on average. In particular, TRANSREGEX takes an average of 5.822 seconds and 3.944 seconds on the first two datasets, compared with 1.104 to 3.578 seconds for the baselines. On the last dataset, TRANSREGEX takes longer due to the complexity of the dataset. Considered together with accuracy, TRANSREGEX achieves an accuracy (67.4%) that doubles the accuracy (28.5%) achieved by S 2 RE, at the cost of only around 10 more seconds of running time. Table VI also reveals the rationality of our design: adopting SYNCORR accelerates the repair process in some cases, while RFIXER takes care of the rest.

Summary to RQ4: TRANSREGEX can synthesize regexes efficiently. Especially when considered together with accuracy, TRANSREGEX takes an average of 3.944 to 5.822 seconds to achieve around 20% higher accuracy on the simpler datasets, and gains around 30% higher accuracy at the cost of about 10 more seconds on the more complex dataset.
V. THREATS TO TRANSREGEX'S VALIDITY

TRANSREGEX is not guaranteed to generate correct regexes for each benchmark, mainly due to the following aspects: • Uncharacteristic examples.
VI. RELATED WORK

A. Regex Synthesis
Regex synthesis from examples. The problem of automatically synthesizing regexes from examples has been explored in many domains [4], [16]-[20], [30], [31]. AlphaRegex [19] is a search-based algorithm for synthesizing simple regexes for introductory automata assignments; it exploits over- and under-approximations to effectively prune a large search space. However, all the regexes produced by AlphaRegex are over alphabets of size 2. RegexGenerator++ [4], [30] is another example-based synthesis approach. One of the main issues in the above example-based techniques is the quality of the synthesis, i.e., whether the result generalizes correctly to unseen examples. Specifically, if users cannot provide sufficient and characteristic examples, the synthesized regexes will be under-fitting or over-fitting.

Regex synthesis from NL. Several works from the Natural Language Processing (NLP) community address the problem of generating regexes from NL specifications [10]-[13]. Kushman and Barzilay [10] introduced a technique for learning a probabilistic combinatory categorial grammar model to parse an NL description into a regex. To avoid domain-specific feature extraction, Locascio et al. [11] described the DEEP-REGEX model based on the standard sequence-to-sequence (seq2seq) model, which treats the problem of generating regexes from NL descriptions as a direct machine translation task. To address the problem that the DEEP-REGEX model may not generate semantically correct regexes, the SEMREGEX model [12], based on reinforcement learning, was presented; it leverages DFA equivalence as a reward function to encourage the model to generate semantically correct regexes. To speed up the training phase of the SEMREGEX model, Park et al. [13] devised the SOFTREGEX model, which determines the equivalence of two regexes using deep neural networks.
There are three major bottlenecks in existing NLP-based techniques that affect the quality of synthesis: (i) Ambiguity and imprecision of NL: ambiguity of NL results in predicting a regex that embeds other meanings, while imprecision of NL affects the correctness of the synthesis; (ii) Unknown and rare words: unknown or rare words in NL descriptions lead to failures in generating correct regexes; (iii) Seq2seq model: seq2seq-based approaches can only synthesize regexes similar in shape to the training data.
Regex synthesis from NL and examples. Both of these works [24], [25] adopt a two-step paradigm: they first parse the NL description into a sketch, and then a synthesizer searches the regex space defined by the sketch and returns a concrete regex that is consistent with the given examples. However, they have an apparent limitation: incorrect sketches generated in the first step subsequently compromise the final regexes. In other words, the incorrectness of the sketches is inherited by the synthesized regexes in the next step. Our work overcomes this limitation: the second step of TRANSREGEX (i.e., example-guided regex repair) fixes the incorrect regex using the examples if needed. In this manner, inconsistencies inherited from our first step (i.e., NLP-based regex synthesis) are fixed in our second step.
B. Regex Repair
Regex repair from examples. Several works [20], [26]-[28] target repairing regexes from examples. We discuss their two main paradigms. In the first paradigm, works consider either positive or negative examples only. Li et al. [26] proposed ReLIE, which can modify complex regexes by rejecting newly-input negative examples. By contrast, Rebele et al. [27] proposed a novel way to generalize a given regex so that it accepts the given positive examples. Works in the second paradigm take both positive and negative examples into consideration. Pan et al. [28] designed RFIXER, a tool for repairing incorrect regexes using both kinds of examples. It took advantage of skeletons of regexes (i.e., sketches) to effectively prune the search space, and it employed SMT solvers to efficiently explore the sets of possible character classes and numerical quantifiers. Our work applies RFIXER in our regex synthesis from NL and examples.
Li et al. [20] described the algorithm REPAIRINGRE, based on Neighborhood Search (NS), to repair incorrect or ReDoS-vulnerable regexes from positive and negative examples. Similar to REPAIRINGRE, our algorithm SYNCORR also uses NS to repair regexes; the difference is that REPAIRINGRE uses automaton-directed repair, whereas we use regex-directed repair.
Like the above example-based synthesis algorithms, these repair algorithms may also produce under-fitting or over-fitting results. To alleviate under-fitting/over-fitting, our SYNCORR leverages rewriting rules for sub-regex abstraction in order to preserve the integrity of small sub-regexes.
C. Program Synthesis
Programming by example (PBE). PBE techniques have been the subject of research for the past few decades [32] and are a successful paradigm for program synthesis, allowing end-users to construct and run new programs by providing examples of the intended program behavior [33]. Recently, PBE techniques have been successfully used for string transformations [34]-[36], data filtering [14], data structure manipulations [37], [38], table transformations [39], [40], SQL queries [41], [42], and MapReduce programs [43], [44].

Programming by NL (PBNL). There has been a lot of progress in PBNL [45]. Specifically, several techniques have been proposed to translate NL descriptions into Python [46], [47], SQL queries [48]-[50], shell scripts [51], [52], spreadsheet formulas [53], test oracles [54], JavaScript function types [55], and Java expressions [56].

Program synthesis from NL and examples. Since techniques that synthesize programs from both NL and examples can overcome the shortcomings of PBE and PBNL techniques while providing a more natural and friendly interface to users, in recent years they have been widely used in several areas, for example, string manipulation programs [23], [57] and program sketches [58]. In this paper, we focus on an important subtask of this problem: synthesizing regexes from both NL and examples.
VII. CONCLUSION
We propose an automatic framework, TRANSREGEX, for synthesizing regular expressions from both natural language descriptions and examples. To the best of our knowledge, TRANSREGEX is the first to treat the NLP-and-example-based regex synthesis problem as the problem of NLP-based synthesis followed by regex repair. For NLP-based synthesis, we devise a two-phase algorithm, S 2 RE, which generates more valid regexes while achieving similar or higher accuracy than the state-of-the-art NLP-based models. For regex repair, we present a novel algorithm, SYNCORR, which leverages NS algorithms to guide the search for a target regex and uses rewriting rules to alleviate under-fitting/over-fitting and efficiently reduce the search space. The evaluation results demonstrate that the accuracy of TRANSREGEX is 17.4%, 35.8%, and 38.9% higher than that of the NLP-based works on the three publicly available datasets, respectively. Further, TRANSREGEX achieves 10% to 30% higher accuracy than the state-of-the-art multi-modal works on all three datasets. The evaluation results also indicate that TRANSREGEX utilizes natural language and examples in a more effective way.
Hybrid & El Tor variant biotypes of Vibrio cholerae O1 in Thailand
Background & objectives: El Tor Vibrio cholerae O1 carrying ctxBC trait, so-called El Tor variant that causes more severe symptoms than the prototype El Tor strain, first detected in Bangladesh was later shown to have emerged in India in 1992. Subsequently, similar V. cholerae strains were isolated in other countries in Asia and Africa. Thus, it was of interest to investigate the characteristics of V. cholerae O1 strains isolated chronologically (from 1986 to 2009) in Thailand. Methods: A total of 330 V. cholerae O1 Thailand strains from hospitalized patients with cholera isolated during 1986 to 2009 were subjected to conventional biotyping i.e., susceptibility to polymyxin B, chicken erythrocyte agglutination (CCA) and Voges-Proskauer (VP) test. The presence of ctxA, ctxB, zot, ace, toxR, tcpAC, tcpAE, hlyAC and hlyAE were examined by PCR. Mismatch amplification mutation assay (MAMA) - and conventional- PCRs were used for differentiating ctxB and rstR alleles. Results: All 330 strains carried the El Tor virulence gene signature. Among these, 266 strains were typical El Tor (resistant to 50 units of polymyxin B and positive for CCA and VP test) while 64 had mixed classical and El Tor phenotypes (hybrid biotype). Combined MAMA-PCR and the conventional biotyping methods revealed that 36 strains of 1986-1992 were either typical El Tor, hybrid, El Tor variant or unclassified biotype. The hybrid strains were present during 1986-2004. El Tor variant strains were found in 1992, the same year when the typical El Tor strains disappeared. All 294 strains of 1993-2009 carried ctxBC ; 237 were El Tor variant and 57 were hybrid. Interpretation & conclusions: In Thailand, hybrid V. cholerae O1 (mixed biotypes), was found since 1986. Circulating strains, however, are predominantly El Tor variant (El Tor biotype with ctxBC).
Vibrio cholerae, the causative agent of the severe watery diarrhoeal disease cholera, comprises 206 serogroups (O1-O206) based on the antigenic diversity of their outer membrane lipopolysaccharides 1,2 . Strains of the O1 serogroup are divided into two biotypes, i.e., classical and El Tor, according to their phenotypic differences. The classical strains are sensitive to 50 units of polymyxin B and to Mukerjee's type IV bacteriophage, while the El Tor strains are generally dually resistant, with the exception of some strains isolated in southern Bangladesh 3,4 . The El Tor strains are more adapted and resilient in the environment, and cause a higher infection-to-case ratio and more asymptomatic carriers than their classical counterpart 5 . Clinical manifestations of cholera caused by classical V. cholerae are more severe and prolonged than those caused by El Tor 6,7 . This is attributable to subtle differences in the cholera toxin (CT) encoded by the ctxAB genes of V. cholerae. Each of the V. cholerae O1 biotypes can be divided into three serotypes, i.e., Ogawa, Inaba, and Hikojima. Since 1817, the world has experienced seven cholera pandemics caused by V. cholerae O1. Strains of the classical biotype were considered the causative agents of the first six pandemics, while the 7 th cholera pandemic, which started in 1961 from Sulawesi Island, Indonesia, was caused by El Tor V. cholerae O1. Since then, El Tor V. cholerae replaced the classical biotype as the sole cause of cholera epidemics until 1982, when there was a re-emergence of classical V. cholerae isolated from patients during an epidemic in Bangladesh 8-10 . Both biotypes co-existed in Bangladesh until the classical vibrios became extinct in 1993. Until 1991, only toxigenic V. cholerae O1 strains caused cholera epidemics and pandemics. In 1992, a large cholera outbreak was reported from southern India and subsequently spread rapidly to several neighbouring countries in Asia, but did not spread to any other continent. The epidemic organism was a non-O1 V. cholerae which could not be allocated to any of the pre-existing non-O1 serogroups. Subsequently, the organism was designated serogroup O139, synonym Bengal, in recognition of the place of origin 11-13 .
New V. cholerae O1 variants carrying mixed classical and El Tor phenotypes were first isolated from hospitalized patients with severe watery diarrhoea in Matlab, Bangladesh, in 2002 3 . These isolates could not be allocated into the classical or El Tor biotype using conventional biotyping tests. Genotypically, these were found to carry the El Tor genome backbone including El Tor specific gene clusters: VSP-I and -II and RTX, indicating that these belonged to El Tor lineage. These isolates carried different combinations of alleles of tcpA and CTx prophage repressor gene (rstR) 4 16 . This nomenclature has been followed in this study.
The 7 th pandemic cholera arrived in Thailand in 1963, when the El Tor strains completely replaced the classical vibrios and established endemicity 17 . The O139 Bengal was first isolated from hospitalized patient with severe watery diarrhoea in Thailand in 1993 18 . The O139 serogroup completely disappeared from Thailand since 1996 17 . Because it is known that classical V. cholerae strains with ctxB C inflicted more severe symptoms than the typical El Tor infection 6,16 and because there had been a resurgence of cases of severe watery diarrhoea that required hospitalization during 1999-2002, it was of interest to make an insight into both phenotypic and genotypic characteristics of V. cholerae O1 isolated from cholera patients in different years in Thailand. Primer sequences used in PCRs are shown in Table III 19 . Amplification mixture (25 µl) for ctxB-MAMA-PCR and rstR-PCR composed of 1 µl bacterial genomic DNA template, 2.5 µl 10x PCR buffer, 2 µl each of 2.5 mM deoxynucleotide triphosphate (Fermentas, Vilnius, Lithuania), 2 µl of 25 mM MgCl 2 , 2 µl of 10 µM of individual forward and reverse primers (Bio Basic Inc., Toronto, Canada), 0.5 units Taq DNA polymerase (Fermentas) and sterile ultra pure distilled water. Amplification of other genes was essentially the same as described previously 19 . The PCR products were analyzed by using 1.5 per cent agarose (Seakem LE, BMA, Glendate, CA, USA) gel electrophoresis and ethidium bromide staining (Sigma Chemical Co., USA). A Gel Doc 2000 (Bio-Rad, CA, USA) was used for DNA band documentation.
Results & Discussion
All of the 330 V. cholerae O1 Thai clinical strains collected over 24 years (1986-2009) were found to carry ctxA, ctxB, zot, ace, toxR, tcpA E and hlyA E which verified genetically their toxin producing capacity and epidemic potential. Two hundred and sixty six strains were prototype El Tor (resistant to the polymyxin B, and positive for CCA and VP test) and the remaining 64 strains were not biotypable (Table I). Identification of rstR by conventional PCR showed that the 36 strains of 1986-1992 carried either the El Tor rstR (rstR E ) or combination of the El Tor and classical rstR (rstR E/C ) ( Table I). MAMA-PCR for ctxB of these isolates revealed that 18 (50%) carried ctxB E . Only 15 of these 18 strains had prototype El Tor phenotype (resistant to 50 units of polymyxin B, and positive for CCA and VP test) indicating that they were typical El Tor biotype. The other 3 strains, although carrying ctxB E , appeared to be hybrid biotype as they possessed mixed phenotypes (Tables I and IV). There were 11 strains of 1986-1992 (31%) that carried ctxB E/C . Among these only one strain had mixed classical and El Tor phenotypes implying that this was hybrid biotype. The remaining 10 with ctxB E/C , however, could not be assigned into any of the redefined biotype scheme 16 although these showed conventional El Tor phenotype (Tables I and IV). The remaining seven (19%) of the 1986-1992 (all were isolated in 1992) strains carried ctxB C ; four of these had conventional El Tor phenotypes implying that these were El Tor variant while the other three had mixed phenotypes, and were hybrid (Table I).
These data indicate the presence of the hybrid biotype of V. cholerae O1 in Thailand since 1986 or even before, co-existing with the typical El Tor strains.
The V. cholerae O1 Thailand strains that carried ctxB E /rstR E , i.e., typical El Tor strains, were found for the last time in 1992 in this V. cholerae O1 collection, which was the same year when strains of the El Tor variant biotype (strains 30-33) carrying ctxB C /rstR E/C emerged in the country (Table I). It is noteworthy that in 1992 the epidemic V. cholerae O139 strains emerged in southern India 11 . The Figure shows MAMA-PCR results of representative strains of V. cholerae chronologically isolated in Thailand, i.e., ctxB C (Fig. A) and ctxB E (Fig. B).
The V. cholerae O1 Thailand strains of 1993-2009 (294) were all found to carry ctxB C and either rstR C or rstR E/C . The majority of these strains (237 strains), however, were El Tor variants, as their phenotypes were typical El Tor. The minority (57 strains) belonged to the hybrid biotype because they had mixed phenotypes of classical and El Tor (Table I). The 1986-2009 Thailand strains with the hybrid biotype could be arbitrarily classified into 13 different hybrid groups, 1-13 (Table IV). During 1986-1992, the biotypes of the 36 V. cholerae O1 Thailand strains were 15 prototype El Tor, 7 hybrid (groups 1-5), 4 El Tor variant, and 10 unclassified (unclassified groups 1 and 2) (Tables I and IV). The 294 strains of 1993-2009 belonged to the hybrid and El Tor variant biotypes (Tables I and IV).
The V. cholerae O1 of hybrid biotype was isolated from patients in India in 1991 when typical V. cholerae classical and El Tor biotypes co-existed suggesting the horizontal CTx prophage exchange between strains of the two principal biotypes in order for the infecting strains to be more adapted to the host hostile intestinal environment 15 which conformed to the more severe cholera symptoms in the afflicted hosts in the recent years 3,22,24 . It is noteworthy, however, that the classical V. cholerae O1 disappeared from Thailand since 1963 25 when the 7 th cholera pandemic caused by typical El Tor strains first hit the Kingdom's population. There has been no report on the period of co-existing classical and El Tor strains during 1986-2009 within Thailand. Our finding that the V. cholerae hybrid biotype could be detected among strains of 1986 suggested that there might be a re-emergence of the classical V. cholerae before or during 1986 or there might be other confounding molecular mechanism(s) in the shifting of the characteristics of V. cholerae bacteria in Thailand. (Table I).
Between 1992 and 1993, the V. cholerae O1 strains carrying ctxB C predominated in Kolkata, India 15 and Thailand (this study). Thus, there seemed to be incomprehensible event of genetic evolution of the V. cholerae yielding strains of mixed traits/phenotypes of the two authentic biotypes during this period. After 1994, isolates of V. cholerae O1 in Kolkata, India, seemed to carry only ctxB C ; thus these were El Tor variants or hybrids (no phenotypes were given to define the biotype) 16 . Similarity was found among the Thailand strains of this study, however, two years earlier than the Kolkata's series. All of the Thai strains after 1992 carried ctxB C of which 57 (19%) were hybrid biotype and 237 strains (81%) were El Tor variants according to the conventional biotyping method and MAMAand conventional-PCR determinations. In Punjab and Haryana, northern India, where a re-emergence of classical V. cholerae has not been reported, the V. cholerae hybrid biotype were also found in 2007 (80% of the isolates) 26 . As has been mentioned earlier, many V. cholerae isolates of several other countries in Asia and Africa were also found to be biotype hybrid/El Tor variant 15 indicating that the El Tor V. cholerae bacteria, regardless of the geographical areas, tend to evolve for acquisition of the classical CTx prophage. This phenomenon will have impact, more or less, on the treatment of cholera, public health measures, as well as vaccine development.
Maternal Deaths from COVID-19 in Brazil: Increase during the Second Wave of the Pandemic
Objective To compare death rates by COVID-19 between pregnant or postpartum and nonpregnant women during the first and second waves of the Brazilian pandemic. Methods In the present population-based evaluation of data from the Sistema de Informação da Vigilância Epidemiológica da Gripe (SIVEP-Gripe, in the Portuguese acronym), we included women with acute respiratory distress syndrome (ARDS) due to COVID-19: 47,768 in 2020 (4,853 obstetric versus 42,915 nonobstetric) and 66,689 in 2021 (5,208 obstetric versus 61,481 nonobstetric), and estimated the frequency of in-hospital death. Results We identified 377 maternal deaths in 2020 (first wave) and 804 in 2021 (second wave). The death rate increased 2.0-fold for the obstetric (7.7 to 15.4%) and 1.6-fold for the nonobstetric group (13.9 to 22.9%) from 2020 to 2021 (odds ratio [OR]: 0.52; 95% confidence interval [CI]: 0.47–0.58 in 2020 and OR: 0.61; 95%CI: 0.56–0.66 in 2021; p < 0.05). In women with comorbidities, the death rate increased 1.7-fold (13.3 to 23.3%) and 1.4-fold (22.8 to 31.4%) in the obstetric and nonobstetric groups, respectively (OR: 0.52; 95%CI: 0.44–0.61 in 2020 to OR: 0.66; 95%CI: 0.59–0.73 in 2021; p < 0.05). In women without comorbidities, the mortality rate increased more for nonobstetric (2.4 times; 6.6 to 15.7%) than for obstetric women (1.8 times; 5.5 to 10.1%; OR: 0.81; 95%CI: 0.69–0.95 in 2020 and OR: 0.60; 95%CI: 0.58–0.68 in 2021; p < 0.05). Conclusion There was an increase in maternal deaths from COVID-19 in 2021 compared with 2020, especially in patients with comorbidities. Death rates were even higher in nonpregnant women, with or without comorbidities.
Introduction
Since the first case of COVID-19 was notified in Brazil on February 26, 2020, the pandemic caused by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) has spread out at an accelerated pace. According to the Brazilian Ministry of Health, from the beginning of the pandemic up to December 09, 2021, a total of 21,184,824 people were infected by SARS-CoV-2, and 616,691 died in Brazil. 1 In the face of a viral pandemic of unpredictable and unknown evolution, the primary concern among obstetricians was whether pregnancy could be a risk factor for severe COVID-19 outcomes, as was seen for respiratory disease caused by the influenza virus in the past decades. 2,3 The first Brazilian studies using 2020 data from the Acute Respiratory Distress Syndrome (ARDS) Surveillance System (SIVEP-GRIPE, in the Portuguese acronym) reported many maternal deaths. However, they did not define whether pregnant women have a higher risk of severe outcomes than the general female population. Furthermore, early data on maternal mortality from COVID-19 were scarce and limited. [4][5][6][7] In 2020, the peak of the Brazilian pandemic occurred in the 30 th epidemiological week, when 319,653 new cases of SARS-CoV-2 infections were reported in one week. After a short period of slowdown of the pandemic in late 2020, Brazil experienced a rapid acceleration in 2021, reaching 539,903 new cases in the 12 th epidemiological week of 2021. 1 Although official obstetric data from the first and second waves have not been fully consolidated in Brazil, we have seen an increase in COVID-19 serious outcomes in pregnant women at our healthcare center, a referral hospital for high-risk pregnancy care that covers a population of > 6 million people in the Southeast of Brazil (unpublished data).
In the present study, we aim to compare death rates due to ARDS by COVID-19 in pregnant or postpartum and nonpregnant women during the first and second waves of the Brazilian pandemic.
Methods
We conducted an exploratory analysis of death rates due to ARDS by COVID-19 in women aged between 15 and 49 years old with SARS-CoV-2 infection confirmed by real-time polymerase chain reaction (RT-PCR) or serological antibodies. We compared pregnant and postpartum (up to 45 days after delivery) women, namely the obstetric group, with nonpregnant and nonpostpartum women, who composed the nonobstetric group. We collected data from the SIVEP-GRIPE 8 from the 8 th to the 53 rd epidemiological weeks of 2020 (February 26, 2020, to January 2, 2021) 9 and from the 1 st to the 26 th epidemiological weeks of 2021 (January 3 to June 30, 2021). 10 Outcomes were defined as crude rates of death or cure among persons included in the SIVEP-GRIPE Brazilian databank. Women were also categorized according to the presence or absence of the following comorbidities: chronic respiratory disease, cardiovascular disease, diabetes, pregestational obesity, and/or other conditions (immune deficiency, hematologic disease, hepatopathy, genetic syndrome, chronic kidney pathology, neurologic disorder).
The detailed database used in the present study is available at the Mendeley Data repository. 9,10 We evaluated public, de-identified data from a national database, which did not require prior ethical committee approval.
We compared the mean age between the two groups using the Student t-test, and we used the odds ratio (OR) with a 95% confidence interval (CI) to compare mortality. We used the software EPI-Info version 7.2.2.16 (Centers for Disease Control and Prevention, Atlanta, GA, USA).
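For readers who wish to reproduce the basic comparison, the odds ratio and a Wald-type 95% confidence interval can be computed from a 2x2 table of deaths and survivals as in the hedged Python sketch below. The counts shown are placeholders only, not the SIVEP-Gripe figures, and this is an illustration rather than the software used by the authors (Epi-Info was used for the actual analysis).

```python
import math

def odds_ratio_ci(deaths_a, survivors_a, deaths_b, survivors_b, z=1.96):
    """Odds ratio of death in group A versus group B with a Wald-type 95% CI."""
    or_estimate = (deaths_a * survivors_b) / (deaths_b * survivors_a)
    se_log_or = math.sqrt(1 / deaths_a + 1 / survivors_a + 1 / deaths_b + 1 / survivors_b)
    lower = math.exp(math.log(or_estimate) - z * se_log_or)
    upper = math.exp(math.log(or_estimate) + z * se_log_or)
    return or_estimate, lower, upper

# Placeholder counts only: deaths/survivors in group A (e.g., obstetric) vs. group B.
print(odds_ratio_ci(deaths_a=30, survivors_a=170, deaths_b=90, survivors_b=310))
```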
Results
We identified 47,768 women with ARDS by COVID-19 in 2020, of which 4,853 were from the obstetric group and 42,915 were nonobstetric. In the first 6 months of 2021, we identified a total of 66,689 female patients, of which 5,208 were from the obstetric group and 61,481 were from the nonobstetric group (►Table 1).
Considering the concurrent medical conditions, we observed an OR increase from 0.52 (95%CI: 0.44-0.61) in 2020 to 0.66 (95%CI: 0.59-0.66) in 2021, which did not happen in the group of women with no comorbidities. We observed that the increase in deaths was more pronounced in the obstetric group (1.7-fold; from 13.3% in 2020 to 23.3% in 2021) than in the nonobstetric group (1.4-fold; from 22.8 to 31.4%). However, the inverse was observed for women without comorbidities, in whom the increase in the number of deaths was more pronounced in the nonobstetric group (2.4-fold; from 6.6% in 2020 to 15.7% in 2021) compared with the obstetric group (1.8-fold; from 5.5% in 2020 to 10.1% in 2021).
As illustrated in Figure 1, there was a marked increase in the maternal mortality ratio (MMR) by COVID-19 in 2021 in almost all Brazilian states. Nonetheless, this increase was most prominent in the North of Brazil, specifically in the states of Amazonas and Roraima, the epicenter of the P.1 emergence.
Moreover, nonpregnant women enrolled in the present study had a significantly higher mean age than pregnant and postpartum women (p < 0.0001, t-test; data not shown in tables). This difference was observed both in the 2020 (32 versus 29 years old) and 2021 study periods (38 versus 29 years old).
Main Findings
The main findings of the present study were: 1) maternal death rates more than doubled in the second pandemic wave in 2021 (n = 804), even considering a shorter period that corresponded to nearly half of the one analyzed in the first wave in 2020 (n = 377); 2) there was a remarkable increase in the maternal mortality ratio by COVID-19 in 2021 in almost all Brazilian states, after the emergence of the P.1 variant; 3) this increase in mortality was prominent in pregnant or postpartum women with comorbidities; 4) in both periods, deaths were higher in nonpregnant women than in pregnant or postpartum women.
Results in Context with the Scientific Literature So Far
When interpreting any evidence of the impact of COVID-19 on pregnancy, some considerations should be made. Studies focusing on the effects of COVID-19 on pregnant women are often based on symptomatic patients, thus underestimating the rates of unfavorable outcomes in women without COVID-19. Equally scant is the evidence from studies comparing pregnant with nonpregnant women with severe forms of COVID-19. [12][13][14] Our results are in agreement with data from the literature showing that COVID-19 is a sufficient factor for serious clinical outcomes in pregnant/postpartum women and for worse neonatal outcomes. 15 A prospective cross-sectional study conducted at the beginning of the pandemic with a smaller sample size showed an increase in adverse neonatal outcomes in pregnant women with COVID-19 compared with pregnant women without COVID-19, especially in pregnant women with comorbidities. 16 Although the database we used does not provide information on pregnant women without COVID-19, we show a significant rate of maternal deaths by COVID-19, especially in patients with some comorbidity.
A meta-analysis conducted by Khalil et al. 17 showed that maternal intensive care admission due to COVID-19 was higher in cohorts with higher comorbidity rates. A small cohort of pregnant patients with initially asymptomatic COVID-19 showed that the development of severe disease appears similar to that found in nonpregnant women. 14 In our study, the mortality rates of nonpregnant women were higher than those of the obstetric groups in all conditions analyzed. Even so, the increase in maternal mortality from 2020 to 2021 that we found is an important finding, since pregnant women constitute a vulnerable group with restricted access to medications and generally do not participate in clinical research to combat COVID-19, such as vaccine trials.
Comorbidity appeared to be a more significant risk factor for COVID-19 in the nonobstetric group than in its counterpart. Moreover, nonpregnant women enrolled in the present study had a significantly higher mean age than pregnant and postpartum women. We observed this difference in the 2020 (32 versus 29 years old) and 2021 study periods (38 versus 29 years old). It is well-known that aging is associated with worse COVID-19 clinical outcomes, as was demonstrated by Khalil et al., 17 who indicated that maternal intensive care admission was higher in cohorts with maternal age > 35 years. In our study, the average age difference between the obstetric and nonobstetric groups may be a confounding factor on the influence of comorbidities in older women from the obstetric group. An extension of the present study, adjusting the effect of age and comorbidities in the analyzed groups, is currently being outlined by our research team.
Recently, the world has followed with concern the healthcare system collapsing in the city of Manaus, the capital of the state of Amazonas, due to the upsurge of the COVID-19 pandemic. The raised hypothesis of the emergence of a more transmissible variant was confirmed by Faria et al., 18 who identified a novel SARS-CoV-2 variant of concern in samples collected in November and December 2020 in Manaus. The so-called P.1 lineage acquired 17 mutations, including a trio in the spike protein (K417T, E484K, and N501Y). These researchers also estimated that the P.1 lineage may be more transmissible and more likely to evade protective immunity elicited by previous infection with non-P.1 lineages.
Although the SIVEP-GRIPE database does not provide information on the virus variant, the acceleration of the Brazilian pandemic observed in late 2020 and early 2021 coincides with the identification of the P.1 lineage. The spread of the newly mutated virus throughout the Brazilian population, consequently infecting more younger women, probably reflected in the observed increase in maternal deaths. As illustrated in ►Figure 1, there was an expressive increase in MMR by COVID-19 in 2021 in almost all Brazilian states. Nonetheless, this increase was most prominent in the North of Brazil, specifically in the states of Amazonas and Roraima, the epicenter of the P.1 emergence. As a comment, in our hospital, there were no deaths in pregnant women in 2020 and, in 2021, 5 deaths in until April. These five cases were positive for SARS-CoV-2; RNA sequencing identified four of them as the P.1 variant and one was identified as a British variant (unpublished data).
Implications for Clinicians and Public Health Managers
The national pandemic scenario presented herein highlights the need for coordinated strategies to contain the pandemic and avoid further spreading of other potentially dangerous variants. Brazil is a continental country, with ethnic, social, cultural and economic disparities that impact access to the health system in different regions. The irregular distribution of the number of deaths throughout the Brazilian states is probably related to different patterns of population exposure and to the deficient infrastructure available to assist severe cases.
Pregnant women represent a unique and vulnerable group. In this context, our study provides crucial epidemiological information for the scientific community and public health managers about the impact of the COVID-19 pandemic on pregnant women. We analyzed data from an extended period, allowing the evaluation of the Brazilian pandemic evolution from its onset in 2020 to its worsening in 2021, the latter coinciding with the emergence of the P.1 variant in Manaus.
Knowledge about the benefits of vaccination in mitigating the pandemic is a consensus all over the world. In Brazil, 63,276,223 people received all the doses prescribed by the vaccination protocol until September 7, 2021. 19 Although there is no consolidated information on the efficacy and safety of COVID-19 vaccines in pregnant women, current guidelines recommend their administration even during pregnancy, as the risks of COVID-19 outweigh the hypothetical risks of the vaccines. 20 Vaccination against SARS-Cov-2 in Brazil started on January 17, 2021, in the priority groups defined by the National Immunization Program of the Ministry of Health. 21 In May 2021, pregnant and postpartum women with comorbidities were included in this priority group, which is consistent with the concerns raised by our results on maternal mortality.
The present study is based on a comprehensive public database that provides information on large numbers of women. Epidemiological information about the deaths of pregnant women is particularly important for the scientific community and public policy managers to fight this pandemic in Brazil.
The present study shows findings that raise the need for actions to protect the health of all women during the pandemic, since COVID-19 is an important cause of death among women in Brazil, especially in the second wave after the emergence of the P.1 variant of SARS-CoV-2. The increase in maternal deaths in 2021 fosters the debate on expanding the coverage and accelerating the pace of vaccination in pregnant women before more dangerous SARS-CoV-2 variants emerge.
Strengths and Weaknesses of the Study
The present study is based on a comprehensive public database on COVID-19, which allowed us to include a large sample size comprising pregnant and postpartum women and compared them with nonpregnant women, all hospitalized with the severe form of COVID-19 confirmed by laboratory and clinical tests.
Although notification of COVID-19 is mandatory in Brazil, we cannot guarantee that the databank covers all women hospitalized due to COVID-19. Data entry is done manually by health professionals throughout the national territory; therefore, there is probably a significant amount of data gaps.
The present study is a comparative analysis of the number of deaths due to COVID-19 between obstetric and nonobstetric women in the years of 2020 and 2021, in the presence or absence of comorbidities. The inclusion of a postpartum group could bias the data analysis, as there is the possibility that some pregnant women with viable fetuses could have been hospitalized with ARDS and underwent childbirth due to the worsening of the clinical condition. Eventual deaths are registered as having occurred in the postpartum period when they resulted from the evolution of the disease since pregnancy. The public database on which the present study is based did not have the necessary data for this evaluation.
Conclusion
There was an increase in maternal deaths from COVID-19 in 2021 compared with 2020, especially in patients with comorbidities. Death rates were even higher in nonpregnant women, with or without comorbidities.
Contributions
Scheler C. A. and Discacciati M. G.: conception and design, analysis and interpretation of data, writing of the manuscript, critical review of the intellectual content, final approval of the version to be published. Scheler C. and Discacciati M. contributed equally to the present paper. Vale D. and Surita F. G.: analysis and interpretation of data, writing of the manuscript, critical review of the intellectual content, final approval of the version to be published. Lajo G.: analysis and interpretation of data. Teixeira J.: conception and design, analysis and interpretation of data, writing of the manuscript, critical review of the intellectual content, final approval of the version to be published.
Superficial Siderosis Misdiagnosed As Parkinson’s Disease in a 70-year-old Male Breast Cancer Survivor
A 70-year-old African American male with a history of hypertension, congestive heart failure, breast cancer status-post six rounds of doxorubicin/cyclophosphamide, and Parkinson’s disease managed with carbidopa/levodopa presented to the emergency department with bilateral hearing loss and ataxia. The patient was admitted and evaluated for possible traumatic, oncological, and pharmacological etiologies. Further investigation revealed hypointensities along the cerebellar folia and basal cisterns on MRI in addition to the two-year history of progressive bilateral hearing loss and gait ataxia. In view of these findings, the patient was diagnosed with superficial siderosis and Parkinson’s medications were discontinued. Superficial siderosis should be considered as a diagnosis in cases of bilateral hearing loss and ataxia in patients with history of anticoagulation and risk factors for prior cerebrovascular accidents or head trauma.
Introduction
Superficial siderosis is a rare neurological disease associated with chronic subpial deposition of hemosiderin throughout the brain and spinal cord due to recurrent episodes of subarachnoid hemorrhage [1][2][3][4][5][6][7][8][9]. Disease prevalence within the general population remains unclear, although population-based studies have reported a range of 0.21%-1.43% in patients aged over 55 years, with greater prevalence in those aged over 69 years [2,10]. The disease presents clinically as a triad of bilateral sensorineural hearing loss, ataxia, and myelopathy with pathognomonic findings of hypointensities along the brainstem, cerebellum, and spinal cord identified by multisequence MRI.
In view of the nonspecific sensorineural, neurological, and cerebrovascular findings associated with superficial siderosis combined with its low prevalence within the population, the disease can often be misdiagnosed or missed altogether. Herein, we present a case of superficial siderosis that was misdiagnosed and mistreated as Parkinson's disease. We suggest that in order to avoid future misdiagnoses, superficial siderosis should be considered as a differential diagnosis for elderly patients, especially those on anticoagulation or with a history of brain trauma or injury, who present with bilateral sensorineural hearing loss and gait ataxia.
Case Presentation
A 70-year-old African American man was admitted to the hospital with bilateral hearing loss and ataxia. The patient was initially brought in by his wife owing to concern for a potential traumatic brain injury, as he had hit his head on a metal gate three days previously while working on his farm. Upon further inquiries concerning history, the patient's wife stated that his hearing and gait had progressively declined over the previous two years. The patient had first struggled with high pitched sounds, followed by both high-and low-pitched sounds. His wife noticed him sitting closer to his television, struggling to converse in loud settings, and asking others to repeat themselves more frequently. Both the patient and his wife attributed the initial hearing losses to old age.
The patient was diagnosed six years earlier at an outside facility with obstructive sleep apnea, hypertension, benign prostatic hyperplasia, and breast cancer. At that time, he underwent a bilateral mastectomy followed by six months of chemotherapy with doxorubicin and cyclophosphamide. He was subsequently diagnosed with deep vein thrombosis, pulmonary embolism, and congestive heart failure attributed to the chemotherapy. Long-term anticoagulation with warfarin was initiated. The remainder of his medications included lisinopril, metoprolol, amlodipine, tamsulosin, anastrozole, and tamoxifen. The patient followed up consistently with his primary physician and oncologist. At an appointment two years earlier, his gait was noted to be in decline with a tendency to lose balance, and his movements were slowed. He was inappropriately diagnosed with Parkinson's disease, and carbidopa/levodopa therapy was initiated due to the similarity of physical manifestations between the patient's presentation and the misdiagnosed movement disorder.
Presently, exam findings on presentation were unremarkable with the exception of bilateral sensorineural hearing loss, ataxia, and 1+ pitting edema to the anterior tibia bilaterally. Weber testing showed no lateralization and Rinne testing revealed air conduction greater than bone conduction. The patient had no signs of trauma or infection to the external ear or tympanum bilaterally. There was no evidence of a bulging membrane. The light reflex was observed. Cranial nerves (CNs) II-XII were otherwise grossly intact. The patient's gait was markedly ataxic and spastic with a tendency to fall. Romberg testing was negative. He did not exhibit a resting tremor or cogwheeling of the extremities. Head CT without contrast was ordered, which showed no acute intracranial abnormalities; however, old bilateral basal ganglia infarcts were noted. Twelve-lead ECG showed normal sinus rhythm with a first-degree atrioventricular block ( Figure 1).
FIGURE 1: Twelve-lead ECG
HR: heart rate; A-V: atrioventricular; ECG: electrocardiogram; Ax: axis. The remaining abbreviations cannot be expanded and refer to ECG waves and intervals.
Multisequence brain MRI showed no signs of acute stroke; however, old blood products consistent with the hypointensity of hemosiderin were noted along the cerebellar folia and basal cisterns (Figure 2).

FIGURE 2: Hemosiderin hypointensities (black) noted along the cerebellar folia and basal cisterns in both coronal and axial planes (yellow arrows). Cranial nerve VIII (CN8) noted (white arrows).
FLAIR, fluid-attenuated inversion recovery
Periventricular small vessel disease and parenchymal atrophy were also noted. Carotid artery and renal ultrasounds showed no evidence of occlusion, obstruction, plaque deposition, or hydronephrosis. Transthoracic echocardiography showed an ejection fraction of 60%, trivial mitral regurgitation, and physiological pulmonary regurgitation ( Table 1). There was no sign of ventricular abnormality or dysfunction. The patient was found to have a kidney injury which was probably chronic and secondary to his hypertension. Elevated prothrombin time was attributable to warfarin; however, the international normalized ratio was subtherapeutic. First-degree atrioventricular block on ECG was attributed to beta-blocker compliance and the pedal edema to amlodipine, a dihydropyridine calcium-channel blocker. Neurology recommended a five-day trial of prednisone in case the symptoms were secondary to an autoimmune or inflammatory etiology; however, the patient exhibited no response and MRI results suggested an alternative diagnosis.
TABLE 1: Transthoracic echocardiography: M-mode/2D and Doppler measurements and calculations.
The patient was discharged following a negative stroke and traumatic head injury work-up, with an outpatient audiogram appointment. At the time of discharge and at the otolaryngology follow-up visit, the diagnosis was consistent with superficial siderosis. The patient did not meet criteria for Parkinson's disease and was instructed to discontinue carbidopa/levodopa.
Discussion
Superficial siderosis is a rare and frequently neglected cause of sensorineural hearing loss and progressive ataxia in the elderly [3,4,9]. It develops secondary to slow, repeated intracranial hemorrhages into the subarachnoid space, which result in chronic intra- and extracellular hemosiderin deposition in the subpial layers of the brain, spinal cord, and CNs [3,6,7,9]. Possible causes of these bleeds include intracranial tumors, head trauma, arteriovenous malformations, aneurysms, cervical root avulsion, neurosurgical procedures, brachial plexus injury, amyloid angiopathy, and chronic subdural hematomas [6-8]. Hemosiderin is most commonly found surrounding the brain stem, cerebellum, and basal cisterns as it pools in the posterior fossa, although superficial cortical deposition can be seen as well [1,5,9]. The diagnostic procedure of choice is T2 and susceptibility-weighted (SW) MRI, which visualizes paramagnetic blood products as hypointense [5,9]. T1 and gradient echo MRI are less sensitive in detecting blood products [5]. CT is less well suited to detecting hemosiderin, although it may show deposits as hyperintense and can be used to rule out other substances, such as calcium, that appear hypointense on T2 and SW MRI [9].
Superficial siderosis preferentially disturbs tissues with greater exposure to CSF and longer glial segments, which increases the rate of iron overload and subsequent lipid peroxidation of surrounding structures [3]. The condition presents with a characteristic triad of impairment: bilateral sensorineural hearing loss, owing to the nature and long time course of CN VIII injury around the basal cisterns (Figure 2); gait ataxia, due to involvement of the cerebellar folia and vermis (Figure 2); and myelopathy, due to pyramidal tract involvement [3,8,9]. Other symptoms can include anosmia, which is often overlooked owing to the long CN I glial sheath; dementia due to cortical necrosis; cerebral atrophy; and declining executive function [3,9].
Hearing loss in this age group is often attributed to presbycusis, and further investigation is curtailed [4]. However, the presence of additional symptoms should raise strong clinical suspicion for an alternative ongoing disease process. Differential diagnoses should include multiple sclerosis, autoimmune disease, neurosyphilis, Lyme disease, ototoxic pharmaceuticals (salicylates, aminoglycosides, platinum-based chemotherapeutic agents, etc.), and superficial siderosis [4,7]. Parkinson's disease is less commonly confused with siderosis. We recommend heightened suspicion in individuals on anticoagulation and in those with prior head injuries.
Bleeding sources are identified in less than 50% of cases [4,9]. Management is approached stepwise, with a primary goal of stopping the bleeding, and secondary goals focused on chelation of hemosiderin deposits with lipid-soluble agents such as deferoxamine, deferiprone, and trientine. However, the risks often outweigh the benefits as symptomatic improvement can be negligible [7,8].
Conclusions
Superficial siderosis should be considered in any case of progressive bilateral sensorineural hearing loss and ataxia with or without additional accompanying symptoms, especially in individuals on long-term anticoagulation or those with prior head injuries. It is commonly misdiagnosed or underdiagnosed owing to its low frequency of occurrence in the general population. Treatment entails surgical correction of the hemorrhage, followed by iron chelation should benefits outweigh risks. We recommend increasing clinical sensitivity in order to decrease time to diagnosis and improve overall patient outcomes.
Development and application of an environment monitoring system based on IPv6
The widespread use of the Internet of Things (IoT) makes it possible to connect everything, but having enough IP addresses is a fundamental requirement of this paradigm. Previous environmental monitoring systems in China have been based on IPv4. In combination with the characteristics and requirements of China's atmospheric environment monitoring system, this paper develops a monitoring system based on IPv6 technology. Users can directly access the monitoring equipment through its IPv6 web interface to view data and perform configuration operations. This paper first introduces the design and implementation of the software and hardware of the system, then introduces the simplification of the IPv6 protocol, its transplantation onto ARM, and the design and implementation of the embedded web server. The experimental results show that the developed atmospheric environment monitoring system can realize continuous data acquisition based on IPv6 and provide data-driven support for environmental protection management and decision-making.
Monitoring the atmospheric environment is of great significance for the prevention and control of air pollution and has long been a research focus in China and abroad. Since the 1970s, developed countries have paid attention to the monitoring of ambient air quality. To date, the United States has established multiple mature and stable air monitoring networks, including State and Local Air Monitoring Stations (SLAMS), National Air Monitoring Sites (NAMS), Special Purpose Monitoring Stations (SPMS) and the Photochemical Assessment Monitoring Sites (PAMS) 1,2 , which can be used to obtain real-time online data on conventional pollutants in the atmosphere and on other pollutants harmful to human health. In recent years, various sensor networks have been applied to atmospheric environment monitoring in a large number of studies. For example, Kelechi et al. 3 implemented a low-cost air quality monitoring system using a design based on Arduino and ThingSpeak. Chen et al. 4 used low-cost sensor air quality monitoring to study students' exposure to PM 2.5 . Schneider 5 uses observations and model information from low-cost sensors to map urban air quality in near real time.
The construction of China's air quality monitoring network began in the 1970s, and a series of national air quality monitoring stations were built 6,7 . There are two main domestic detection methods for pollutants in the atmospheric environment, namely, automated monitoring stations and traditional manual detection. The manual detection method requires the staff to directly collect air samples and then detect and analyze the sample gas in the laboratory, which is time-consuming and labor-intensive. Moreover, the result is susceptible to human interference, making the final result unreliable. Automated monitoring sites monitor air quality through complex instruments and equipment and upload air quality data to the central server in real time. However, they possess the disadvantages of uneven distribution and low density. In addition, the complicated and expensive equipment of national control sites and the high cost of daily maintenance make it impossible to deploy air quality monitoring sites in a large area and with high density, which means that automated monitoring stations cannot obtain air quality data with high temporal and spatial accuracy 8 .
With the maturity of sensor network technology, increasingly abundant communication methods have appeared. In view of the abovementioned problems, the introduction of IoT technology into air quality monitoring has become a research hotspot. Wang et al. 9 constructed an air quality monitoring system based on a partial least squares regression algorithm with the support of national projects. Other work realized communication with a cloud platform through an IPv6 network to build an intelligent indoor air quality monitoring system based on IPv6. Zhu et al. 22 proposed a way to collect urban environmental data by integrating IPv6 networks. Rzepecki et al. 23 designed experiments to test and analyze the networking of IPv6 in sensor networks. Although these tests showed that IPv6 has good application prospects, they are mostly at the theoretical stage. This paper draws on domestic and foreign research to explore the application of IPv6 in atmospheric environment monitoring and to design and implement an atmospheric environment quality monitoring system for field use. In this work, each device has a globally unique, traceable IP address as part of the IoT, and the IPSec support in IPv6 makes the terminal devices more secure. The work solves the problem of a discontinuous address space and facilitates address management by aggregating the addresses of device nodes in a region. With IPv6 there is no NAT, so direct end-to-end communication between devices improves the efficiency of data transmission and makes node networking more convenient and intelligent. Notably, this article describes how to implement an IPv6 HTTP web server on a resource-constrained ARM processor. The data-collection end becomes an HTTP server, rather than a traditional collection node that can only push data one way to a central server for display. This not only makes full use of the massive address space of IPv6 but also allows each collection node to be accessed, controlled, and queried individually in real time.
Structure of the system
When monitoring air quality in the field, the environment is often complex and changeable. Monitoring nodes must therefore be deployed at high density and remain stable to complete long-term monitoring tasks. Solar power is mostly used to achieve low power consumption. To meet the requirements of collecting different pollutants, a monitoring node should allow components such as sensors to be replaced easily. The system is mainly composed of atmospheric environment monitoring nodes, each with the IPv6 protocol embedded as an IPv6 communication module, and a data center, as shown in Fig. 1; the monitoring nodes are responsible for collecting data and uploading them to the data center.
Hardware design of the monitoring system nodes
The overall hardware design framework of the atmospheric environment monitoring system (hereinafter referred to as "the system") is shown in Fig. 2. Various factors, such as data requirements, energy supply, and the outdoor environment, are fully taken into account in the hardware development. A modular design is adopted: each module of the monitoring node connects to the processor, and the software coordinates the modules to complete data collection, storage and remote transmission. The monitoring node is composed of modules such as a data acquisition board, a core processing board (the board hardware is shown in Fig. 3), and a smart solar-power management system. The motherboard is the core of the entire platform. The embedded operating system in the MCU on the motherboard performs unified scheduling and coordination of the entire platform to ensure the reliability and adaptability of the system in the actual application environment.
The central controller is the STM32F107 produced by STMicroelectronics, a 32-bit ARM chip in the Cortex™-M3 series with abundant peripherals and ample RAM. The native STM32 library provides upper-layer software with interfaces for operating the hardware registers of the STM32 chip, on top of which the application code is developed. In this paper, the BNU embedded operating system embedded in the chip is used to control the whole system, including the processing and storage of the collected data, the software drivers of the communication modules, the network communication protocol, the processing of sensor data, system task scheduling, and control of the power supply system 24,25 .
The data acquisition module is interfaced through serial ports. To connect enough serial-port sensors, the system uses a digital-switch polling scheme to obtain sensor data, which effectively reduces the number of MCU serial ports required; a hedged sketch of such a polling loop is given below. The external sensors in this system are electrochemical gas sensors (for SO 2 , CO, O 3 , and NO 2 ) and PM 2.5 and PM 10 laser-scattering sensors.
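The polling scheme described above can be sketched as follows. This is a minimal illustration rather than the authors' code: the multiplexer select pins, UART instance, sensor count and frame length are assumptions, and the StdPeriph calls are used in their generic blocking form.

```c
#include "stm32f10x.h"          /* StdPeriph headers for the STM32F107          */

#define SENSOR_COUNT   6        /* assumed number of multiplexed serial sensors */
#define FRAME_LEN      9        /* assumed fixed reply-frame length per sensor  */

/* Select one sensor channel on the digital switch (GPIO pins are assumptions). */
static void mux_select(uint8_t ch)
{
    GPIO_WriteBit(GPIOB, GPIO_Pin_0, (ch & 0x01) ? Bit_SET : Bit_RESET);
    GPIO_WriteBit(GPIOB, GPIO_Pin_1, (ch & 0x02) ? Bit_SET : Bit_RESET);
    GPIO_WriteBit(GPIOB, GPIO_Pin_2, (ch & 0x04) ? Bit_SET : Bit_RESET);
}

/* Blocking read of one reply frame from the currently selected sensor. */
static void sensor_read_frame(uint8_t *buf)
{
    for (int i = 0; i < FRAME_LEN; i++) {
        while (USART_GetFlagStatus(USART2, USART_FLAG_RXNE) == RESET)
            ;                                   /* wait for a received byte     */
        buf[i] = (uint8_t)USART_ReceiveData(USART2);
    }
}

/* One polling pass over all sensors. */
void poll_all_sensors(uint8_t frames[SENSOR_COUNT][FRAME_LEN])
{
    for (uint8_t ch = 0; ch < SENSOR_COUNT; ch++) {
        mux_select(ch);                         /* route sensor ch to USART2    */
        sensor_read_frame(frames[ch]);          /* collect its reply frame      */
    }
}
```

A production driver would add per-byte timeouts and frame checksum validation; those details are omitted here to keep the sketch short.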
SPI FLASH is selected as the data storage module. In practical applications there may be temporary interruptions in communications; uploading the stored data in a centralized manner reduces both the frequency of data uploads and the power consumption of the system [26-28]. Chemical laboratory methods cannot be applied at online monitoring stations, and the infrared detection method is not suitable for large-area monitoring because of the high cost of its sensors; therefore, electrochemical sensors are generally used in online monitoring equipment. PM 2.5 and PM 10 are particulate pollutants. The traditional gravimetric and micro-oscillating balance methods are commonly used in laboratories, while the laser scattering method can automatically monitor the concentration of particulate matter with a small, low-cost sensor 29 . A laser scattering sensor is therefore chosen for this system. The models and detailed parameters of the sensors are shown in Table 1. The measuring ranges of the sensors fully meet the needs of atmospheric environment monitoring, and their short response times help reduce the power consumption of the system.
Data store driver. Large-capacity storage devices are usually required in embedded systems to store data. Generic Electrically Erasable Programmable Read-Only Memory (EEPROM), FLASH memory and Secure Digital (SD) cards can all be integrated; the capacity of the first two is usually smaller than 1 MB, whereas the storage capacity of an SD card is usually larger than 1 GB. The system uses an SD card to store sensor data because the card is removable and the stored data can be read from it directly. Since the atmospheric data have a simple, uniform format and processor resources must be conserved, only the SD card is used as the large-capacity data storage unit of the node, without a complex file system; SD card initialization and single-block read and write operations are implemented in the system. As shown in Fig. 4, communication between the SD card and the ARM adopts a send-reply mechanism, and data exchange between the SD card and the ARM is carried out over the SDIO bus.
To initialize the SD card, the following software configuration is completed (a minimal sketch of this sequence follows the list):
1. The SD card's default mode is SD bus mode. To simplify read and write operations, it is configured to work in SPI mode. The STM32F107 first initializes the SDIO bus, sets the SPI to three-wire operation, and sets the operating frequency below 400 kHz.
2. The SD power is turned on, the chip-select signal CS is set high, and the card is given time to complete the initialization of its internal storage, which requires at least 80 SPI clock cycles.
3. The SD card is enabled by pulling the chip-select signal CS low. The STM32F107 sends the CMD0 command, and the SD card enters SPI mode.
4. After the CMD1 command is sent and the correct response 0x00 is returned, the SD card is initialized successfully and data transfer can begin.
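The sequence above can be sketched as follows. It is illustrative only: the SPI transfer helper spi_xfer() and the chip-select helper sd_cs() are assumed to be provided by the board support code, and no timeouts or SDHC (CMD8/ACMD41) handling is included.

```c
#include <stdint.h>

/* Assumed board-support helpers: exchange one byte on the SPI bus wired to the
 * SD card, and drive its chip-select line (1 = CS low/selected, 0 = CS high). */
extern uint8_t spi_xfer(uint8_t out);
extern void    sd_cs(int select);

/* Send a 6-byte SD command in SPI mode and return the R1 response. */
static uint8_t sd_cmd(uint8_t cmd, uint32_t arg, uint8_t crc)
{
    uint8_t r1;
    spi_xfer(0xFF);                        /* one idle byte before the command  */
    spi_xfer(0x40 | cmd);
    spi_xfer((uint8_t)(arg >> 24));
    spi_xfer((uint8_t)(arg >> 16));
    spi_xfer((uint8_t)(arg >> 8));
    spi_xfer((uint8_t)arg);
    spi_xfer(crc);                         /* only CMD0 needs a valid CRC here  */
    do {                                   /* poll until the card answers       */
        r1 = spi_xfer(0xFF);
    } while (r1 & 0x80);
    return r1;
}

/* Initialization sequence described in the text: idle clocks, CMD0, CMD1. */
int sd_init(void)
{
    sd_cs(0);                              /* CS high during the dummy clocks   */
    for (int i = 0; i < 10; i++)
        spi_xfer(0xFF);                    /* >= 80 clock cycles with CS high   */

    sd_cs(1);
    if (sd_cmd(0, 0, 0x95) != 0x01)        /* CMD0: enter idle state / SPI mode */
        return -1;

    while (sd_cmd(1, 0, 0xFF) != 0x00)     /* CMD1: repeat until card is ready  */
        ;                                  /* a real driver would add a timeout */

    sd_cs(0);
    return 0;
}
```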
Data transmission to and from the SD card is completed mainly through command interaction. The single-block read procedure is as follows:
1. Pull the chip-select signal low; after the CMD17 command is sent, the correct R1 response is received.
2. After the start-of-data token 0xFE is received, the data block is read.
3. A CRC integrity check is performed on the received data.
4. When reception is complete, the chip-select signal is pulled high to deselect the SD card.
The single-block write procedure is as follows:
1. Pull the chip-select signal low; after the CMD24 command is sent, the correct R1 response is received.
2. Data writing begins after the start-of-write token 0xFE is sent.
3. The data block to be written is sent, followed by its two-byte CRC check code.
4. The data-response token is read; if the write failed, the block is written again, and after a successful write the chip-select signal CS is pulled high.
A minimal sketch of these two procedures follows.
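The following sketch illustrates single-block read and write in SPI mode. It reuses the hypothetical spi_xfer(), sd_cs() and sd_cmd() helpers from the initialization sketch and omits timeouts and CRC verification for brevity.

```c
#define SD_BLOCK_SIZE 512

/* Read one 512-byte block starting at 'addr' (byte address on standard cards). */
int sd_read_block(uint32_t addr, uint8_t *buf)
{
    sd_cs(1);
    if (sd_cmd(17, addr, 0xFF) != 0x00) {  /* CMD17: READ_SINGLE_BLOCK          */
        sd_cs(0);
        return -1;
    }
    while (spi_xfer(0xFF) != 0xFE)         /* wait for the start token 0xFE     */
        ;
    for (int i = 0; i < SD_BLOCK_SIZE; i++)
        buf[i] = spi_xfer(0xFF);
    spi_xfer(0xFF);                        /* discard the two CRC bytes         */
    spi_xfer(0xFF);
    sd_cs(0);
    return 0;
}

/* Write one 512-byte block starting at 'addr'. */
int sd_write_block(uint32_t addr, const uint8_t *buf)
{
    sd_cs(1);
    if (sd_cmd(24, addr, 0xFF) != 0x00) {  /* CMD24: WRITE_SINGLE_BLOCK         */
        sd_cs(0);
        return -1;
    }
    spi_xfer(0xFE);                        /* start token for a single block    */
    for (int i = 0; i < SD_BLOCK_SIZE; i++)
        spi_xfer(buf[i]);
    spi_xfer(0xFF);                        /* two dummy CRC bytes               */
    spi_xfer(0xFF);
    if ((spi_xfer(0xFF) & 0x1F) != 0x05) { /* data-response token: accepted?    */
        sd_cs(0);
        return -1;
    }
    while (spi_xfer(0xFF) == 0x00)         /* wait while the card is busy       */
        ;
    sd_cs(0);
    return 0;
}
```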
Solar power management system. The system provides a stable power supply for the monitoring nodes by combining solar energy and storage batteries, and optimization of the software further reduces the power consumption of the system so that the monitoring nodes can perform monitoring tasks continuously. The smart solar power management system is developed on top of the embedded operating system and can remotely control the power supply, including turning it on, shutting it off and restarting it (the time point, time period, and "on"/"off" power mode can all be set); provide 12 V, 5 V, 3.3 V and other DC outputs; and protect the system from overcharging and over-discharging (the system decides whether to turn off the charging device according to the voltage level of the power supply), among other operations. The system makes full use of the solar energy at the installation site to achieve a relatively stable power supply. The performance indicators and characteristics of the smart solar power supply designed for this system are as follows: a 40/60 Ah rechargeable battery (selected depending on the installation location); a solar panel charging interface that supports 15 W solar panels; and management of battery charging and discharging, whereby the charging voltage and the maximum battery voltage can be corrected in real time according to the charging and discharging performance of the battery and the local temperature conditions.
The smart power management system mainly consists of two parts, software and hardware; the hardware structure is shown in Fig. 3. The components are as follows:
1. MCU (main control unit): the MCU controls the entire power module through its internal embedded software, checks the power status of the system through the solar and battery voltages collected by the ADC, and, according to the conditions, controls battery charging and the power supply to the other modules of the system through the switch circuit.
2. Triode switch control circuit: the current switch is controlled by a triode, and the central controller drives the triode through the level of an ordinary IO (input/output) port. When the level is high (3.3 V), the switch is on; when the level is low (0 V), the switch is off.
3. ADC (analog-to-digital converter): the ADC converts the analog voltages of the solar panel and battery into digital values and provides them to the central processing unit, supplying effective real-time power information for software control and management of the power module.
4. 18 V solar panel: the panel provides a long-term power supply for the system through renewable solar energy.
5. 12 V battery: a rechargeable battery provides a long-term power supply for the system.
Voltage conversion circuit: a voltage regulator chip converts the input voltage into the required output voltages, and this portion of the input is switched by the triode circuit. Sampling of the power supply voltages is performed by the ADC inside the STM32F107, which converts the solar panel and battery voltages into digital values so that the software can read them in real time and use the device's own status data for autonomous management. According to the battery and solar voltage data, the device automatically adjusts the sensor sampling frequency: when the battery is judged to be low, the system reduces its energy consumption by lowering the sampling frequency so that the monitoring time of the device is maximized without shutting down. A hedged sketch of this adaptive-sampling logic follows.
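The adaptive-sampling behaviour can be sketched as follows. The ADC channel number, voltage-divider ratio and thresholds are assumptions for illustration, and the StdPeriph ADC calls are used in their generic single-conversion form.

```c
#include "stm32f10x.h"

#define VREF_MV        3300u
#define ADC_FULL_SCALE 4096u
#define DIVIDER_RATIO  6u        /* assumed resistor divider: 12 V battery -> ~2 V */

/* Read one ADC channel (single conversion, blocking). */
static uint16_t adc_read(uint8_t channel)
{
    ADC_RegularChannelConfig(ADC1, channel, 1, ADC_SampleTime_55Cycles5);
    ADC_SoftwareStartConvCmd(ADC1, ENABLE);
    while (ADC_GetFlagStatus(ADC1, ADC_FLAG_EOC) == RESET)
        ;
    return ADC_GetConversionValue(ADC1);
}

/* Convert a raw ADC reading into the supply voltage in millivolts. */
static uint32_t supply_mv(uint16_t raw)
{
    return (uint32_t)raw * VREF_MV / ADC_FULL_SCALE * DIVIDER_RATIO;
}

/* Pick a sampling interval (in seconds) from the battery voltage. */
uint32_t choose_sample_interval(void)
{
    uint32_t batt_mv = supply_mv(adc_read(ADC_Channel_10));  /* battery channel (assumed) */

    if (batt_mv > 12500)      return  60;   /* battery healthy: sample every minute */
    else if (batt_mv > 12000) return 300;   /* getting low: back off to 5 minutes   */
    else                      return 900;   /* low battery: minimum duty cycle      */
}
```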
Software design of the system
The software design framework of the system is shown in Fig. 5. All software runs on the 32-bit ARM processor STM32F107. To monitor the concentration of major air pollutants and particulate matter and display the web page, the software design is divided into four layers from bottom to top: the hardware driver layer, kernel task layer, application driver layer and application program layer.
The hardware driver layer mainly provides ARM peripheral drivers to complete data exchanges with the different peripheral hardware components, which is the basic means of external interaction for an embedded system. The ARM chip performs its input and output functions by connecting peripherals to devices outside the chip; each peripheral module usually performs a single function, ranging from simple serial communications to complex 802.11 wireless devices. The software of the hardware driver layer directly reads and writes the hardware registers. Current embedded software design usually uses the standard peripheral drivers provided by the corresponding chip manufacturer directly; for the STM32F107 selected for this system, this is the "STM32F1XX_StdPeriph_Driver" package provided by STMicroelectronics, which includes register-level read and write drivers for all peripherals of the chip.
As a key layer in the embedded system, the kernel task layer provides basic thread, semaphore and message management functions for the software system. At the same time, it abstracts the hardware peripherals, which makes them easier to call from upper-layer applications. The kernel task layer of this system mainly uses the simple real-time operating system RT-Thread, which contains the basic functions needed to meet different application requirements.
The application driver layer provides functional application drivers such as communication and storage for upper-layer applications and comes with modules such as the IPv6 protocol and file system. These modules directly provide the function interface called by the upper layer so that the applications do not need to rely too much on the lower layer in the actual design and operation of the system and can directly accomplish the tasks that need to be performed by calling the abstract functions.
The application program monitors the node equipment, including the collection and storage of sensor data, management of embedded web servers and minimization of power consumption.
Brief Introduction of the IPv6 Protocol in sensor networks. The IPv6 protocol was proposed to solve the problems of the IPv4 protocol. The IPv6 protocol stack is a well-established standard, which can be understood in related articles 8,31 .
The implementation of IPv6 in sensor networks is very different from that of a general network protocol stack. Table 2 gives the breakdown and detailed content of each layer in the IPv6 sensor network and in the TCP/IP protocol on a general PC and compares them with the OSI (Open Systems Interconnection) seven-layer model.
As shown in Table 2, the TCP/IP protocol is simplified to four layers: the application layer, transport layer, network layer, and network interface layer. In the IPv6 sensor network, all layers except the network interface layer are implemented by a pure software protocol stack. The network interface layer in an embedded system is mainly formed by specific network peripherals and PHY control chips to control the transmission medium and data packets 32 .
IPv6 in the embedded system comprises software and a low-level configuration of hardware peripherals. It has four complete hierarchical functions and contains important components, such as ICMPv6, IPv6 message processing and TCP/IP communication. IPv6 lies between the real-time embedded operating system and the functional program, and its working position in the system is shown in Fig. 6. The embedded real-time operating system provides the basic service functions of thread management, thread synchronization and message transmission for the protocol stack. The upper application and the protocol stack are independent of each other; the application only needs to call the stack's functions to satisfy its functional requirements. This layered design method is therefore also used in the system to simplify the system design: under different application requirements, it is only necessary to focus on the design of the upper-layer application, while the details of the specific implementation of the protocol stack are not needed. The hardware equipment includes the MCU, the ethernet physical-layer (PHY) chip, and memory. The embedded operating system manages the hardware resources through the drivers of the different hardware peripherals, thereby providing services for the protocol stack and other upper-layer applications.
Simplification of the IPv6 protocol. Simplification of the IPv6 protocol mainly solves the problem of monitoring nodes that access the IPv6 backbone network. Because the hardware storage space and processing speed of the monitoring node are very limited, the original protocol must be streamlined to achieve access at the IP layer. Meanwhile, a compromise is made in terms of carrying capacity and maintenance difficulty. The simplification of the IPv6 protocol stack in the system mainly involves the connection between layers and the mutual exchange of internal data. The IPv6 protocol stack itself is designed with a four-layer hierarchical structure. In implementation, the more independent the hierarchical modules are, the higher the performance usually is, but the more resource overhead is required. Under normal circumstances, the embedded system has a single application scenario, so strict layering may not be required. When the connections between different layers are implemented, design methods such as function calls can be directly used to save system resources.
The protocol stack processes message data through its packet data buffers (lwIP's pbuf structures). The functions of the data buffer include: (1) providing a management structure for network data packets, which improves packet-processing efficiency and, by reducing the number of in-memory data copies, improves data-processing performance and saves memory resources; and (2) controlling data throughput through the designed size of the buffer. In practical applications, a buffer of appropriate size is designed according to the data communication requirements of the device so that memory remains available for other applications in the system. The memory management module provided by the protocol stack therefore manages the memory required by the protocol, reclaiming and reallocating used memory and ensuring that tasks are not interrupted or fail because network applications have exhausted memory. To save resources, the lwIP protocol stack uses shared memory, message queues and semaphores to connect its layers. Although this design makes the layering less strict and less independent, it effectively reduces the resource consumption of embedded devices, and it fully meets their communication requirements, since embedded devices have relatively specific functions and usually very clear application scenarios and requirements. The code of the lwIP protocol stack is open source; functional requirements should be considered fully during porting, and the buffer sizes and application modules should be adjusted for different processors and memory resources. The protocol stack is layered in a modular way, and different modules are enabled or disabled through global macro configuration to achieve a better resource allocation; a hedged example of such a configuration is shown below.
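In lwIP this macro-based tailoring is done in the project's lwipopts.h. The following excerpt is a hypothetical configuration in the spirit described above; the specific values are assumptions, not the authors' settings.

```c
/* lwipopts.h -- example build-time configuration for a small IPv6 sensor node */

#define NO_SYS                  0       /* run on top of an RTOS (sys_arch port)   */
#define LWIP_NETCONN            1
#define LWIP_SOCKET             1       /* BSD-style socket API for the TCP test   */

/* Enable IPv6, stateless autoconfiguration, MLD and stateless DHCPv6. */
#define LWIP_IPV6               1
#define LWIP_IPV6_AUTOCONFIG    1
#define LWIP_IPV6_MLD           1
#define LWIP_IPV6_DHCP6         1

/* Memory sizing: a compromise between throughput and RAM on the STM32F107. */
#define MEM_SIZE                (16 * 1024)
#define PBUF_POOL_SIZE          12
#define PBUF_POOL_BUFSIZE       1524
#define MEMP_NUM_TCP_PCB        5
#define MEMP_NUM_NETCONN        8

/* Disable modules the node does not need to save flash and RAM. */
#define LWIP_SNMP               0
#define PPP_SUPPORT             0
```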
Transplantation of the IPv6 Protocol onto ARM. When the protocol stack is transplanted, the timer is implemented with the timer peripheral inside the STM32F10x controller to provide time-management functions for the protocol stack, and synchronization is realized with the semaphores of the real-time operating system. The responsibilities of the IPv6 port on ARM therefore mainly include providing the functional modules required by the protocol stack (semaphores, timers, message mailboxes and threads); maintaining the receive and send interfaces for data at the network interface layer; initializing the protocol stack; driving the physical-layer network card; and setting the basic parameters for IPv6. The semaphore is used for task synchronization and resource management; through a semaphore, events between two independent tasks can be processed synchronously. The message mailbox is an abstract message-passing mechanism provided by the operating system for the protocol stack, with an implementation similar to that of the semaphore, except that the memory address pointer of a message is carried during delivery, so data transfer within the protocol stack can be realized; a mailbox supports posting and reading messages. Neither the semaphore nor the message mailbox blocks the processor: a task waiting for a semaphore or message is suspended and its thread is dormant. When the semaphore or mailbox is released, the corresponding thread is found in the dormant thread queue and woken, and after waking it processes the task associated with the successfully received semaphore or message. The semaphore and message mailbox are realized in this system as follows: (a) Semaphore: the err_t sys_sem_new(sys_sem_t *sem, u8_t count) function creates a semaphore. The semaphore sem is passed by pointer, and the semaphore becomes available to the system once creation is complete. There are two ways to wait on a semaphore in an application thread: a bounded wait, in which the task is deleted or other operations are performed if the semaphore is not received within the set time, and an unbounded wait, in which the task remains suspended until the signal is received. The u32_t sys_arch_sem_wait(sys_sem_t *sem, u32_t timeout) function waits for the semaphore in a thread; the parameter timeout is the waiting time, and a value of 0 means the system waits for the semaphore indefinitely. (b) Message mailbox: the err_t sys_mbox_new(sys_mbox_t *mbox, int size) function creates a message mailbox. The difference between the message mailbox and the semaphore is that creating a mailbox allocates a new memory area to provide a data transmission channel, whereas the semaphore occupies no message memory and provides synchronization signals for system tasks through count variables.
The protocol stack also needs to create threads, which mainly include TCP packet processing threads and UDP message processing threads. Therefore, the operating system must provide thread management functions for the protocol stack.
Thread creation: the sys_thread_t sys_thread_new(const char *name, lwip_thread_fn thread, void *arg, int stacksize, int prio) function creates a thread for the lwIP stack. A thread created by lwIP exists for the lifetime of the application. For example, data receiving and data sending are handled by two application threads; when no data are being received or sent, both threads are suspended. After a message arrives in its mailbox, a thread is activated and begins to send or receive IP packets for processing. A hedged sketch of these operating-system adaptation functions, assuming RT-Thread primitives, is shown below.
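The sketch below shows how the adaptation functions named above could be mapped onto RT-Thread primitives. It is an illustrative assumption, not the authors' port (RT-Thread also ships its own lwIP port), and it presumes that sys_sem_t, sys_mbox_t and sys_thread_t are typedef'd in sys_arch.h to rt_sem_t, rt_mailbox_t and rt_thread_t, respectively.

```c
/* sys_arch.c -- hedged sketch of the lwIP OS-adaptation layer on RT-Thread. */
#include <rtthread.h>
#include "lwip/sys.h"
#include "lwip/err.h"

err_t sys_sem_new(sys_sem_t *sem, u8_t count)
{
    *sem = rt_sem_create("lwip_sem", count, RT_IPC_FLAG_FIFO);
    return (*sem != RT_NULL) ? ERR_OK : ERR_MEM;
}

u32_t sys_arch_sem_wait(sys_sem_t *sem, u32_t timeout)
{
    /* timeout == 0 means "wait forever" in the lwIP sys_arch API. */
    rt_int32_t ticks = (timeout == 0) ? RT_WAITING_FOREVER
                                      : rt_tick_from_millisecond(timeout);
    if (rt_sem_take(*sem, ticks) != RT_EOK)
        return SYS_ARCH_TIMEOUT;
    return 0;   /* a full port would return the elapsed waiting time in ms */
}

err_t sys_mbox_new(sys_mbox_t *mbox, int size)
{
    /* The mailbox carries message pointers, giving the stack a data channel. */
    *mbox = rt_mb_create("lwip_mb", size, RT_IPC_FLAG_FIFO);
    return (*mbox != RT_NULL) ? ERR_OK : ERR_MEM;
}

sys_thread_t sys_thread_new(const char *name, lwip_thread_fn thread,
                            void *arg, int stacksize, int prio)
{
    rt_thread_t t = rt_thread_create(name, thread, arg, stacksize, prio, 10);
    if (t != RT_NULL)
        rt_thread_startup(t);       /* created lwIP threads run immediately */
    return t;
}
```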
The initialization of the network interface layer mainly involves adding a netif to the lwIP stack. The transplantation configuration process is shown in Fig. 7:
1. Check whether the hardware network card has been initialized successfully; if not, run the network card initialization process first.
2. Configure the network interface's receive and send data-processing functions and write the two function pointers into the netif structure to provide an interface for the upper-layer protocol.
3. Configure MLD (multicast listener discovery) so that IPv6 can discover network devices on the LAN segment.
4. Configure DHCPv6 (dynamic host configuration protocol) to automatically configure IPv6 addresses for the device.
5. Set the various protocol flags, call netifapi_netif_add, and add the configured netif to the protocol stack to complete the configuration of the network interface layer.
The initialization of the protocol stack mainly creates two thread tasks, for data sending and receiving. A corresponding message mailbox is created for each thread, buffer space is provided for the data, and the threads and mailboxes run once creation is complete. At this point, lwIP has been transplanted onto the embedded system. In actual applications, upper-level user applications can directly call the related modules, such as the socket, HTTP and TFTP application modules. This system uses the HTTP module to implement an embedded web server; a hedged sketch of bringing up the stack, the IPv6 interface and the HTTP server is shown below.
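The start-up sequence might look like the following sketch. It is a generic lwIP example rather than the authors' code: ethernetif_init() is the conventional name for the port-specific PHY/MAC driver init and is assumed to be provided by the board support package, and only stateless IPv6 address autoconfiguration is shown.

```c
#include "lwip/tcpip.h"
#include "lwip/netif.h"
#include "lwip/apps/httpd.h"

extern err_t ethernetif_init(struct netif *netif);   /* assumed board-specific driver */

static struct netif station_netif;

static void network_init(void *arg)
{
    ip4_addr_t ip, mask, gw;
    IP4_ADDR(&ip, 0, 0, 0, 0);            /* no static IPv4 address is needed      */
    IP4_ADDR(&mask, 0, 0, 0, 0);
    IP4_ADDR(&gw, 0, 0, 0, 0);

    /* Add the interface and hand received frames to the tcpip thread. */
    netif_add(&station_netif, &ip, &mask, &gw, NULL,
              ethernetif_init, tcpip_input);
    netif_set_default(&station_netif);

    /* IPv6: derive a link-local address from the MAC and enable SLAAC. */
    netif_create_ip6_linklocal_address(&station_netif, 1);
    netif_set_ip6_autoconfig_enabled(&station_netif, 1);

    netif_set_up(&station_netif);

    /* Start the embedded web server that serves the monitoring pages. */
    httpd_init();
}

void start_ipv6_node(void)
{
    /* tcpip_init creates the lwIP tcpip thread and then calls our callback. */
    tcpip_init(network_init, NULL);
}
```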
Design and implementation of an Embedded Web Server. Introduction to embedded web server programs. Unlike a traditional sensor network topology, this system adopts an embedded web server network structure 33,34 . The network connection topology is shown in Fig. 8. All nodes are designed as individual web servers and do not require management by a central server; each sensor node can be accessed directly through a browser to obtain monitoring data and device status. This design distributes the traditional background-server tasks to each node, and a single node can directly collect, store and display data. With the rapid development and wide application of IPv6 network technology, each sensor node can obtain a fixed IPv6 address, realizing a true Internet of Things.
HTTP webpage design. In embedded systems, there are usually only simple real-time operating systems or direct bare metal programs with limited storage space. Therefore, embedded web design requires tailoring a large number of modules to achieve the desired functions.
The webpage of this system is designed with the HTML + CSS + JavaScript architecture. HTML (Hypertext Markup Language) is a markup language rather than a programming language; an HTML text is a webpage. CSS (cascading style sheets) statically decorates web pages and, in cooperation with scripting languages, dynamically formats web page elements, so different page styles can be realized through CSS. JavaScript enables the HTML page to dynamically request data, change the displayed text, and render the display style. Rich, powerful and stable web pages can be built with these three languages. The webpage of this system is designed with DreamWeaver software, and the specific design process is shown in Fig. 9. First, the HTML file for the web page is created on a client computer. The dedicated makefsdata tool then converts the HTML file into a C-language array file that the ARM firmware can use; the array is stored statically in flash or other external memory on the ARM. After receiving a web page connection request, the ARM processor reads the web file data and transmits them to the requesting browser through the HTTP protocol, and the browser automatically parses the HTML file data and displays the page. After the web page is embedded in the monitoring node, the HTTP protocol serves the web page data. Figure 10 shows the actual working process of the web server when the node device is in use. Once the lwIP protocol stack and the network card driver are initialized, the software system creates an HTTP listening service thread. Users can directly access the IPv6 address of the node through a browser in an IPv6 network environment to obtain the monitoring node's web page. The node uploads the static webpage data when it detects an HTTP connection. When the browser receives the HTML file, it automatically parses and displays it; at the same time, it executes the JavaScript program in the HTML document and may request further data from the node device as different functions require.
Data requests and display design. Embedded web pages usually use static arrays to store HTML documents. The data collected by this system need to be displayed dynamically on the web page, so a JavaScript program is embedded in the HTML document to request node data dynamically. The workflow of one browser connection to the node is shown in Fig. 11. The sensor data are sent in JSON (JavaScript Object Notation) format, which is currently a common format for communication and data exchange and is independent of programming languages. JSON data are represented as characters, which aids debugging during development, and a standard data exchange format also improves software compatibility. A hedged sketch of how the node can assemble such a reply is shown below; the reply format actually returned to the $.get("./getair.json", function (data)) request follows the sketch, and the value of each field changes with the actual sensor readings.
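On the node side, the JSON reply can be assembled as a simple formatted string before being handed to the HTTP layer. The sketch below is illustrative only: the sensor_readings_t structure and the http_send_json() hand-off are assumptions, not part of lwIP or the authors' code.

```c
#include <stdio.h>

/* Latest readings from the polling loop (hypothetical structure). */
typedef struct {
    int   pm25, pm10, no2, so2, aqi;
    float co, o3;
} sensor_readings_t;

/* Assumed hand-off to the embedded HTTP server's send path. */
extern void http_send_json(const char *body, int len);

/* Format the reply for ./getair.json in the structure shown below. */
void send_air_json(const sensor_readings_t *r)
{
    char body[256];
    int len = snprintf(body, sizeof(body),
        "{\"resultcode\":\"200\",\"reason\":\"SUCCESSED\","
        "\"result\":{\"PM25\":%d,\"PM10\":%d,\"CO\":%.1f,\"NO2\":%d,"
        "\"O3\":%.1f,\"SO2\":%d,\"AQI\":%d},\"error_code\":0}",
        r->pm25, r->pm10, r->co, r->no2, r->o3, r->so2, r->aqi);

    if (len > 0 && len < (int)sizeof(body))
        http_send_json(body, len);        /* drop the reply if it did not fit */
}
```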
{
  "resultcode": "200",   // reply code identifying the HTTP exchange
  "reason": "SUCCESSED", // protocol field indicating whether the request is normal
  "result": {            // data from each sensor
    "PM25": 39, "PM10": 87, "CO": 0.9, "NO2": 38, "O3": 0.4, "SO2": 50, "AQI": 70
  },
  "error_code": 0        // indicates whether an error exists
}
The command $.get("./getdevice.json", function (data)) requests a reply in a similar format containing device information, such as the node's IPv6 address. As shown in Fig. 11, users first access the embedded webpage of a node through the node's IPv6 address in the browser. When the node detects the HTTP connection, it reads the indexed static web page data directly from storage and sends them; after receiving the HTML file, the browser parses and displays it. To obtain and display the sensor data, the JavaScript code embedded in the HTML sends a fixed URL request to the device: the "$.get("./GetAir.json", function (data))" call requests sensor data, and after receiving the request the device returns the sensor data in JSON format, which the browser parses and displays. The browser then executes "$.get("./GetDevice.json", function (data))" to request the connection information of the device, so that the IPv6 addresses of the device node and the client computer can be seen on the web page, which is convenient for users to manage. Finally, the complete web page is displayed as shown in Fig. 12.
Testing and analysis
Analysis of the stability test. To verify the operational capability of the system in practical applications, two tests were conducted during this study: a test of the stability and power consumption of sensor data collection, and a test of the IPv6 network. The data shown in Table 3 were obtained from monitoring nodes at multiple sites over three consecutive months, from January to April, under unmanned maintenance conditions. The table shows that the data integrity rate of the equipment was essentially above 90%, which means the equipment never shut down completely during this period. The main reasons for the partial data loss are that uploads may be corrupted or lost in transit and that a device may behave abnormally under the influence of the external environment; owing to the watchdog and timed-reset strategy, however, a device can recover on its own. The results show that the system meets the requirements for long-term monitoring of the air environment.
The TDS (theoretical data size) indicates the amount of data the device should obtain at the normal acquisition frequency, and the ADS (actual data size) is the amount of data actually obtained. The data integrity rate is calculated as DIR = ADS/TDS.
Analysis of the system power consumption test. The power supply of the system combines solar and battery power. In the actual test, under low-power operation, the power supply fully met the needs of the monitoring system. According to the monitoring data, the device's battery voltage fluctuated within the range of 12-14.5 V throughout the year, indicating that the device never experienced an insufficient power supply.
Testing of the IPv6 network. The test covers IPv6 network initialization, automatic address acquisition, IPv6 socket communication and the HTTP web service. Compared with IPv4, one feature of IPv6 is stateless address autoconfiguration. To test the running status of IPv6 in the system, a function for outputting debugging information was added to the code; the real-time status information is output to a computer through the serial port and displayed by the serial-port program SSCOM. The test result is shown in Fig. 13: after the lwIP protocol stack is successfully initialized, the system sends address-configuration status information to the router, and the switch assigns the system an IPv6 address, according to the current LAN IP environment, that does not change after assignment. A remote webpage can therefore access the embedded server directly through this address and display the monitoring data. The test results also show that the delay from successful initialization of the protocol stack to reception of the IP address is less than 10 s, indicating that the response speed of the protocol stack in the embedded system meets the application requirements.
Using sockets is a common way to realize two-way communication between two applications on top of the TCP/IP protocol stack, and the socket layer is an important component of the stack. To test whether the IPv6 protocol stack in this system can support socket applications, a corresponding test module was designed, and an IPv6 TCP monitoring server was implemented on the computer side with Linux programming. In the actual test, the system acts as a TCP client and connects to the monitoring server on the computer; after a successful connection, the server sends test data to the system, and the system client returns feedback to the server after receiving the data. The test output is shown in Fig. 14 and shows that the system has a working, stable IPv6 TCP two-way communication function. A minimal sketch of such a Linux-side monitoring server is given below.
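The Linux-side monitoring server can be written with the standard POSIX socket API, as in the sketch below. The port number is an assumption, and error handling is omitted for brevity.

```c
/* Minimal Linux-side IPv6 TCP monitoring server, in the spirit of the test above. */
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int srv = socket(AF_INET6, SOCK_STREAM, 0);

    struct sockaddr_in6 addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin6_family = AF_INET6;
    addr.sin6_addr   = in6addr_any;       /* listen on all IPv6 addresses */
    addr.sin6_port   = htons(5000);       /* assumed test port            */

    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 1);
    printf("waiting for the monitoring node...\n");

    int cli = accept(srv, NULL, NULL);

    /* Send test data to the node, then print whatever it echoes back. */
    const char *probe = "hello-node";
    send(cli, probe, strlen(probe), 0);

    char buf[128];
    ssize_t n = recv(cli, buf, sizeof(buf) - 1, 0);
    if (n > 0) {
        buf[n] = '\0';
        printf("reply from node: %s\n", buf);
    }

    close(cli);
    close(srv);
    return 0;
}
```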
Finally, the embedded webpage of the system was tested by accessing the IPv6 address of the system directly through the browser. With the help of the developer tools of the Google Chrome browser, the processes of requesting and receiving data are displayed in detail. The webpage obtained in the test is shown in Fig. 15, from which it can be seen that the browser directly accessed the IPv6 address of the node and retrieved the monitoring page.
Conclusion and prospects
Taking the automatic monitoring of atmospheric pollutant gases and particulate matter concentrations as the research background, this paper studies the application of IPv6 technology to the online visualization of atmospheric data in a monitored area and establishes a complete environmental quality monitoring system. According to the system's application requirements, the following work has been studied and completed. Based on a 32-bit ARM processor, the system's hardware was designed for the working environment and application of the sensor equipment, achieving miniaturization of the equipment and remote, real-time, online automatic environmental monitoring. The collection and storage of the concentrations of the SO 2 , NO 2 , CO and O 3 gases and of PM 2.5 and PM 10 particulate matter through the sensors constitute the sensing components of the sensor network. A large-capacity SD card provides storage space for air pollution data and equipment status information, and the corresponding storage scheme gives each node data self-management capability. By transplanting and simplifying the IPv6 protocol stack, the system gains network application functions such as sockets and HTTP and can access the IPv6 network from the embedded system. Assigning a unique IPv6 address to each sensor node forms an efficient node management scheme. With the help of the large address space and strong security of IPv6, an embedded web page for the atmospheric environment monitoring system was designed, and the data request and display functions were realized through HTML documents and JavaScript programming. Users can directly access the collected data and device information of sensor nodes through the IPv6 address of the device. The system adopts a solar power management module for its energy supply, and a watchdog prevents the software system from crashing. Corresponding low-power strategies were designed according to the actual working environment and functional requirements; the processor is hibernated under different conditions to realize low-power management of the system. The verification tests show that the device can support continuous monitoring tasks under IPv6.
Much work remains to realize the Internet of Things, such as the development of a universal interface device that can easily bring devices running the IPv4 protocol directly into the IPv6 environment.
Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Relative Availability of Iron in Mined Humic Substances for Weanling Pigs *
Humic substances include several biologically active and inactive compounds that are commonly used for improving soil fertility. Use of humic substances in swine diets is a novel concept. Humic substances contain 8,700 mg/kg of iron, but its bioavailability is unknown. This study was conducted to test the bioavailability of iron in humic substances for nursery pigs. One hundred twenty-five pigs (Newsham, Colorado Springs, CO) were not given supplemental iron while nursing for 21 d. Pigs were weaned on d 21 and allotted to one of five treatments (four control treatments with different levels of supplemental iron: 0, 30, 70 and 88 mg/kg from FeSO4, and one treatment with 70 mg/kg iron from humic substances). Pigs were fed the diets ad libitum for 5 wk with free access to water. Body weight and feed intake were measured weekly. Blood samples were taken from pigs on d 28 to measure the number of red blood cells and the hemoglobin concentration. Pigs fed a diet with the humic substances grew faster (p<0.05) during the first week postweaning, but performance did not differ over the entire 5-wk period. Feed intake and gain/feed were the same among treatments. The slope-ratio technique was used to estimate relative iron bioavailability. The concentration of blood hemoglobin did not respond to dietary iron levels in this model. However, the number of red blood cells (10^6/μl) was modeled as 4.438 + 0.017 × 'iron (mg/kg) from FeSO4' + 0.012 × 'iron (mg/kg) from the humic substances'. Based on the comparison of the slopes (0.012 from humic substances and 0.017 from FeSO4), iron in humic substances was 71% as available as the iron in FeSO4. The slopes for dietary iron from FeSO4 and from the humic substances did not differ (p>0.05). Humic substances can replace FeSO4 as an alternative iron source for pigs at 71% relative bioavailability. (Asian-Aust. J. Anim. Sci. 2004. Vol 17, No. 9 : 1266-1270)
INTRODUCTION
Humic substances are defined as 'a series of relatively high-molecular-weight, yellow to black colored substances formed by secondary synthesis reactions' (Stevenson, 1994). Humic substances can include most of the organic matter in most soils (Goh and Reid, 1975) but specifically include humic acids, fulvic acids, and humin as major constituents, as well as several minerals such as iron, manganese, copper, and zinc (Aiken et al., 1985). Among the minerals in humic substances, iron is most abundant.
Use of humic substances in pig diets is a rather novel approach. Previously, humic substances have been applied to reduce ammonia emission from livestock manure, either by dietary supplementation or by application to the manure (Ndayegamiye and Cote, 1989; Shi et al., 2001). However, dietary supplementation with humic substances in pig diets has not been reported. Organic acids and minerals in humic substances may benefit animal performance even though the actual mechanism is not yet understood. This study was conducted as a first effort to characterize humic substances as a feed supplement for use in pig diets. The objective of this study was to measure the bioavailability of iron in humic substances relative to iron sulfate.
Humic substances
Humic substances were obtained from Humatech Inc. (Mesa, Arizona); the registered commercial product name was Promax. Humic substances are naturally occurring, mined products containing trace minerals (including iron, manganese, zinc and copper), organic acids (including fulvic and humic acids) and other organic compounds (including humin). Promax contained 8,700 mg/kg iron as assayed by atomic absorption spectrophotometry.
Design, animal and diet
One hundred twenty-five pigs were used to determine the bioavailability of iron in humic substances. None of the pigs were given supplemental iron while nursing for 21 d. Pigs were weaned at 21 d of age and allotted to one of five dietary treatments including four standards (S1, S2, S3 and S4) and a humic substances treatment (HS). Each treatment had five replicates, and five pigs were in each pen replicate.
The basal diet contained corn and fat as major energy sources and dried skim milk, plasma protein, and soybean meal as major protein sources (Table 1). The basal diet contained 27 mg/kg total iron, or 15 mg/kg bioavailable iron based on NRC (1998) values. Graded levels of iron sulfate were added to the basal diet, resulting in four diets providing 0, 30, 70 and 88 mg/kg of additional iron, respectively. Humic substances were supplemented at 0.8% to provide 70 mg/kg of additional total iron to the basal diet.
The HS diet was designed to provide approximately 90% of the iron requirement of nursery pigs (NRC, 1998).
Pigs had free access to their experimental diets and water during the 35-d experimental period. Pig weights and feed intakes were measured weekly. Three pigs from each pen were selected randomly at 28 d, and blood samples were obtained over sodium heparin from the vena cava to measure the number of red blood cells and hemoglobin concentrations. This study was approved by the Texas Tech University Animal Care and Use Committee (# 01124).
Chemical analysis
Iron content of the diets was measured by atomic absorption spectrophotometry as described by Lee and Clydesdale (1979) and Acda et al. (2002). The number of red blood cells, hemoglobin content, the number of white blood cells, and packed cell volume were measured as described below.
The Unopette microcollection system (Becton Dickinson & Company, Franklin Lakes, NJ) containing 1.98 ml of 3% acetic acid was used for total leukocyte counts. A 20 µl sample of whole blood was withdrawn using a capillary pipette (dilution ratio 1:100), inserted into the Unopette and diluted. Unopettes were left for 10 min, and then 20 µl was drawn into the capillary tube and loaded into two wells of a Bright-Line hemacytometer (Hauser Scientific, Horsham, PA). Each well was divided into a grid of nine 1-mm² squares. The total number of cells in the nine squares was counted using a light microscope (10×). The two counts were averaged if they were within 5% of each other to determine the total leukocyte count (10³/µl) (Howard and Matsumoto, 1977).
The Unopette microcollection system (Becton Dickinson & Company) containing 1.99 ml of diluent, a mixture of sodium azide and sodium chloride in HPLC-grade water, was used for erythrocyte determination. A 10 µl sample of whole blood was drawn using a capillary pipette (dilution ratio 1:200), inserted into the Unopette and diluted. Samples were allowed to stand for 10 min, and then 10 µl of the sample was drawn into the capillary tube and loaded into two wells of a Bright-Line hemacytometer (Hauser Scientific, Horsham, PA). Using a light microscope (430×), the erythrocytes were counted in the middle 1-mm grid square, which was divided into 5×5 subsquares; the middle and four corner subsquares were counted. The average of the two wells (within 5% of each other) was taken and multiplied by 10,000 to give cells/mm³ (Howard and Matsumoto, 1977).
A 20 µl sample of whole blood was drawn into a 40-mm StatSpin microhematocrit tube (StatSpin Technologies, Norwood, MA) to determine packed cell volume (PCV). The capillary tube was sealed, and all tubes were placed into a 12-position hematocrit rotor with a CritSpin digital reader (S120, Norwood, MA) to determine PCV.
Hemoglobin contents were determined using Drabkin's method (Balasubramaniam and Malathi, 1992). Drabkin's reagent (Sigma-Aldrich, St. Louis, MO) was mixed with 1,000 ml of HPLC-grade water and 0.5 ml of 30% Brij-35 solution (Sigma-Aldrich). A 50.0 ml aliquot of this solution was mixed with one vial of hemoglobin standard preparation (Sigma-Aldrich), corresponding to 18 g hemoglobin per dL of whole blood. Using a Microtest™ U-bottom tissue culture plate (Becton Dickinson & Company, Franklin Lakes, NJ), a standard curve was obtained, and 20 µl of each blood sample was plated with the Drabkin's reagent. The plate was analyzed using a BIO-RAD Model 2550 EIA reader (Hercules, CA).
Statistical analysis
Data were analyzed as a completely randomized design with the pen as the experimental unit. The statistical analysis was performed with the General Linear Models procedure (PROC GLM) in SAS/STAT software (SAS Inst. Inc., Cary, NC). Least-squares means, probability of differences, and standard errors were used to evaluate differences among the treatment groups. Data from one pen of the HS group were excluded because of known contamination in the feeder; thus, there were five observations for each treatment except HS, which had four. The General Linear Models procedure (PROC GLM) was also programmed to perform the statistical analysis used for relative bioavailability studies as described by Littell et al. (1997) and Kim and Easter (2001).
Regression equations were obtained relating the number of red blood cells to additional iron intake from the different iron sources. The slopes of these regressions were compared to obtain the relative bioavailability of iron in HS compared with iron sulfate; the calculation is sketched below.
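A compact way to express the slope-ratio calculation used here (the symbols are generic, not the authors' notation; the numerical values are those reported in the results) is:

```latex
\[
\text{RBC} \;=\; \beta_0 \;+\; \beta_{\mathrm{FeSO_4}}\, x_{\mathrm{FeSO_4}} \;+\; \beta_{\mathrm{HS}}\, x_{\mathrm{HS}},
\qquad
\text{RBA}_{\mathrm{HS}} \;=\; \frac{\beta_{\mathrm{HS}}}{\beta_{\mathrm{FeSO_4}}}
\;=\; \frac{0.012}{0.017} \;\approx\; 0.71,
\]
where $x_{\mathrm{FeSO_4}}$ and $x_{\mathrm{HS}}$ are the supplemental iron levels (mg/kg) from iron sulfate and from humic substances, respectively, and $\beta_0$ is the common intercept.
```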
Initial weights of pigs were the same among the treatments (Table 3). Final weights of pigs were statistically similar among the treatments. Pigs fed HS had a numerically 9.0% higher weight gain compared with pigs fed the S1 diet. Average daily gain (ADG) differed among the treatments during wk 1: the HS group had higher (p<0.05) ADG than pigs fed S1 and S4, whereas ADG was similar for pigs fed the S2 and S3 diets. Average daily gain of the pigs during wk 2 to 5 did not differ (p>0.05) among the treatments. Overall, ADG of the pigs was the same (p>0.05) among treatments, even though pigs in the HS group had a numerically 16% greater ADG than pigs in the S1 group.
Average daily feed intake (ADFI) of the HS group was higher (p<0.05) than that of the S1 and S4 groups during wk 1 (Table 3). The S2 group had higher (p<0.05) ADFI than S1 during wk 2 and wk 4. There were no differences (p>0.05) in ADFI among the treatments during wk 3 and wk 5. Overall, ADFI was the same (p>0.05) among the treatments, even though the HS group had a numerically 17% greater ADFI than the S1 group. Gain:feed ratio was the same (p>0.05) among the treatments during each week as well as during the entire experimental period. Hemoglobin concentration was the same (p>0.05) among the treatments; blood hemoglobin concentrations did not respond to the dietary iron supplementation levels. The numbers of red blood cells (RBC) per µl of blood from the S4 and HS groups were higher (p<0.05) than that of the S1 group (Table 4). The number of RBC increased (p<0.01) linearly as dietary iron supplementation increased. Packed cell volume and the number of white blood cells were the same (p>0.05) among the treatments.
Pigs were fed diets with varying levels of supplemental dietary iron for 5 weeks, but hemoglobin concentrations did not respond to dietary iron content. Instead, the number of RBC responded sensitively to dietary iron content. The status of functional iron in an animal can be determined by measuring the number of red blood cells, hemoglobin levels, and the size and shape of red blood cells (Cavill, 2002; Parvanta et al., 2003). In this study, the best indicator was the number of RBC, whereas the blood hemoglobin content did not respond to dietary iron levels. The number of RBC and the hemoglobin content are normally positively correlated (Cavill, 2002; Parvanta et al., 2003), but it is not known why this correlation was weak in this study.
Bioavailability of iron in humic substances relative to iron sulfate was calculated by measuring the change in the number of RBC as iron intake changed (Figure 1). The numbers of RBC from the pigs that received varying amounts of supplemental iron were plotted and regressions were obtained. The change in RBC number for pigs that received iron sulfate as the source of additional iron was modeled as 4.4386 + 0.017 × 'iron from FeSO4' (R² = 0.87, p<0.05). The change in RBC number for pigs that received HS as the source of additional iron was modeled as 4.438 + 0.012 × 'iron from HS' (R² = 0.87, p<0.05). The slopes from these regressions were 0.017 and 0.012 for iron sulfate and HS, respectively. The slope of HS did not differ statistically from the slope of FeSO4 (p>0.05). From the slopes, the relative bioavailability (0.012/0.017) of the iron in HS was 71% of that of the iron in iron sulfate. Relative bioavailabilities of other iron sources have been compared to iron sulfate, including iron methionine (68.3%; Lewis et al., 1996), iron in spray-dried blood cells (24.0%; Anderson and Easter, 1999), and iron in defluorinated phosphate (58.5%; Kornegay, 1972). Harmon et al. (1969) showed that ferrous carbonate was an effective dietary iron supplement. In this study, humic substances with 8,700 mg/kg iron and 71% relative bioavailability can be used as a source of iron for pig diets.
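The slope-ratio calculation itself is straightforward; the sketch below refits the two regressions from hypothetical pen-level values chosen to match the reported slopes, so only the final 71% mirrors the study:

import numpy as np

def fitted_slope(added_iron, rbc):
    """Least-squares slope of RBC count (10^6/µl) on added iron intake."""
    return np.polyfit(added_iron, rbc, 1)[0]

fe_so4_iron = np.array([0.0, 30.0, 70.0, 88.0])      # mg/kg added iron from FeSO4
fe_so4_rbc  = np.array([4.44, 4.95, 5.63, 5.93])     # ~ 4.44 + 0.017 x (illustrative)
hs_iron     = np.array([0.0, 70.0])                  # mg/kg added iron from HS
hs_rbc      = np.array([4.44, 5.28])                 # ~ 4.44 + 0.012 x (illustrative)

rbv = fitted_slope(hs_iron, hs_rbc) / fitted_slope(fe_so4_iron, fe_so4_rbc) * 100
print(f"Relative bioavailability of iron in HS: {rbv:.0f}%")   # ~71%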
Figure 1.
Standard curve for weanling pigs fed iron sulfate based on individual pen feed intake. The slope of the regression equation for pens fed humic substances (HS) indicates that the iron in HS was 71% as available as the iron in iron sulfate (FeSO4), based on the relative slopes of the following equations (RBC = 4.438 + 0.017 × Fe from FeSO4 and RBC = 4.438 + 0.012 × Fe from HS); the relative bioavailability of iron in HS (%) was obtained as (slope of HS/slope of FeSO4) × 100.
Table 1.
Composition of the basal diet.
The vitamin premix provided the following per kilogram of complete diet: 4,433 IU vitamin A as vitamin A acetate, 484 IU vitamin D3, 36.3 IU vitamin E, 1.6 IU vitamin K as menadione sodium bisulfite, 32.2 µg vitamin B12, 8.1 mg riboflavin, 25.8 mg D-pantothenic acid as calcium pantothenate, 32.2 mg niacin and 972 mg choline as choline chloride.
d The trace-mineral salt provided the following per kilogram of complete diet: 110 mg manganese as manganous oxide, 55.5 mg zinc as zinc sulfate, 11.2 mg copper as copper sulfate, 608 mg magnesium as magnesium oxide, 144 mg calcium as calcium propionate and 0.24 mg selenium as sodium selenite.
e Basal diets contained 0.8% of iron supplement; the compositions of the iron supplements for each treatment are shown in Table 2.
Table 2 .
Composition of iron supplement
Table 3.
Growth performance of nursery pigs (3 to 8 wk of age).
a Sample size equals 5 pens per treatment, except for the HS (humic substances; Promax) treatment, which had 4 pens.
b Iron levels are analyzed values. Calculated values were 0, 30, 70, 88 and 70 mg/kg, respectively.
cd Means with a different superscript differ (p<0.05).
Table 4.
Measures of hematology of nursery pigs at 4 wk postweaning.
a Sample size equals 5 pens per treatment, except for the Promax treatment, which had 4 pens.
b Iron levels are analyzed values. Calculated values were 0, 30, 70, 88 and 70 mg/kg, respectively.
c Linear effect of dietary iron (p<0.01).
cd Means with a different superscript differ (p<0.05).
|
2018-12-05T03:19:57.624Z
|
2004-01-01T00:00:00.000
|
{
"year": 2004,
"sha1": "bb2ffd90e6587825a6e6ce415de5bd78d8c04020",
"oa_license": "CCBY",
"oa_url": "https://www.animbiosci.org/upload/pdf/17_207.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "bb2ffd90e6587825a6e6ce415de5bd78d8c04020",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
}
|
116178170
|
pes2o/s2orc
|
v3-fos-license
|
Analysis of operational performance of a mechanical ventilation cooling system with latent thermal energy storage
Latent Thermal Energy Storage (LTES) is a promising solution to reduce cooling energy consumption in buildings. Laboratory and computational studies have demonstrated its capabilities while commercial passive and active systems are available. This paper presents data and analysis of the performance of an active LTES ventilation system in two case-studies, a seminar room and an open plan office in the UK. Analysis using environmental data from the system's control as well as additional space monitoring indicates that (a) internal temperature is maintained within adaptive thermal comfort limits, (b) acceptable Indoor Air Quality is also maintained (using metabolic CO2 as indicator) and (c) the energy cost of operating the system is small. Keywords: active LTES; operational performance.
Introduction
Energy storage has been a very active area of research in recent years, as it provides a sustainable solution to energy demand fluctuations and increases energy efficiency. Different energy storage methods can be used, such as mechanical energy storage (gravitational energy, flywheels), electrical storage (e.g. batteries), thermal storage (sensible, latent) and thermochemical heat storage [1]. Thermal energy storage (TES) is particularly suited to buildings because a high percentage of their energy demand relates to heating and cooling needs. Sensible TES utilizes the heat capacity of materials, while latent TES uses heat exchange via the phase change of materials, usually between solid and liquid for building applications. Latent thermal energy storage (LTES) can provide more energy per volume than a sensible thermal storage system, making LTES a promising solution for buildings, either integrated into the building envelope (passive LTES) or into ventilation systems (active LTES), to reduce cooling demand [2] or heating demand [3].
Active LTES integrated in mechanical ventilation systems has received attention during the last two decades. A room ventilation system incorporating heat pipes embedded in a PCM thermal battery was tested experimentally for applications in the UK 20 years ago [4,5]. Heat transfer rates of up to 200 W were measured under simulated UK summer conditions, comparing the system favourably to conventional air conditioning and to other technologies such as cooled beams. Since then, many investigations through experiments and simulations have followed. A recent review [6] critically discusses experimental studies of PCM applications in buildings, dividing them into free cooling passive and active methods, active and passive heating methods and hybrid applications. It describes developments in ventilation and air-conditioning systems based on PCMs as well as nano-enhanced PCMs. The extensive literature review has revealed that active LTES incorporated within the ventilation system can overcome the heat exchange limitations of passive systems because of the increased heat transfer by convection, and that LTES is an appropriate solution to increase the energy efficiency of cooling systems in buildings. A review [7] focusing on cooling LTES applications summarises experimental results of active LTES and discusses the importance of PCM selection according to cooling needs due to internal heat gains and climatic conditions. The PCM melting temperature is one of the most influential parameters for the success of the application. Such recent reviews have also highlighted that limited analysis has been published from operational buildings with commercially installed LTES systems.
Nomenclature
y_i: measured data
ŷ_i: simulated data
N: sample size
Y_s: sample mean of the measured data
T1: cool external intake air temperature
T2: recirculated temperature
T5: temperature before the thermal batteries
T7: temperature after the thermal batteries
This paper attempts to fill part of this gap by presenting field measurements from two operational case-study rooms ventilated and cooled through a commercially available mechanical ventilation system incorporating an active LTES. Section 2 describes the ventilation unit, the two case-studies and how the system works. Section 3 presents the operational performance of the system in terms of thermal comfort, indoor air quality and energy use. Section 4 presents validated thermal and CFD models which were used to carry out parametric analysis for performance improvements. Finally, Section 5 includes the conclusions.
LTES ventilation unit
There are few technologies available on the market for LTES free cooling and heating; one is Cool-Phase® by Monodraught Ltd. The unit is available in different cooling capacities (8 and 10 kW) and sizes depending on the model: 3995 mm to 5805 mm width, 966 mm depth and 400 mm height. The PCM is encapsulated in an aluminium panel available commercially [8]. Each panel holds 2 kg of PCM able to provide 88 Wh (317 kJ); each module of the LTES studied has 18 panels. Modules are put together depending on the required capacity; for the 10 kW model, the unit consists of 18 panels × 6 modules (3 modules per side), which equals 9.5 kWh (34.2 MJ); the 8 kW model consists of 18 panels × 4 modules (2 modules per side), which equals 6.37 kWh (22.8 MJ). An electronic system controls the damper and directs the airflow through the LTES or bypasses it through an EPP (Expanded Polypropylene) duct. The fan provides a maximum of 260 l/s during cooling mode and 300 l/s during charging mode. The PCM solidifying temperature used in the LTES is limited by the temperature available to charge the material (night outside air temperature). The selection is also based on the cooling demand (lower melting point for high demand and higher melting point for lower demand) [9]. Because of this, the salt-hydrate PCM SP 21E by Rubitherm (melting: 22-23 °C; solidifying: 21-19 °C; heat capacity: 170 kJ/kg over a temperature range of 13-28 °C) suits the studied system and its applications. Salt hydrates work by forming and breaking the salt-water reaction (hydrate-dehydrate). They have high latent heat per unit volume, high conductivity (double that of paraffin) and little volume change during melting. However, salts have a density higher than water and settle at the bottom of the container, making the freezing process more complicated [1].
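As a quick check of the quoted capacities, the panel and module figures above combine as follows (my own arithmetic, not taken from the paper):

WH_PER_PANEL = 88                 # Wh of latent storage per 2 kg PCM panel
PANELS_PER_MODULE = 18

def unit_capacity_kwh(n_modules):
    return PANELS_PER_MODULE * n_modules * WH_PER_PANEL / 1000.0

print(unit_capacity_kwh(6))       # 10 kW model, 6 modules -> ~9.5 kWh
print(unit_capacity_kwh(4))       # 8 kW model, 4 modules -> ~6.3 kWh (6.37 kWh quoted)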
The studied system is essentially a demand control ventilation and cooling system, controlled by temperature, relative humidity and CO2 levels inside the conditioned space. A summary of the operation modes is presented in Table 1. Fig. 2 presents a schematic of the system indicating the location of the sensors used to control its operation. Temperature is controlled by either air drawn from outside or recirculated from the room; depending on the temperature and relative humidity in the conditioned space, external air is used directly or mixed with recirculated air by-passing the LTES. If cooling is needed the air is directed to the LTES. If the room exceeds the metabolic CO2 concentration limit, fresh air from outside is introduced.
Table 1
Brief description of operation modes.
Direct outside air ventilation
Used when the outside temperature is cooler than inside, the air is supplied into the room bypassing the LTES until it reaches a set point temperature.
Outside ventilation and cooling
Used when the outside temperature is lower than inside but is not low enough to cool the space; the air crosses the LTES before entering the room.
Recirculation and cooling
When the temperature outside is higher than inside, recirculating air passes over the LTES before entering the room.
Summer Charging
During unoccupied hours, the fan supplies cold outside air to charge the LTES and release the built-up heat. When the LTES is fully charged, the system turns off automatically.
Heat recovery cycle
In winter, when the room is unoccupied or warm, the air is re-circulated through the LTES to charge it and then use it to reduce the heating system load.
CO2 control
When the CO2 concentration inside is higher than a specified set point, outside air is supplied.
Humidity control
When the room relative humidity is lower or higher than a preset set point, the system adjusts the outside air supply until the set-point range is achieved.
For all modes during occupied hours, a minimum outside air volume flow is provided to ensure a minimum air flow rate according to regulations.
The control system has default set-points but the user is able to adapt them according to needs. Table 2 presents the default set-points; air temperature varies according to the season and airflow rate according to metabolic CO2 and air temperature, while relative humidity is maintained within a range of 30-70%.
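A simplified reading of these modes as a decision rule is sketched below; this is my own interpretation of Table 1 for illustration, not the manufacturer's control code, and the set points are placeholders:

def select_mode(t_room, t_out, t_set, co2_ppm, occupied, co2_limit=900.0):
    """Pick an operating mode from the sensed conditions (simplified)."""
    if not occupied:
        return "night charge" if t_out < t_room else "off"
    if co2_ppm > co2_limit:
        return "direct outside air ventilation"       # CO2 control takes priority
    if t_room <= t_set:
        return "minimum background ventilation"
    if t_out < t_set:
        return "direct outside air ventilation"       # free cooling, LTES bypassed
    if t_out < t_room:
        return "outside ventilation and cooling"      # outside air through the LTES
    return "recirculation and cooling"                # recirculated air through the LTES

print(select_mode(t_room=25.0, t_out=28.0, t_set=23.0, co2_ppm=700.0, occupied=True))
# -> recirculation and cooling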
Description of case-studies
The main case-study with the system installed is a seminar room at a university campus in West England. Similar units have been installed in many rooms (typical classrooms and offices) of the building, but a seminar room was chosen because of its use (computer laboratory) with higher internal heat gains. In addition, an open plan office building is presented as a second case-study. This second case-study shows the system's performance (based only on the system's monitoring data) to indicate a range of applications.
The climate in both locations (UK, Bristol and Cambridge) is temperate maritime with 2684/2024 Heating Degree Days and 196/319 Cooling Degree Days (base 15.5 °C) [10], indicating low cooling requirements due to external conditions, so the cooling load is mainly determined by internal heat gains.
The main case-study was renovated into a seminar room by joining three pre-existing rooms. The existence of a plenum favoured the installation of the suspended-ceiling model of the system. The refurbished seminar room floor plan can be seen in Fig. 3, where the positions of the space monitoring sensors (used for the purpose of this study) are shown. The room has a floor area of 117 m² and includes 29 desktop computers, a peak occupancy of 29 students, and artificial lighting comprising 24 luminaires each equipped with one 48 W lamp. The total maximum internal heat gain in the room is 60 W/m². The room has one external wall facing west with a U-value of 0.56 W/m²K, of which 23% is glazing (overall U-value 1.82 W/m²K) with internal blinds. Ventilation and cooling is provided via the 10 kW LTES unit, which is positioned in the middle of the room above the suspended ceiling. Heating is provided through perimeter hot water radiators and windows are openable. The second case-study is a ground floor open plan office with a floor area of 535 m², housing 50 office workers with maximum internal heat gains of 45 W/m². Five 8 kW units (two masters and 3 slaves) have been installed and can be seen in Fig. 4. The master units are directly connected to the control sensors while the slaves follow the operation of the master unit. The open plan office is one storey with external walls facing north, east and south and a pitched roof with a false ceiling. Wall U-value is 0.46 W/m²K, floor 0.75 W/m²K, roof 0.187 W/m²K and window 3.3 W/m²K with internal blinds. The area of windows is 30% of the external wall area. Heating is provided through perimeter hot water radiators and windows are openable.
Description of system operation
Indicative operation of the LTES units in the two case-study spaces during summer is shown in Figs. 5 and 6. The control system is very similar in both cases, apart from differences in the times of 'day mode' due to the different use of the spaces. A description of the operation modes is presented below.
Operation Mode 2: Night Purge − Midnight to 01:00 − the fan runs at the highest speed setting for 1 h with the aim of flushing the room of stale air. Fig. 5 shows that all temperatures decrease within this period, including the internal space temperature, TUI. It should also be noted that T1 (external intake duct) is the temperature of the sensor placed at the face of the intake duct. This sensor can be influenced by infiltration from the room if the plenum is not well sealed. This is what happened in this case, and it highlights the importance of sensor position for optimum operation of the system. In the open plan office (Fig. 6) the external intake duct sensor is better positioned.
Operation mode 3: Summer Night Charge-01:00 to 07:00-During this period, the inlet and outlet temperatures of the LTES (T5 and T7, respectively) can be seen to decrease indicating that the LTES is being 'charged' as the external intake air is being passed through them.
Operation mode 4: System Switched Off − 07:00 to 08:00 − The system is turned off for 1 h in the morning, before the occupancy schedule begins. A crucial observation, however, is that the internal space temperature (TUI) rises during this period, which effectively negates some of the work done to cool the space during the night. The internal space temperature at 7:00 was well within the thermal comfort range and there was no risk of overcooling. A similar increase of internal space temperature is observed in the open plan office (Fig. 6), in this case by more than 1 °C, and it may influence the timing of peak internal temperatures later in the day.
In the open plan office (Fig. 6), the internal space temperature is above the set-point (23 °C) at 8:00, so the cooling mode is initiated. The outside air temperature is below the set-point, so recirculated air by-passes the LTES until about 10:40, when the internal temperature reaches the set-point. During this period, the T5 and T7 readings are misleading, as there is no air flow through the route of these sensors, and T1 is effectively the supply air temperature. At 10:40 the air is directed to the LTES and the inlet air (T7) is cooled by the melting PCM; this is shown by the sudden rise of T5 and drop of T7. The difference between T5 and T7 reaches its maximum at 17:15, with a 7.2 °C difference. The internal space temperature is maintained below the adaptive thermal comfort upper limit, calculated as 27.5 °C for this day (see Fig. 8).
Operation mode 4: System Switched Off − 21:00 to 00:00 (seminar room) 18:00-00:00 (open plan office) -At 21:00 or 18:00 the conditioned period in the space ends, the system switches off and all temperatures converge to the same value. The system remains switched off until midnight where the process starts again.
Method
As mentioned before, data from the unit's control system were available as well as additional space monitoring inside the seminar room. These are used for the performance evaluation. Monitoring of room temperature and relative humidity was carried out within the seminar room using 8 HOBO loggers measuring for one year at 5 min intervals. The positions of the sensors are shown in Fig. 3. Sensors H1, H2, H3 and H4 were installed at 0.70 m from the floor, H5 and H6 at 1.80 m, H7 at the same level as the system's wall-mounted user control (1.5 m), and H8 was placed close to the exhaust grille (located on the ceiling). In addition, four iButton loggers were placed in the inlet diffuser, measuring air temperature at 5 min intervals. CO2 measurements were taken using two Telaire sensors on two days (25-26/11/2015) to analyse the CO2 distribution and compare with the system data. Sensors' specifications are presented in Table 3.
Thermal comfort analysis -summer
The adaptive thermal comfort approach is used for the evaluation during the cooling season [11], as this is the current guideline for school buildings in the UK [12] and is also followed for non-AC office buildings [13]. The upper and lower limits of adaptive thermal comfort are based on Category II (T_min = T − 3 and T_max = T + 3), where T = 0.33 T_rm + 18.8 and T_rm is the running mean external temperature. Fig. 7 shows the results during weekdays (8:00 to 21:00) for the seminar room for two summers; temperature and relative humidity are the averages of all sensors at 0.7 m. Relative humidity is within a good range, between 30% (apart from 3 h on one day in May) and 70% (apart from one day in August). Temperature did not exceed the upper thermal limit. In general, the low solar gains, because of the small area of windows and the ground floor position of the room, allow the system to maintain comfortable conditions during summer periods with full occupancy, for example during examination times in mid May and mid August 2016. However, some overcooling occurs for some hours, indicating that night purging might need more detailed control than relying on a timer; this is discussed in more detail in Section 4. Comparison of temperatures at different heights has shown no deviation from thermal comfort ranges [14]. Fig. 8 shows the system space temperature in the open plan office for one summer season. There is a gap due to lost data. The system shows very good performance with regard to the calculated thermal comfort, with very few exceedances of the thermal comfort limits: during one week in July (upper limit) and in May (lower limit). The exceedance in May is due to low night temperatures: during charging at night, the 18 °C night cooling limit is reached very quickly, so when the system is turned off at 7:00 the increase in temperature is not enough to reach the lower limit of thermal comfort when the conditioned period begins. This behaviour of the control system is representative of the trade-off between sufficiently charging the LTES and cooling the space during the night in preparation for optimum performance on a hot day, but causing the space to be too cool for thermal comfort if the morning (and day) turns out to be unusually cold for a summer period. This is a known issue, and guidelines [13] highlight that the lower limit of thermal comfort can sometimes be misleading for cold mornings in summer periods. As such, this characteristic of the system should not be considered a major performance-defining downfall, but it is still addressed within the investigation.
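The comfort band used in Figs. 7 and 8 can be reproduced directly from the running mean external temperature. A minimal sketch is given below; the exponentially weighted running mean with alpha = 0.8 is a common convention and is an assumption of mine, as are the example temperatures:

def running_mean_temperature(daily_means, alpha=0.8):
    """Exponentially weighted running mean of past daily outdoor temperatures."""
    t_rm = daily_means[0]
    for t in daily_means[1:]:
        t_rm = (1.0 - alpha) * t + alpha * t_rm
    return t_rm

def adaptive_limits_cat2(t_rm):
    t_comf = 0.33 * t_rm + 18.8
    return t_comf - 3.0, t_comf + 3.0      # Category II lower and upper limits

t_rm = running_mean_temperature([18.0, 19.5, 21.0, 22.0, 23.5, 24.0, 25.0])
low, high = adaptive_limits_cat2(t_rm)
print(f"T_rm = {t_rm:.1f} degC, comfort band {low:.1f}-{high:.1f} degC")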
Indoor air quality during winter based on metabolic CO 2
During winter, thermal comfort is provided by a separate heating system (perimeter radiators) but is controlled by the LTES central unit. Table 4 presents the CO2 levels monitored by the system during the occupied hours of 2014 and 2015 in the seminar room. Average CO2 is below the limit of 1000 ppm in every month, and 1500 ppm is exceeded for more than 20 min [15] only once (29 min in 2014), which shows good performance in terms of IAQ. Additional CO2 monitoring was carried out in the seminar room space. Fig. 9 shows the CO2 distribution during two days (25-26/11/2015). It can be seen that the values were always below 1500 ppm, with the exception of one minute on 25/11/2015 when the CO2 concentration reached 1923 ppm. Because this value was not observed on the second sensor, it is possible that a student blew directly on the sensor or that an unexpected error occurred.
In addition, the system's CO2 data compare well with the room data, but a response delay was noticed (Fig. 9). This delays the system's reaction and could provide unnecessary air flow when the room is unoccupied. A faster-response system CO2 sensor, or one positioned well within the occupied space (if possible) or in the exhaust air, might improve performance.
Regarding the open office, Fig. 10 shows that the limit of 900 ppm of CO 2 (maximum set-point) measured by the system data was not exceeded during the coldest months of the year (January and February) when ventilation rate is kept to a minimum. The ventilation flow rate is controlled by the system based on CO 2 monitoring within the space to ensure that the set-point is not exceeded.
LTES system energy consumption
The system includes a variable speed fan and motors to control the dampers, resulting in low overall energy consumption. Electrical energy consumption in the seminar room was 91.7 kWh (0.78 kWh/m²/annum) in 2014 and 78 kWh (0.67 kWh/m²/annum) in 2015. In monetary terms, this costs less than £10 per year (based on the 2015 average cost of £0.104 per kWh for a medium size building [16]). Simulations with IESVE (discussed in more detail in Section 4) show an energy demand of 8.83 MWh to maintain the same internal conditions. Therefore, the energy used by the system is a small fraction of the energy required by an AC system (the exact saving is dependent on the AC system and its COP). Annual electricity energy use intensity for secondary schools in the UK has a median of 51 kWh/m² [17], including electricity used for lighting and office equipment. This increases by 5 kWh/m² when moving from 'heating and natural ventilation' to 'heating and mechanical ventilation' buildings, indicating the typical magnitude of energy use by mechanical ventilation. CIBSE TM57 [17] presents good case-studies with cooling energy intensities of 12.5 and 3.5 kWh/m².
In the open plan office, fan energy during the summer period (May to September) was 121 kWh (0.21 kWh/m 2 ). The good practice energy benchmark in the UK [18] for the cooling, fans, pumps and controls of a 'standard air conditioned office space' is 44 kWh/m 2 annually. Therefore in both cases, energy needed to operate the system is small compared to available benchmarks.
Thermal and air flow modelling
The thermal simulation program IESVE [19] and ANSYS FLUENT [20] were used to carry out energy, environmental conditions and air flow simulations in the seminar room to examine the performance of the system in more detail and propose improvements. IESVE was chosen because it has a plug-in that enables the user to design the LTES system by changing parameters such as the system type, size and number of units required according to the heat gains.
Thermal modelling
The computer room model was run for one year, and its accuracy was checked by comparing simulated with measured air temperature data on an hourly and monthly basis for one year, calculating the Mean
Bias Error (MBE) and the Coefficient of Variation of the Root-Mean-Square Error, CV(RMSE) [21]:

MBE (%) = 100 × Σ_i (y_i − ŷ_i) / Σ_i y_i
CV(RMSE) (%) = (100 / Y_s) × sqrt( Σ_i (y_i − ŷ_i)² / N )

where y_i and ŷ_i are measured and simulated data at instant i, respectively, Y_s is the sample mean of the measured data, and N is the sample size (8760 for hourly-based validation analysis or 12 for monthly-based validation analysis). ASHRAE Guideline 14 [21] and recent research studies [22][23][24] recommend achieving a difference of less than 5% in MBE and less than 15% in CV(RMSE) between monthly predictions and measurements. When hourly data are used, a difference of 10% in MBE and 30% in CV(RMSE) is recommended as good agreement between predictions and measurements.
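The two calibration metrics are simple to compute; a self-contained sketch, following the usual ASHRAE Guideline 14 sign convention and not the authors' IESVE workflow, is:

import numpy as np

def mbe_percent(measured, simulated):
    measured, simulated = np.asarray(measured), np.asarray(simulated)
    return 100.0 * np.sum(measured - simulated) / np.sum(measured)

def cv_rmse_percent(measured, simulated):
    measured, simulated = np.asarray(measured), np.asarray(simulated)
    rmse = np.sqrt(np.mean((measured - simulated) ** 2))
    return 100.0 * rmse / np.mean(measured)

measured  = [21.5, 22.0, 23.1, 24.0, 23.4]   # example hourly room temperatures
simulated = [21.0, 22.4, 23.5, 23.6, 23.9]
print(mbe_percent(measured, simulated))      # hourly criterion: within +/-10%
print(cv_rmse_percent(measured, simulated))  # hourly criterion: below 30%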
Weather data for the prediction period were sourced from Weather Underground [25] and introduced in the simulations by updating the EWY weather file with air temperature and relative humidity data for the monitored period (2015). The IESVE plug-in standard control system was improved using CO2 data [26] to calculate the number of students and computers during operation, until the simulation results for each month achieved values of MBE and CV(RMSE) below 10% and 30%, respectively, when compared to measurements (Table 5).
The same calibration procedure was followed for the open plan office but using available data during the summer period in 2016. The MBE achieved was 2.55 and 2.65 for July and August respectively, while the CV(RMSE) was 25.52 for July and 28.42 for August.
Space temperature and air flow modelling
To analyse the system performance and propose further improvements, CFD modelling was carried out to verify that space conditions are within thermal comfort ranges throughout the room. A 3D model of the seminar room was constructed, and the day of 24/09/2015 was chosen for investigation due to the better match between room data, system data and IESVE temperature profiles. A constant air flow (100 l/s) from the system was also important to secure a uniform air distribution and a consistent CFD simulation. 100 l/s is the design air flow for most of the cooling period when occupants are accommodated (in this particular case, when students are seated); it occurred in 26.8% and 44.3% of the cooling time in 2014 and 2015, respectively, while airflows between 100 and 140 l/s occurred in 63% and 78% of the cooling time in 2014 and 2015. Heat gains from computers (11) and students (14) at 15:00 were extracted from IESVE and introduced in ANSYS FLUENT as boundary conditions. Because most of the students are seated during class, each body was represented as a cylinder with a diameter of 0.4 m and a height of 1.36 m. The body surface area of approximately 1.83 m² corresponds, according to DuBois [27], to a man of 1.70 m height and approximately 73 kg. Thermal skin properties were introduced with a skin temperature of 33.7 °C [28]; computers and lights have a temperature of 40 °C [29]; tables are made of wood with zero heat flux. The room walls, ceiling and floor were assumed to have the same temperature as the adjoining rooms, and because of this their heat flux is zero. The fourth wall, facing outside, has convection as a boundary condition, and the heat transfer coefficient was calculated using the McAdams correlation [30] with weather data taken from Weather Underground [25].
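The quoted 1.83 m² can be checked against the DuBois & DuBois formula in its usual form (BSA = 0.007184 · H^0.725 · W^0.425, with H in cm and W in kg); the constants are standard values I assume here, not taken from the paper:

def dubois_bsa(height_cm, weight_kg):
    """Body surface area in m^2 (DuBois & DuBois formula)."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

print(round(dubois_bsa(170.0, 73.0), 2))   # ~1.84 m^2, consistent with the text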
The Boussinesq approximation was used because the density does not vary over a large range. A radiation model was not considered in this analysis, and the Realizable k-ε model with standard wall functions was chosen, with second-order upwind solution methods for momentum, turbulence and energy. Meshes of 580,094, 801,471 and 920,117 nodes were generated, and only a small variation in average temperature between the last two meshes was found on the z plane located 1.2 m above the floor. Therefore, the mesh of 801,471 nodes was used.
Results of this simulation are presented in Fig. 11, where a plane at seated level (0.70 m) shows an average of 24.21 °C, 1.62 °C (or 7.14%) higher than the room data at the same height. This is a reasonable result; the indoor environmental modelling chapter of ASHRAE Fundamentals [31] indicates that a difference of 20% could be considered excellent for complex flow problems. Fig. 11 also shows that at 0.70 m, no thermal discomfort is expected based on the adaptive thermal comfort limits. Air velocity contours show a range of
Parametric analysis for performance improvements
With the calibrated models, parameters in IESVE were adjusted and improvements to system performance could be tested with confidence.
Increased air flow rate
Improvements in the computer room were simulated. The question investigated was whether increases in air flow rate would improve IAQ and reduce CO2 levels, and whether this might negatively affect thermal comfort and energy use. This was investigated by increasing the airflow at each temperature set-point (see Table 2). The results presented in Fig. 12 indicate an increased percentage of air temperatures inside the summer, autumn/spring and winter set-points. For the summer period, 43.6% of hourly temperatures are inside the set-point range (22 °C ± 0.5 °C), an increase of 14.2%. During autumn there was also an increase of 12.3% of hourly temperatures inside the set-point temperature (23 °C ± 0.5 °C), while temperatures above 24.5 °C had a significant reduction of 27.7%. During winter, temperatures were at the set point of 24 °C (± 0.5 °C) for 69.6% of the time, an increase of 3.7%. For the whole year, the base-case model presents temperatures inside the seasonal set-point range for 37.4% of the time, and the model with enhanced air flow rates for 47.4%. This increase in air flow rate of approximately 10% will consume an additional 44 kWh per year (or 51.4%), which is a small penalty despite the high percentage increase.
Timing and set points of night purge and night charge modes
Improvements during the summer period in the open plan office were investigated. Summer is the target period for cooling in the offices, as IAQ assessment based on metabolic CO2 is less important than in the seminar room because of the much lower occupancy density. A parametric analysis was carried out, changing the timing and set points of the operational modes described in Section 2, as follows:
• Night Purge: extended to 2 and 3 h.
• Summer Night Charge: extended, with no system shut-off period and varying set points of internal temperature (18 °C, 19 °C, 20 °C and 21 °C).
The main aim was to reduce peak internal temperatures and the time spent above the upper limit of thermal comfort. Fig. 13 presents the results, showing the reduction in temperature frequencies above the upper thermal comfort limit over a 3-day period, July 20th-22nd. With the exception of the 21 °C target temperature, the extended night charge modes provide the best improvements in comparison to the base model, with the largest reduction of 130 min (34%) achieved by the extended night charge with an 18 °C internal target temperature. However, the night charges to 19 °C and 20 °C, as well as the two optimised simulations, give very similar reductions. As the modified night charge and purge modes were effective in achieving a lower PCM temperature, it was considered that including the increased air flow rates within the combined simulations might mitigate the rate at which the PCM rose in temperature, which led to an increase in peak internal temperatures. This was trialled, but the same impact of the increased air flow rates was observed, due to the small temperature differential.
To increase the cooling capacity any further, other elements of the system (such as the shape of the heat exchange surfaces or the PCM itself) must first be developed to increase the heat transfer rates of the thermal batteries, and thus improve thermal comfort. This is an area of current investigation by the authors. However, as the simulations analysed in the parametric analysis do achieve a positive influence on the time spent below the upper thermal comfort limit, it is predicted that these improvements to performance will be amplified with any increase in system cooling capacity.
Despite the benefits that are recognised throughout the analysis, there are consistent disadvantages in the frequency and magnitude of breaches of the lower thermal comfort limit (Fig. 14). The general trend shows that the results with most notable reductions in time spent above the upper limit, also tend to acquire the most breaches of the lower limit. These are found to occur within the first 2 h of the conditioned period due to the modified night modes cooling the space to a lower temperature. Some of these occurrences are also found to be unavoidable, even by the Base model, when the system does not even activate during the night period and the limit is still breached. As mentioned before this is a point raised in the literature [13] about the lower limit of adaptive thermal comfort calculations which can be overcome with behavioural adjustments.
Increased flow rates in the open plan office
A summary of the performance in relation to cooling set-point and increased flow rate is presented in Fig. 15. The results show a similar trend to those concerning the upper thermal comfort limit (Fig. 13), with the exception that the 3 h night purge and increased air flow rate simulations provide better performance relative to the other strategies than they did for peak temperatures. Increasing the air flow rates means that cool air can be provided at a faster rate and can control the internal temperatures more effectively; however, the downside was that the thermal batteries lost their cooling capacity too early within the occupied period to cope with peak temperatures. In nearly all the analysis conducted, the extended night charge to 21 °C was shown to perform worse than the Base model, as it did not provide sufficient cooling of the space or charging of the thermal batteries to control the internal conditions as effectively.
In terms of fan energy consumption, the optimised strategy to 19 °C with increased air flow rate results in an increase in energy consumption to 168 kWh for the whole summer period (May-September 2016). This equates to 0.31 kWh/m² of fan energy consumption compared to 0.21 kWh/m² for the base case, an increase of approximately 48%.
Conclusions
This paper presented an active ventilation cooling system that can be suitable for newly built and retrofit applications. The system is a mechanical ventilation system which uses PCM thermal storage in the ventilation path and utilises cool night air to solidify the PCM, which in turn cools recirculated or external air during its melting phase. The two case-studies presented are both retrofit applications: a seminar computer room (with internal heat gains of 60 W/m²) and an open plan office (with internal heat gains of 45 W/m²). The system was installed in the existing plenum of the space with access to outside. Detailed monitoring of the spaces and thermal/CFD analysis indicated that the system can provide acceptable thermal comfort at seated occupant level (0.7 m from the floor) in the moderate summer weather conditions of south and west England, using adaptive thermal comfort limits.
A parametric analysis was carried out to investigate possible performance improvements through an optimised control strategy. It was found through simulations that increasing the air flow will keep internal temperatures within the set-point range more frequently, without compromising thermal comfort and indoor air quality, at a small electricity penalty. Considering the increase of night purge duration and charging in relation to room air temperatures, the maximum improvement to thermal comfort was obtained by increasing air flow rates with an extended night purge of 3 h and an extended night charge mode with a night-time internal space target temperature of 20 °C that runs through until the occupied period. This strategy achieved an improvement which is 3% less than the biggest reduction in the time spent over the upper thermal comfort limit, achieved by the extended night charge with a target temperature of 18 °C. The proposed control also achieves the smallest time spent below the lower thermal comfort limit, with 21% less time spent below the limit compared to the strategy of night charge to 18 °C.
The analysis indicated that changes in the control strategy improve the performance of the system. However, this is limited by the cooling capacity of the PCM. One solution is to increase the amount of PCM material, but this will result in larger space requirements. Another solution is to increase the heat transfer rate of the existing configuration; further research will investigate this numerically and experimentally by examining the geometry of encapsulation and stacking density in relation to pressure drop and cost of production.
|
2019-04-16T13:27:09.259Z
|
2018-01-15T00:00:00.000
|
{
"year": 2018,
"sha1": "72ffd5227b775030831ca6f40ce5de70d20538bb",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.enbuild.2017.11.067",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "7a5342fc8a7bdd61f232c8e8abcff4f2cfc0b69e",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
}
|
53980338
|
pes2o/s2orc
|
v3-fos-license
|
Archimedean local height differences on elliptic curves
To compute generators for the Mordell-Weil group of an elliptic curve over a number field, one needs to bound the difference between the naive and the canonical height from above. We give an elementary and fast method to compute an upper bound for the local contribution to this difference at an archimedean place, which sometimes gives better results than previous algorithms.
Introduction
Let E be an elliptic curve defined over a number field K. By the Mordell-Weil theorem the K-rational points on E form a finitely generated group E(K) ≅ Z^r × Tor(E(K)); here r ≥ 0 is the rank of E/K and Tor(E(K)) is the (finite) torsion subgroup of E(K). One of the fundamental computational problems in the study of the arithmetic of elliptic curves is to compute generators for E(K). Applications of this include, for instance, the numerical verification of the full conjecture of Birch and Swinnerton-Dyer in examples, as well as the computation of S-integral points on E, e.g. using the recent approach of von Känel and Matschke [vKM16].
Generators of Tor(E(K)) are typically easy to find. No effective method for the computation of r is known, but there are still several methods which often succeed in practice. Suppose that we know r and Q_1, ..., Q_r ∈ E(K) whose classes generate a finite index subgroup of E(K)/Tor(E(K)). The final step is then to deduce generators of E(K) from this. This is done by saturating the lattice generated by Q_1, ..., Q_r inside the Euclidean vector space (E(K) ⊗ R, ĥ), where ĥ is the canonical height. The most widely used saturation algorithm is due to Siksek [Sik95] and requires, in particular, an algorithm to enumerate points on E(K) of canonical height bounded by a fixed real number B (this set is finite by the Northcott property).
In practice, this is done by first computing an upper bound β for the difference h − ĥ, where h : E(K) → R is the naive height; the points with canonical height bounded by B are then contained in {P ∈ E(K) : h(P) ≤ B + β}, which can be enumerated for reasonably small B + β. Note that the heights we consider are logarithmic, so that β shows up exponentially in the size of the search space. It is therefore of great practical importance to make β as small as possible. At the same time, it is desirable to keep the computation of β reasonably fast.
The standard approach for bounding the difference h −ĥ is to write it as a sum of local terms, one for each place of K, and to bound the local contributions individually, see [CPS06] or Section 3 below. For non-archimedean places, optimal bounds are given in [CPS06]. Our main contribution is Theorem 4.2, which provides an elementary method for bounding the local contribution at an archimedean place. This method is extremely fast in practice, and yields better results than other existing approaches in many examples. The approach is analogous to an algorithm due to Stoll [Sto99], with modifications by Stoll and the first-named author [MS16b] for Jacobians of genus 2 curves and to Stoll [Sto17] for Jacobians of hyperelliptic genus 3 curves. In the case of elliptic curves, the validity of our formulas can be established using essentially only linear algebra.
This article is partially based on the second-named author's Master thesis [Stu18]. We thank Michael Stoll for suggesting this project and Peter Bruin for answering several questions about his paper [Bru13] and the corresponding code.
Action of the two-torsion subgroup
In this section, we let K be an algebraically closed field of characteristic zero and let E/K be an elliptic curve, given by a Weierstrass equation

(2.1) y^2 + a_1 xy + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6

with point at infinity O. We denote by b_2, ..., b_8 the usual b-invariants of E. Let κ : E → P^1 be the x-coordinate map with respect to the given equation (2.1), extended to all of E by setting κ(O) = (1 : 0). Given a representative (x_1, x_2) for κ(P), we have κ(2P) = δ(x_1, x_2), where δ = (δ_1, δ_2), and

δ_1(x_1, x_2) = x_1^4 − b_4 x_1^2 x_2^2 − 2 b_6 x_1 x_2^3 − b_8 x_2^4,
δ_2(x_1, x_2) = 4 x_1^3 x_2 + b_2 x_1^2 x_2^2 + 2 b_4 x_1 x_2^3 + b_6 x_2^4.

The purpose of the present section is to prove an explicit version of the following result.
Proposition 2.1. There are quadratic forms y 1 , y 2 , y 3 ∈ K[x 1 , x 2 ] and constants a ij , b jk ∈ K, depending only on E, such that for i = 1, 2 and j = 1, 2, 3 we have The constants a ij and b jk are given in (2.2) and (2.3), respectively.
For T ∈ E[2] let + T : E → E be translation by T. Since −(T + P) = T − P, the map + T descends to a map on P^1. In fact there is a linear transformation m_T on P^1 such that κ ∘ + T = m_T ∘ κ. A simple calculation shows that m_T is represented by any non-trivial scalar multiple of the matrix . For the proof of Proposition 2.1, we analyze the action of E[2] on the space of homogeneous polynomials in two variables of degree 2 and 4, respectively. We first lift the transformation matrices M_T to a subgroup G_E of SL_2(K) such that Proof. The assertion is trivial when It is easy to compute .
In particular, which proves the result.
Lemma 2.2 shows that the classes of the matrices M T form a subgroup of PSL 2 (K). We now lift this to a subgroup of SL 2 (K).
is a subgroup of SL 2 (K). Moreover, G E is isomorphic to the quaternion group Q 8 , and The remaining statements are clear.
Proof of Proposition 2.1. Let ρ denote the standard representation of G E on the vector space V of K-linear forms in x 1 , x 2 . Then the symmetric square ρ 2 factors through E[2]. Hence we can view ρ 2 as a representation of E[2] on Sym 2 (V ), and we have It is easy to check that for each nontrivial 2-torsion point T the polynomial is an eigenform of ρ 2 . Fix any ordering of the non-trivial 2-torsion points and call them T 1 , T 2 , T 3 ; let y j := Y T j . Since Y := (y 1 , y 2 , y 3 ) is linearly independent, Y forms a basis for Sym 2 (V ). We find that the coefficients of x 2 1 and x 2 2 with respect to Y are given by .
Global height differences
Let K be a number field. We define M K to be the set of places of K, where we normalize the absolute value |·| v associated to v ∈ M K by requiring that it extends the usual absolute value on Q when v is an infinite place and by setting |p| v = p −1 when v is a finite place above a prime number p.
For v ∈ M_K, set n_v = [K_v : Q_w], where w is the place of Q below v. Then the product formula ∏_{v ∈ M_K} |x|_v^{n_v} = 1 holds for all x ∈ K^×.
Consider an elliptic curve E/K, given by an integral Weierstrass equation (2.1). We define the naive height h(P) of P ∈ E(K) \ {O} as a weighted sum of the local terms log max{|x_1|_v, |x_2|_v} over the places v ∈ M_K, where (x_1, x_2) ∈ K^2 represents κ(P). The canonical height of P is defined as the limit ĥ(P) = lim_{n→∞} 4^{−n} h(2^n P).
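For intuition, the two definitions can be made concrete over Q, where the naive height of a point with x-coordinate p/q in lowest terms is log max(|p|, |q|) and the canonical height can be approximated by iterating the x-coordinate duplication map. The sketch below does this for the curve y^2 + y = x^3 − x and the point with x = 0; it is an illustration only and is not the algorithm of this paper:

from fractions import Fraction
from math import log

def naive_height(x):
    x = Fraction(x)
    return log(max(abs(x.numerator), abs(x.denominator)))

def double_x(x, b2, b4, b6, b8):
    """x-coordinate of 2P from x = x(P), via the duplication polynomials."""
    num = x**4 - b4 * x**2 - 2 * b6 * x - b8
    den = 4 * x**3 + b2 * x**2 + 2 * b4 * x + b6
    return num / den

def canonical_height_approx(x, b_invs, n=6):
    x = Fraction(x)
    for _ in range(n):
        x = double_x(x, *b_invs)
    return naive_height(x) / 4**n

# y^2 + y = x^3 - x: a1 = a2 = a6 = 0, a3 = 1, a4 = -1.
a1, a2, a3, a4, a6 = 0, 0, 1, -1, 0
b2 = a1**2 + 4*a2
b4 = 2*a4 + a1*a3
b6 = a3**2 + 4*a6
b8 = a1**2*a6 + 4*a2*a6 - a1*a3*a4 + a2*a3**2 - a4**2
print(canonical_height_approx(0, (b2, b4, b6, b8)))   # ~0.0511 for this point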
In this work, we are not really interested in the canonical height itself, but rather in upper bounds on the difference h −ĥ. As in [CPS06] and [MS16], we decompose the difference into a finite sum of local terms where the functions Ψ v : E(K v ) → R are continuous and bounded. It is then clear that it suffices to compute upper bounds on all Ψ v to deduce an upper bound on the difference h −ĥ. Recall that we may write and (x 1 , x 2 ) ∈ K 2 v represents κ(P ). See [CPS06] and [MS16] for details.
Archimedean local height differences
In this section we show how to bound the local contribution Ψ v to the height difference, where v is an archimedean place of a number field. We will drop v from the notation for simplicity and assume that K v = C, unless stated otherwise. So consider an elliptic curve E/C, given by a Weierstrass equation (2.1). Note that Proposition 2.1 lets us bound |x i | 4 (i = 1, 2) in terms of |δ j (x 1 , x 2 )|, j = 1, 2. From this we easily get an upper bound for Φ using the triangle inequality. Via the geometric series we deduce: where the constants a ij , b jk ∈ C are as in Proposition 2.1.
This idea was first used by Stoll [Sto99,Sto17] to bound the height difference for Jacobians of genus 2 curves and hyperelliptic genus 3 curves, respectively. We will follow his approach closely; in fact, the elliptic case is much simpler. Furthermore, we will iterate the bound for Φ to get a better bound for Ψ than the one obtained from the geometric series; this was used by Stoll and the first-named author for genus 2 [MS16b], and by Stoll [Sto17] for genus 3.
For the iteration we define the function and we set for N ≥ 1, where · denotes the supremum norm. Hence c 1 is precisely the upper bound from Corollary 4.1. Our algorithm for bounding Ψ is based on the following result, whose statement and proof follow [MS16b, Lemma 16.1].
Theorem 4.2. The sequence (c N ) N ≥1 is monotonically decreasing and we have Proof. To verify that the upper bound holds, let α ∈ C 2 . A simple induction shows that for N ≥ 1 we have Shifting N by 1 and using that ϕ is homogeneous of degree 1/4, it follows that We now apply (4.1) to α = δ •N n (x 1 , x 2 ), where n ≥ 1 and x ∈ C 2 represents κ(P ) for P ∈ E(C), to obtain N (1, 1) .
Upon noting that
To show that c N is monotonically decreasing, consider the function Note that the Jacobi matrix of ψ has positive entries and that its rows sum to 1/4, because ϕ 1 and ϕ 2 are homogeneous of degree 1/4. It follows that For N ≥ 1 we have c N = 4 N 4 N −1 ψ •N (0, 0) . In particular, (4.2) implies whence c 2 ≤ c 1 . We now proceed by induction on N ; so let N ≥ 2 such that c N ≤ c N −1 .
and hence c N +1 < c N . The case b N < b N −1 is similar.
In particular, (c N ) N and (b N ) N both converge to the same limit, and this limit is an upper bound for Ψ. In practice, the sequence converges quickly, and a few iterations suffice. This gives us a very simple method to bound Ψ from above.
Remark 4.3. Suppose that v is a real place and that E(R) has only one component. Then b 22 and b 32 are non-real, but all P ∈ E(K v ) have real coordinates, so we have for j ∈ {2, 3} and x ∈ R 2 representing κ(P ). Modifying the definition of the function ϕ accordingly, we often get a better bound in practice.
Alternative algorithms
In this section we briefly discuss other approaches to bounding Ψ v from above for an archimedean place v. The approach of Cremona-Prickett-Siksek [CPS06] is to find the largest value γ of Φ v ; then γ/3 is an upper bound for Ψ v . For real places this translates into a simple algorithm which is trivial to implement. For complex places, they give two approaches: one based on Gröbner bases and another one based on refining an initial crude bound via repeated quadrisection. The latter is faster and yields better bounds in practice than the former. The method of [CPS06] is implemented in Magma and as part of Cremona's mwrank (which is also contained in Sage). A variation of this approach was presented by Uchida [Uch08]; he computes the largest value of an analogue of Φ v , but with duplication replaced by multiplication by m for m > 2.
An alternative approach is to use that for K_v = C, which we may assume without loss of generality, Ψ_v can be expressed in terms of the Weierstrass ℘-function and an archimedean canonical local height function, which in turn is closely related to the Weierstrass σ-function. This was used by Silverman [Sil90] to provide an easily computed upper bound for Ψ_v in terms of the values of the j-invariant and the discriminant of E; according to [CPS06], this bound is usually larger than the one due to Cremona-Prickett-Siksek, at least for real embeddings. In a spirit similar to the repeated quadrisection method in [CPS06], Bruin [Bru13] uses a recursive approach (starting from a fundamental domain of the period lattice of E/C) to approximate the maximal value taken by Ψ_v on E(C) to any desired precision. Bruin's algorithm therefore gives nearly optimal bounds for complex embeddings, whereas for real embeddings the bound computed using the algorithm of Cremona-Prickett-Siksek is often smaller. A Pari/GP implementation of Bruin's method can be found at https://www.math.leidenuniv.nl/~pbruin/hdiff.gp (note that this uses a different normalization from ours; the difference is log |∆|_v / 6, where ∆ is the discriminant of the given Weierstrass model). While this method is reasonably fast for curves with small coefficients, it can be slow even for medium-sized coefficients. For instance, it took about 18 minutes to compute an upper bound for the curve with Cremona label 11a2, which has minimal Weierstrass equation y^2 + y = x^3 − x^2 − 7820x − 263580. So while this approach leads to superior bounds, it is somewhat less useful in practice, because the need for computing a very sharp upper bound mostly arises for curves whose coefficients are relatively large.
Experiments and comparison
We implemented an algorithm based on Theorem 4.2 and Remark 4.3 to compute an upper bound for Ψ_v for an archimedean place v in Magma [BCP97]. The code is available at https://github.com/steffenmueller/arch-ht-diff. We experimentally compared our code with the Magma implementation of the algorithm of [CPS06], using a single core on an Intel Xeon(R) CPU E3-1275 V2 3.50GHz processor. Note that the latter sometimes shows that the upper bound is exactly 0 (which is attained by P = O), whereas our code always returns a positive real number. We compared all curves of conductor at most 35,000 in Cremona's database of elliptic curves over the rationals. Here and in the following, β is the upper bound returned by our code and β_CPS is the upper bound returned by the Magma implementation of the algorithm from [CPS06]. We also list the average value of β and β_CPS (including the cases where the latter is 0). So it seems that for large coefficients, our algorithm yields better results most of the time, unless β_CPS = 0. We also see that, on average, our bound is much smaller. Here is a particularly striking example.
Example 6.1. Let E/Q be given by y^2 + xy + y = x^3 − x^2 + 31368015812338065133318565292206590792820353345x + 302038802698566087335643188429543498624522041683874493555186062568159847. This example was found by Elkies in 2009 and currently holds the record for the elliptic curve of largest known rank (r = 19) which is provably correct, independently of any conjectures. In this case β_CPS = 18.018, whereas β = 0.147.
We also compared the two implementations for a few thousand curves with coefficient sizes as above, but over some imaginary quadratic fields. Here we found that our bound was better in all examples. Moreover, it also took less time to compute in all cases (on average 0.003 seconds compared to 1.2 seconds).
In practice, the methods of this paper, of Cremona-Prickett-Siksek and of Bruin should be combined. For a real embedding, one should first compute an upper bound using Cremona-Prickett-Siksek. If this is non-zero, one should then apply our algorithm, and use whichever bound is smaller. For a complex embedding, our algorithm appears to be a good first choice. If the resulting bound seems too large for saturation, and if the coefficients of the curve are of reasonable size, one can then compute a bound using the algorithm of Bruin. This is basically optimal for complex embeddings, and sometimes beats the other bounds for real embeddings as well, but, as discussed above, it typically takes much longer to compute.
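In pseudocode, the suggested strategy could look as follows; the three bound_* callables are placeholders for the algorithms of [CPS06], this paper and [Bru13] (not real library calls), and the acceptability threshold is arbitrary:

def archimedean_bound(embedding_is_real, coeffs_are_small,
                      bound_cps, bound_ours, bound_bruin, acceptable=5.0):
    """Combine the three upper bounds for Psi_v as suggested above."""
    if embedding_is_real:
        beta = bound_cps()
        if beta == 0.0:
            return beta                      # already optimal
        beta = min(beta, bound_ours())
    else:
        beta = bound_ours()
    if beta > acceptable and coeffs_are_small:
        beta = min(beta, bound_bruin())      # near-optimal but slow
    return beta

# Toy usage with constant stand-ins for the three algorithms:
print(archimedean_bound(True, True, lambda: 18.018, lambda: 0.147, lambda: 0.10))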
Irradiation Induced Dimensional Changes in Bulk Graphite; The theory
Based on experimental data on irradiation-induced deformation of graphite, we introduce the concept of a diffuse domain structure that develops in reactor graphite produced by extrusion. Such domains are understood as random, continuous deviations of the local graphite texture from the global one. We elucidate the origin of the domain structure and estimate the size and the degree of orientational ordering of its domains. Using this concept we explain the well-known radiation-induced size effect observed in reactor graphite. We also propose a method for converting experimental data on the shape change of finite-size samples to bulk graphite. This method gives a more accurate evaluation of the corresponding data used in estimating the lifetime of reactor graphite components under irradiation.
Introduction
The use of graphite as a neutron moderator and as a construction material has a long history, starting with the first reactor built by Enrico Fermi. Artificial reactor-grade graphites were designed for these purposes [1]. The lifetime of the graphite components in the active zone essentially determines the lifetime of a water-cooled graphite reactor as a whole, which motivates the study of the radiation-induced evolution of reactor graphite properties.
With respect to production technology, reactor-grade graphites range from almost isotropic to substantially anisotropic; this applies to their physical and mechanical properties as well as to irradiation effects. Widely used reactor-grade graphites produced by extrusion, such as Pile Grade A, ATR-2E, and GR-280, exhibit transversely isotropic symmetry of their physical and mechanical properties and of radiation-induced effects [2][3][4][5][6][7]. Such symmetry requires five independent mechanical constants; in different representations these are the elastic constants, the compliances, or the set of Young's moduli, rigidity modulus, and Poisson's ratios [8].
A graphite rod subjected to irradiation undergoes transversely isotropic, alternating-sign shape changes reaching several percent. The most important factors leading to non-uniform shape changes, and hence to growing internal stresses, are gradients of temperature and neutron exposure on the scale of a particular monolithic graphite element. Radiation-induced creep of graphite is not sufficient to compensate for the effects caused by shape changes and temperature gradients. Finally, the complicated irradiation-induced evolution of the elastic moduli, together with the shape changes, may under low creep lead to the destruction of monolithic graphite components.
In calculations of component lifetime using, e.g., a crack initiation and growth criterion, the determining factors are the data on the evolution with neutron fluence of the elastic moduli and of the relative shape changes in the direction of the axis of symmetry (∥) and in the plane of isotropy (⊥) [2][3][4][5][6][7][9][10][11][12]. The standard way to obtain such data is to irradiate finite-size graphite samples in materials testing reactors at a neutron flux higher than in power reactors, with periodic measurements of the characteristics. In 5-6 years a neutron fluence is accumulated that corresponds to 30-40 years of graphite unit lifetime under the operating conditions of a power reactor. Such experiments are inevitably conducted under conditions different from the operating ones, which unavoidably influences the data obtained. The necessary adjustments require the development of corresponding models that help to convert the data obtained at higher neutron flux on finite-size samples to the conditions of bulk graphite and power reactors.
Formulation of the problem
In adjusting data on transversely isotropic reactor-grade graphite irradiated in materials testing reactors, one encounters three main problems, each requiring a model and a recalculation of the corresponding data. I. Differences in neutron energy spectra. This problem is solved simply enough by standard methods of recalculating the neutron fluence into solid-state dose values in terms of displacements per atom (dpa) [5,13]. Note that such a conversion is not universal: when heterogeneous nucleation of microstructural elements is important (for example, in PWR pressure vessels), the conversion to dpa results in a partial loss of the needed information.
II. Noticeable differences in the intensity of Frenkel pair generation (dpa/s). Reactor-grade graphite has a complicated multiscale morphology formed by the filler, which comprises objects with different degrees of order based on graphite microcrystallites with an ideal crystal structure and dimensions of ∼10² nm, a binder with an isotropic microcrystalline structure, and an ensemble of microcracks [1,6,[9][10][11][12]14,15]. By objects we mean formations of various scales, from microcrystallite complexes with various packings to larger formations including grains. As the scale increases, the objects become less anisotropic. The ensemble of microcracks results from relaxation processes at the scales with a sufficiently high anisotropy level; at larger scales the anisotropy is too small to initiate microcracks.
The driving force of radiation-induced effects in reactor-grade graphite is the processes taking place in the ideal crystalline structure of the microcrystallites. Frenkel pair generation by the neutron flux in the crystal lattice results in the formation of a supersaturated two-component solid solution of interstitial atoms and vacancies. For a variety of reasons, the decay of this quasi-2D supersaturated solid solution mainly leads to the formation of ensembles of interstitial dislocation loops of basal type, which in turn change the microcrystallite shapes [6,7,12,16]. The current understanding of the microstructural mechanisms of radiation-induced shape changes of microcrystallites is as follows: the formation of interstitial clusters, dislocation loops, and new graphitic planes causes an expansion of the microcrystallite along the c-axis [17], while adjacent lattice vacancies collapse parallel to the layers and form sinks for other vacancies, causing shrinkage parallel to the graphite layers [18]. This change of crystallite shape is the driving force for the development of stresses, the evolution of crack ensembles, and so on. At the bulk graphite scale, these effects are responsible for the shape changes, the evolution of mechanical properties (in part), etc. Thus, comparing radiation-induced effects obtained under different generation rates of Frenkel pairs (dpa/s) requires solving the nontrivial problem of the development of a quasi-2D microstructure in the graphite crystal lattice with self-consistent boundary conditions on the microcrystal [19].
III. Discrepancy between radiation-induced shape changes of bulk graphite and data obtained on finite-size samples. In our previous work [20], based on numerous measurements of shape changes in cylindrical samples, two results were obtained. First, the initially circular cross-section of a sample under irradiation takes an increasingly pronounced elliptical shape, and the orientations of the ellipses vary randomly along the sample on a scale of 0.6 cm. Second, the relative volume changes calculated by the traditionally accepted expression (1), which combines the relative length changes (∆L/L)_∥ and (∆L/L)_⊥ of samples cut from the bulk material parallel and perpendicular to the extrusion direction, increasingly diverge with irradiation dose from the results of direct measurements of the relative volume changes (see Fig. 1). As shown in [20], the reason for this mismatch is that expression (1) relates only the relative elongations of the samples to the relative volume change while neglecting changes in the cross-sectional areas.
Based on the experimental facts presented above, it was assumed that at the macroscale graphite can be considered a transversely isotropic elastic medium whose irradiation-induced shape changes are described by a gradient tensor (Eq. 2), where (∆L/L)_∥,⊥ are the relative shape changes of bulk graphite parallel and perpendicular to the axis of symmetry (which coincides with the extrusion direction). On a scale of ≃ 0.6 cm, however, nonuniformities in the shape changes are observed. Such regions without pronounced boundaries may correspond to local (on a scale l ≃ 0.6 cm) random deviations of the axis of symmetry from the average direction in the bulk graphite. This picture was formalized by introducing the concept of a "domain": a region of size l possessing a gradient tensor F̂ of relative radiation-induced deformation in its local coordinate system, where (∆l/l)_α are the irradiation-induced relative changes in the sizes of a free domain (i.e., one mentally cut out of the elastic medium). The axis of symmetry of a domain is inclined at a random angle θ with respect to that of the bulk graphite (see Fig. 2). The domains transform continuously into one another in space on the scale of the domain size l. It is impossible to uniquely identify the boundaries of such "diffuse" domains, which can be defined only statistically, as regions in space with correlated directions of anisotropy. The developed model consistently explains the entire array of experimental data. Taking into account that the sample cross-sections (d = 0.6 cm and d = 0.8 cm) are of the same scale as the domain size, the irradiation-induced change of domain shape in the samples can be considered almost free dilatation (with parameters (∆l/l)_∥ and (∆l/l)_⊥). At the same time, for radiation-induced effects in bulk graphite, the presence of an ensemble of disoriented domains with changing shapes gives rise to internal stresses, which leads to the conclusion that shape and volume changes in bulk graphite are not equivalent to those in finite-size cylindrical samples. Keeping in mind that we aim to determine the shape changes of bulk graphite while the experimental data refer to samples with cross-sections on the same scale as the domain size, we have to recalculate the experimental data.
This work is devoted to the further development of the model of [20] in order to provide a method for recalculating data obtained on small samples into the radiation-induced shape changes of bulk graphite. In the model, developed for a transversely isotropic medium, we employ a purely elastic approach rather than the elastoplastic one required for a rigorous solution of the problem. This is explained by the insufficient understanding of the mechanisms of graphite irradiation creep related to problem II above. In effect, we obtain a lower bound for the influence of the disoriented domain structure on the shape changes of bulk graphite.
In section 3 we develop a new approach to the description of the elasticity of such a diffuse domain structure. We consider both the case of high anisotropy (see Fig. 3 b) and the case of a uniform distribution of anisotropy directions (see Fig. 3 c). These cases correspond to different grades of reactor graphite [1,[4][5][6].

Figure 3: Domain structures in graphite: the spherical shape of diffuse domains with transversely isotropic symmetry (shown by grid lines) and size l becomes ellipsoidal (with sizes l_⊥ ≠ l_∥) due to the irradiation-induced shape change of free domains. A regular structure a) deforms affinely (L_⊥/L_∥ = l_⊥/l_∥). The domains of the irregular domain structures b) and c) deform randomly due to the interaction with their elastic environment, and the structure as a whole deforms non-affinely (L_⊥/L_∥ ≠ l_⊥/l_∥). In case b) of high anisotropy the macroscopic deformation has the same transversely isotropic symmetry (L_⊥ ≠ L_∥) but a different amplitude compared with the domains. In case c) of an isotropic distribution of domain directions the macroscopic deformation is isotropic (L_⊥ = L_∥) even for anisotropic domain deformation (l_⊥ ≠ l_∥).

The macroscopic description of the diffuse domain structure on scales large compared with the domain size is developed in section 4. In section 5 we describe the formation of the diffuse domain structure in graphite produced by the extrusion process. We also estimate the size and the degree of alignment of the emerging domains.
Elasticity of diffuse domain structure
In this section we introduce main concepts and formulate a physical model of elasticity of diffuse domain structure. We consider diffuse domains as more or less homogeneous regions with anisotropic elasticity.
Strain of domain structure
The deformation of a solid can be described by the strain tensor ε̂, which is related to the gradient tensor as ε̂ = (F̂F̂^T − 1)/2, where T denotes transposition. Since the domain size is much larger than the scale of the microcracks, the strain tensor can be considered a continuous function of the coordinates x. We describe the deformation of graphite by taking the state before irradiation as the reference state. The strain tensor thus defined can be presented as the sum ε̂_tot = ε̂_M + ε̂ of two contributions: the macroscopic deformation of the sample ε̂_M(x) and the local elastic deformation ε̂(x) with components ε_αβ = (∂u_α/∂x_β + ∂u_β/∂x_α)/2, where u_α are the components of the elastic deformation vector. In equilibrium, this vector describes local deviations on the scale of the domain size l, so that its average over the macroscale vanishes.
In the case of bulk graphite with free boundaries, the macroscopic strain tensor ε̂_M is diagonal in the global coordinate system related to the sample (see Fig. 2). In the linear approximation in the relative changes of sample sizes (∆L/L)_α in the principal directions α = 1, 2, 3 (see Fig. 3 b), it can be written as ε^M_αβ = (∆L/L)_α δ_αβ (Eq. 5).
Here and below, where possible, we shall drop the index M characterizing the bulk graphite. The gradient tensor F̂_M is defined in Eq. (2) and δ is the unit Kronecker tensor. The graphite domains are randomly oriented in space. The local coordinate system related to the domains (see Fig. 2 b) can be obtained from the global coordinate system related to the sample by rotations through three Euler angles Ω: two polar angles ϕ and ψ and the azimuthal angle θ, characterizing the orientation of the local frame with respect to the graphite orthotropy axis. All three angles Ω(x) = (ϕ(x), θ(x), ψ(x)) vary randomly in space with the coordinates x on the scale of the domain size l. Rotation of a domain by given angles Ω = (ϕ, θ, ψ) is described by a rotation matrix; this matrix relates the strain tensors in the global, ε̂_tot = ε̂_M + ε̂, and in the local, ε̂_loc, coordinate systems (Eq. 7), which can be rewritten in matrix notation as Eq. (8). Here and below we use Greek indices (αβ) to distinguish components of the symmetric tensor ε_αβ = ε_βα and Latin indices (i) when presenting this tensor in vector notation. The first three components of such a vector give the normal strains ε_i = ε_ii (i = 1, 2, 3) and the other three components give the shear strains, ε_4 = 2ε_23, ε_5 = 2ε_13 and ε_6 = 2ε_12. The corresponding expressions for the stress tensor are σ_i = σ_ii (i = 1, 2, 3) and σ_4 = σ_23, σ_5 = σ_13, σ_6 = σ_12.
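As a concrete illustration of these conventions, the following short NumPy sketch (an illustration added here, not part of the original derivation) maps a symmetric 3×3 tensor to the 6-vector used in the text and applies a frame rotation to the full tensor before the mapping; the particular Euler-angle parametrization of the rotation matrix R̂(Ω) is not reproduced.

```python
import numpy as np

def strain_to_vector(eps):
    """(eps_11, eps_22, eps_33, 2*eps_23, 2*eps_13, 2*eps_12) -- the strain
    ordering used in the text (engineering shear strains carry a factor 2)."""
    return np.array([eps[0, 0], eps[1, 1], eps[2, 2],
                     2 * eps[1, 2], 2 * eps[0, 2], 2 * eps[0, 1]])

def stress_to_vector(sig):
    """Same ordering for stress, but without the factor of 2."""
    return np.array([sig[0, 0], sig[1, 1], sig[2, 2],
                     sig[1, 2], sig[0, 2], sig[0, 1]])

def to_rotated_frame(tensor, R):
    """Transform a rank-2 tensor into a rotated frame: R t R^T."""
    return R @ tensor @ R.T

# Example: a diagonal free-domain dilatation rotated by 30 degrees about x1
# (illustrative numbers only).
theta = np.radians(30.0)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(theta), np.sin(theta)],
              [0.0, -np.sin(theta), np.cos(theta)]])
eps_F = np.diag([-0.01, -0.01, 0.02])
print(strain_to_vector(to_rotated_frame(eps_F, R)))
```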
Elastic energy of domain structure
The energy of the diffuse domain structure takes its simplest form in the local coordinate system, which rotates continuously in space (see Fig. 2), and is given by Eq. (9). The irradiation-induced dilatation of freely dilating domains, ε̂_F, describes the irreversible deformation due to irradiation of a domain with free boundaries (i.e., one cut out of the sample). This dilatation is diagonal in the local coordinate system related to the domain (Eq. 10). Here F̂_0 is defined in Eq. (3) and (∆l/l)_α are the corresponding relative changes of the free domain sizes in the principal directions α = 1, 2, 3, see Fig. 3 a. The strain in the local coordinate system, ε̂_loc(x), is related to the strain ε̂ in the global coordinate system via Eqs. (7) and (8). Notice that the strain tensors defined above have different origins: ε̂_F (Eq. 10) describes the irreversible irradiation-induced domain dilatation and creep, ε̂ is the truly elastic deformation tensor, and the macroscopic deformation ε̂_M (Eq. 5) has both inelastic and elastic contributions. The symmetric stiffness matrix ĉ in Eq. (9) is nearly the same for all domains, but the axis of its symmetry rotates randomly with respect to the direction of the bulk graphite orthotropy axis. The components c_ij of the local stiffness matrix ĉ are the elastic moduli of the domain. They change under irradiation and depend on neutron fluence and temperature (see section 6 below). Here we consider the case of transversely isotropic symmetry, in which the matrix ĉ in the local coordinate system has the form of Eq. (11). Substituting the local strain ε̂_loc from Eq. (8) into the elastic energy (9), we finally obtain the elastic energy of the diffuse domain structure, Eq. (12). Using this elastic energy we describe below, in section 4.4, the three most important cases of domain structures. Regular: a regular structure (see Fig. 3 a) deforms affinely with the domain deformation. In this case the equilibrium macroscopic deformation is the same as for free domains, Eq. (10): ε̂_M = ε̂_loc = ε̂_F.
Free: in the case of "unconnected" freely dilated domains, ε̂_loc = ε̂_F, and the macroscopic deformation ε̂_M = R̂ε̂_F differs from the deformation of the individual domains ε̂_F only due to the differences in domain orientations; here the bar means averaging over domain orientation.
Irregular: the domains of an irregular structure (see Fig. 3 b) are in more cramped conditions than in the case of a regular domain structure. The elastic environment tends to compress a domain relative to the freely dilated state, so that ε̂_loc ≠ ε̂_F. The equilibrium deformation ε̂_M is determined by the balance between the elastic deformation of the domain and the elastic response of its environment to this deformation.
Uniform: in the case of a uniform distribution of domain directions (see Fig. 3 c) the macroscopic deformation ε̂_M is isotropic even for a strongly anisotropic deformation of the individual domains, ε̂_loc ≠ ε̂_F.
Macroscopic description of diffuse domain structure
In this section we develop a macroscopic theory of reactor graphite elasticity. Although the diffuse domains have different orientations, on spatial scales large compared with the domain size the ensemble of such domains forms an orthotropic and homogeneous material from the macromechanical standpoint. The main feature of the diffuse domain structure is the absence of sharp inter-domain boundaries, near which jumps of stress or strain could occur as in ordinary polycrystalline solids. In a diffuse domain structure both stress and strain change continuously, and the domains directly feel the stress σ̂(x) of the elastic environment. Under the influence of this stress the randomly rotated domains deform non-affinely with the macroscopic deformation of the bulk graphite (see Fig. 3 b and c), and the strain ε̂ of such an irregular domain structure acquires large short-wavelength components (on the order of the domain size l).
Local stress
Elastic energy (12) can be expanded in powers of the strainε. We leave linear term in this expansion unchanged, and transform only its quadratic term using the equality 1 HereŜ is local compliance matrix in global coordinate system andŝ =ĉ −1 is local compliance matrix inverse to the local stiffness matrixĉ, Eq. (11). Let us show that if the stressσ is found from extremum of the right hand side of Eq. (13), the latter will be equal to the left hand side of this equation. Substituting the solution of extremum equation for the local stress back to the right hand side of Eq. (13) we really reproduce the expression for the left hand side of this equation. Using Eq. (13) we can transform quadratic inε term in Eq. (12) into corresponding linear term and rewrite elastic energy (12) in terms of the local stressσ (x): whereσ F is stress because of irradiation-induced deformation of the domain. In matrix notations Stressσ M describes elastic response of medium on this deformation: Minimum of elastic energy (16) gives standard equation of equilibrium in the presence of external stress σ i (x): To derive this equation we substituted elastic strain (4) in Eq. (16). Variation of elastic energy for given stress tensorσ takes the form Integrating this expression by parts and equating volume term to zero, we find Eq. (19). The solution of this linear equation can be written as a superposition of contributions from all domains. Each of these contributions slowly decays with the distance from the domain (as a power law at large distances). The domains have random orientations, and their total contribution to the stress is effectively self-averaged because of discussed above long-range character of elasticity. Therefore, in deriving equations of macroscopic elasticity we may consider the stressσ (x) as weakly varying on the scale of domain size function. Since angles Ω randomly alter in space, the constant stressσ describes strainε (x) strongly varying in space on the scale of domain size, see Eq. (15).
Note that Eq. (19) corresponds to the minimum of the energy (16). Because of this minimum condition, substituting the approximate solution σ̂(x) ≃ const of the stochastic equation (19) into the energy (16) gives only minor corrections, of second order in the small-scale spatial variations of the function σ̂(x).
Derivation of macroscopic elasticity equations
where bar means averaging over space. Equation of equilibrium (19) can also be transferred to its pre-averaged form with the aid of equality which is valid for any function F (x) and bell-like weight function w (x ′ − x). The weight function is centered at point x and has characteristic size larger than l. As the result of such averaging Eq. (19) for slowly varying stresŝ σ (x) takes the form In equilibrium neighbouring domains can self-consistently adapt their deformations since rigid directions of these domains are deployed by random angles to each other. Eq. 21 should be solved with the boundary conditions on surface of bulk sample: where n β are components of normal to the surface, and P α are components of external force applied to the surface. This force can be considered as a substitute at the surface of corresponding volume source of deformationσ F αβ in Eq. 21. Elastic energy (20) can be rewritten in standard form of macroscopic theory of elasticity by introducing "smoothed" on the domain scale l strain tensorε with componentsε which is obtained by averaging the microscopic strainε with the above weight function w:ε Substituting Eq. (23) into elastic energy (20) we can rewrite it in standard form for macroscopic elasticity: whereC =S −1 is macroscopic stiffness matrix,σ F i (x) andσ M i (x) are averages of the corresponding values (17) and (18). Minimum of this energy reproduces equation of equilibrium of effective elastic medium (21), where the stressσ is related to the strainε via Eq. (23). Since we defined ε M as equilibrium macroscopic strain (see Eq. 5) in equilibrium we have σ αβ (x) =ε αβ (x) = 0. Eq. (24) withε αβ (x) = 0 describing elastic response of graphite to external perturbation (elastic moduli, see next section).
Elastic moduli of bulk sample
The average compliance matrix S̄ in Eq. (20) is found by space averaging the local compliance matrix Ŝ(Ω), Eq. (14); this gives Eq. (25). Averaging the inverse stiffness matrix, rather than the stiffness matrix itself, reflects the fact that the domains deform non-affinely with the macroscopic strain, see Fig. 3 b [25]; instead, they directly feel the applied macroscopic stress σ̂(x) [26]. The space averaging in Eqs. (20) and (25) can be rewritten as averaging over all domain orientations Ω = (ϕ, θ, ψ). The rotation angles ϕ and ψ are randomly distributed in the interval (0, 2π), while the probability distribution of the azimuthal angle θ can be characterized by its second and fourth moments of sin θ, m_2 and m_4. In the case of a uniform distribution of domain directions, Ψ(θ) = 1, substituting the moments m_2 = 2/3 and m_4 = 8/15 (Eq. 28) into the above equations, we obtain the moduli of a macroscopically isotropic material (see Fig. 3 c):

S̄_11 = S̄_33 = (1/15)(8 s_11 + 4 s_13 + 3 s_33 + 2 s_44),
S̄_12 = S̄_13 = (1/15)(s_11 + 5 s_12 + 8 s_13 + s_33 − s_44),
S̄_44 = 2(S̄_11 − S̄_12) = (2/15)(7 s_11 − 5 s_12 − 4 s_13 + 2 s_33 + 5 s_44).

The anisotropy and the degree of disorientation increase with decreasing spatial scale and become large on the scale of a stack of microcrystallites. Under the effect of the thermal expansion of the microcrystallites, the stress amplitude on this scale can reach the level of crack initiation and growth. The characteristic scale of the emerging microcracks is small compared with the domain size. Therefore, relation (27) between the macroscopic and domain moduli does not depend on the evolution of the microcrack subsystem and is determined only by the distribution of domain orientations.
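For a quick numerical check, the uniform-distribution relations above can be evaluated directly from the domain compliances. The sketch below (added here for illustration; the s_ij values are placeholders, not data from Table 1) transcribes the first two relations and obtains S̄_44 from the isotropy condition S̄_44 = 2(S̄_11 − S̄_12).

```python
def isotropic_compliances(s11, s12, s13, s33, s44):
    """Orientation-averaged compliances of a macroscopically isotropic
    aggregate of transversely isotropic domains (uniform distribution),
    following the relations quoted in the text."""
    S11 = (8 * s11 + 4 * s13 + 3 * s33 + 2 * s44) / 15.0
    S12 = (s11 + 5 * s12 + 8 * s13 + s33 - s44) / 15.0
    S44 = 2.0 * (S11 - S12)   # isotropy condition
    return S11, S12, S44

# Placeholder domain compliances (arbitrary units), for illustration only.
print(isotropic_compliances(s11=1.0, s12=-0.2, s13=-0.1, s33=2.5, s44=3.0))
```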
Free boundary conditions
The solution of Eq. (21) for samples subjected to an external pressure P is σ̂_M − σ̂_F = σ̂_ext with the diagonal tensor σ^ext_αβ = P δ_αβ. In the case of free boundary conditions there is no stress at the boundary of the sample, σ̂_ext = 0, and the equilibrium macroscopic strain ε̂_M is determined by the condition of zero average stress. This condition corresponds to equilibrium between the average stress due to the irradiation-induced deformation of the domains, Eq. (17), and the average stress due to the response of the elastic medium to this deformation, Eq. (18). Averaging the local stresses, Eqs. (17) and (18), over the Euler angles Ω = (ψ, φ, θ), we find a linear matrix equation for the macroscopic strain (5). The solution of this equation is (∆L/L)_∥ = (∆l/l)_∥ + k_∥ [(∆l/l)_⊥ − (∆l/l)_∥], with an analogous expression for (∆L/L)_⊥ in terms of a factor k_⊥. The factors k in these expressions can be written in explicit form. The amplitude of the volume change is described by the factor k_V. In the case k_V = 0 (c_⊥ = c_∥) the relative volume change is the same as for a medium with freely dilating domains, (∆V/V)_F = (∆l/l)_∥ + 2(∆l/l)_⊥, although such a sample deforms strongly anisotropically (see Fig. 3 b). In the case k_V ≠ 0 the difference between these volume changes is related to the deformation of the domain shape by the elastic response of the environment. The case k_V > 0 corresponds to compression of the domain by the medium, and k_V < 0 describes its dilation.
Consider the most important cases in more detail. Irregular: at high anisotropy of the domain directions (see Fig. 3 b) the factors k are proportional to the second moment m_2 ≪ 1. The non-affine deformation of a bulk sample is related both to the random rotation of the domains and to the elastic response of the medium to their shape change. To separate the two contributions, compare Eqs. (31) and (38) with Eqs. (36). The factors k_∥ and k_⊥ are both proportional to m_2, and similarly to Eq. (36) they take into account the effect of domain disorientation on the sample size. The difference between k_∥, k_⊥ and m_2 is due to the compression of the domains by their elastic environment.
Uniform: in the case of a uniform distribution of domain directions (see Fig. 3 c) with the moments (28), we find from Eqs. (32) and (33) that the sample as a whole deforms isotropically under irradiation.
Formation of diffuse domain structure
In this section we propose a simple model of diffuse domain formation in graphite prepared by the direct melt extrusion process. We also estimate the size of such domains and describe their orientational alignment in the extrusion flow.
Typical domain size l
We treat the mixture of pitch and coke particles under fabrication conditions as a very viscous incompressible fluid. The velocity field v(x) of such a fluid with density ρ can be described by the Navier-Stokes equation (39) with the boundary condition v = 0 at the channel walls. Here ∇ is the gradient, η is the dynamic viscosity of the fluid, and the force f initiates the rotation of randomly oriented graphite crystallites. Before extrusion the crystallites have random directions of anisotropy (with the uniform distribution function Ψ_0(θ) = 1), and in the flow they are rotated under the influence of the gradient of the force f. The mechanism of this rotation is different on spatial scales small and large compared with the characteristic correlation length l defined by Eq. (40), where Re is the effective Reynolds number. At small scales x < l the main contribution to the left-hand side of Eq. (39) comes from the viscous term, −η∇²v ≃ f. This term stabilizes the flow inside the correlation volume l³ and describes regular laminar motion of the fluid on the scale l. Therefore, the rotation of the anisotropy axes due to the force f is strongly correlated at small spatial scales x < l.
At large scales x > l the main contribution to the balance with the turning force f comes from the term ρ(v·∇)v ≃ f, which describes the convective acceleration (instability) of the flow and leads to randomization of the motion at large distances x > l due to amplification of the effect of the random initial orientation of the crystallites. This amplification also leads to a decrease in the Reynolds number Re with respect to the case of an ordinary structureless fluid.
Using the characteristic Reynolds number Re ≃ 10, we find from Eq. (40) a simple estimate of the domain size for a viscous fluid with a typical kinematic viscosity ν = η/ρ ≃ 100 cSt (mm²/s) and an average flow velocity v ≃ 10 cm/s. This estimate is in good agreement with the domain size l ≃ 0.6 cm observed in GR-280 graphite [20]. Although our estimate of the Reynolds number Re needs additional experimental justification, Eq. (41) can be used to predict the dependence of the domain size l in reactor graphites on the flow temperature and velocity gradient.
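A back-of-the-envelope version of this estimate can be reproduced numerically; the sketch below assumes that Eq. (40) reduces to l ≈ Re·ν/v, which is an assumption made here only for illustration (the exact form is given by Eq. (40) itself).

```python
def domain_size_cm(Re=10.0, nu_cSt=100.0, v_cm_per_s=10.0):
    """Order-of-magnitude domain size, assuming l ~ Re * nu / v.
    Unit conversion: 1 cSt = 1 mm^2/s = 0.01 cm^2/s."""
    nu_cm2_per_s = nu_cSt * 0.01
    return Re * nu_cm2_per_s / v_cm_per_s

# With Re ~ 10, nu ~ 100 cSt and v ~ 10 cm/s this gives l ~ 1 cm,
# the same order as the l ~ 0.6 cm observed in GR-280 graphite.
print(domain_size_cm())
```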
Distribution of domain orientations
Due to the random shapes and orientations of the crystallites, the turning force f in Eq. (39) is also random, and domains coarse-grained at the scale l experience rotations in the flow by random angles δθ during a small time interval δt, see Fig. 4. Such diffusion-like rotations can be described by an orientational Smoluchowski equation for the distribution function Ψ(θ) of the domain angles θ, with the uniform initial distribution Ψ_0(θ) = 1 before extrusion. In general, the angular diffusion coefficient D(θ) = δθ²/(2δt) can depend on the angle θ.
The drift term in this equation reflects the fact that a domain experiences an average torque τ(θ) due to the flow, which restricts its free rotation. Both opposite directions of the domain anisotropy are equivalent, τ(π − θ) = τ(θ), and therefore the Fourier expansion of this function contains only the corresponding harmonics. Higher-order corrections τ_n with n > 1 are not important for our consideration and will be dropped below. The amplitude τ = τ_1 of the dimensionless torque is proportional to the average strain rate of the flow.
For graphite obtained by extrusion at large velocity gradients the torque τ is large, τ ≫ 1, and from Eq. (44) we get m_2 ≃ 1/τ ≪ 1 and m_4 ≃ 2m_2². For graphite formed under hydrostatic pressure the average torque is small, τ ≪ 1, and we recover the moments of the uniform distribution, Eq. (28). A typical value of the second moment for GR-280 graphite, m_2 ≃ 0.3, corresponds to the intermediate case τ ≃ 4 between these two limits.
In the Ref. [20] we show that the second moment m 2 can be directly obtained from analysis of experiments measuring irradiation-induced deformation of small size graphite samples. The fourth moment m 4 can be expressed in terms of the second moment m 2 excluding parameter τ from Eqs. (44). Implicit dependence (44) of the fourth moment on the second moment can be interpolated for the whole range of torque τ > 0 by the following simple expression: homogeneous domain structure similarly to GR-280 graphite, since both of them are fabricated by the same extrusion method (see section 5 above). Values of moments m 2 and m 4 and domain size l for these graphites may differ. Effect of irradiation on elasticity of Pile Grade A graphite has been studied in details only for two Young's moduli E , E ⊥ and Poisson's ratios ν and ν ⊥ which are related to the macroscopic compliance moduli as [22,23]: Dose dependence of entire set of compliance moduli has been established only for monocrystalline and pyrolytic graphites over a wide temperature range [24,29,30]. It is noticed that only "small" elastic constants c 33 and c 44 experience significant changes (increase in value) with irradiation dose.
Detailed data processing for the elastic moduli E_∥ and E_⊥ of ATR-2E graphite produced by extrusion is given in Ref. [13]. It is shown there that the dose dependence of both Young's moduli E_∥ and E_⊥ is the same; the only difference is in the normalization factors. The functional dependence of the structure factor f(Φ) on the neutron fluence Φ is shown in Fig. 5; the curves at different irradiation temperatures demonstrate a similar shape of the dose dependence of the elastic moduli: a rather sharp increase, a maximum, and a subsequent decrease. Below we propose a simple empirical expression for the dose dependence of the structure factor f(Φ).
We assume that there are two main contributions to the structure factor: due to evolution of subsystem of microcracks and due to irradiation hardening of crystallites. It is shown in Ref. [14] (Eq. 8) that the first of these contributions to elastic moduli is proportional to relative volume change ∆V /V . This contribution is responsible for reversal of graphite shrinkage at high irradiation doses Φ [14,20]. Second contribution linearly grows with the irradiation dose Φ (different powers of Φ give worse agreement with experiments). As one can see from Fig. 5 this simple dependence (47) describes adequately well all main features of dose dependence of the elastic moduli at different irradiation temperatures. Note also that the two fitting constants ϑ ≃ 17 and Φ * ≃ 17 almost do not depend (at noise level) on irradiation temperature. The universality of these constants is a further argument in support of equation (47). Large value of dimensionless constant ϑ is related to high contrast between elastic contributions of crystallites and microcracks. Similar correlations between dimensional change and Young's modulus structure factor f (Φ) were also established for Gilsocarbon [27].
In order to determine the dose dependence of all other elastic moduli, we first note that, because of the disordered orientation of the crystallites, the main contribution to all macroscopic moduli S̄_ij comes from the "large" monocrystalline moduli s^m_33 and s^m_44 (see section 4.3 for a description of the orientational disorder effect). The macroscopic compliance moduli S̄_ij are only weakly affected by the "small" moduli s^m_11, s^m_12 and s^m_13 (see Table 1 for the relative values of all moduli), since the probability of finding basal planes along the direction of extrusion is small enough. In addition, the anisotropy of the elastic properties of reactor graphite is significantly smoothed out by the presence of the microcrack subsystem and the binder [15]. The effects described above are responsible for the observation of only a moderate level of macroscopic orthotropy of graphite elasticity, together with a sharply pronounced anisotropy on the crystallite scale.
We conclude that dose dependence of the shear moduli of diffuse domain structure (see Fig. 2) can be approximated by the same structure factor f (Φ) as for Young's modules (46): An important consequence of Eqs. (46) and (48) for the Poisson's ratios is confirmed experimentally [28]. Using Eqs. (46) and (48) we find dose dependence of all macroscopic compliance moduli: whereS ij are corresponding values of those moduli in absence of irradiation, see Table 1.
Knowing experimental values of the moduliS ij Eqs. (27) can be solved for the domain compliance moduli s ij Calculated values s ij in the absence of irradiation for Pile Grade A graphite are also presented in Table 1 Here m 2 ≃ 0.3 and m ⊥ 2 ≃ 0.7 are second moments of corresponding samples irradiated at temperature 550 • C.
The experimentally observed dose dependence of the relative length changes is plotted in Fig. 6. Using these data and substituting the solution (∆l/l)_∥,⊥ of Eqs. (51) into Eq. (31), we also plot the predicted dose dependence of the relative elongations of macroscopic samples. Note that the relative elongation of bulk samples is much higher than that of small samples. In bulk samples the domains are in more crowded conditions in the directions perpendicular to the extrusion direction (in these directions the deformation of the domains due to orientational disorder is maximal). Therefore, at low doses the relative strain in these directions is weaker, |∆L/L|^⊥_M < |∆L/L|^⊥_S. The increase in the longitudinal strain, |∆L/L|^∥_M > |∆L/L|^∥_S, is related to the interaction between the longitudinal and transverse elastic modes. These conditions can also be rewritten in the form (∆L/L)^⊥_M > (∆L/L)^⊥_S and (∆L/L)^∥_M < (∆L/L)^∥_S, which is also valid for large doses. Since the obtained results are sensitive to the value of the moment m_2, it is desirable to carry out further experiments to better characterize the texture of reactor-grade graphites.
Summary
The main results of this paper are as follows. 1. Based on experimental data on radiation-induced deformation of graphite, we introduced the concept of a diffuse domain structure, which is the reason for the size effects in reactor graphite. In such a structure there are no sharp boundaries between domains, and the direction of the domain anisotropy undergoes random deviations from the global orientation of the sample. We also derived equations of macroscopic elasticity describing the deformation of solids due to the irradiation-induced shape change of their transversely isotropic domains. This theory can also be used to describe the deformation of stressed samples. A relation is obtained between the elastic moduli of the domains and of the macroscopic solid as a whole. We have proposed a scheme for converting experimental data obtained on finite-size samples into a description of the shape change of bulk graphite. Such a scheme is required for engineering calculations of the integrity of graphite blocks.
2. We have presented a simple model explaining the origin of diffuse domains in graphite produced by the extrusion process. Our estimate of the domain size is in good agreement with the known experimental data [20]. The model can also be used to predict the dependence of the domain size on the extrusion velocity and on the viscosity of the visco-plastic medium.
3. We also elaborated a simple model describing the orientational ordering of the domains during extrusion. This model enables us to calculate the moments of the domain orientation distribution, which are needed for the description of the elastic properties of reactor-grade graphite.
Feeding citrus flavonoid extracts decreases bacterial endotoxin and systemic inflammation and improves immunometabolic status by modulating hindgut microbiome and metabolome in lactating dairy cows
The objectives of this study were to determine the effects of dietary supplementation with citrus flavonoid extracts (CFE) on milk performance, serum biochemistry parameters, fecal volatile fatty acids, fecal microbial community, and fecal metabolites in dairy cows. Eight multiparous lactating Holstein cows were used in a replicated 4 × 4 Latin square design (21-day period). Cows were fed a basal diet without addition (CON) or basal diet with added CFE at 50 (CFE50), 100 (CFE10), and 150 g/d (CFE150). Feeding CFE up to 150 g/d increased milk yield and milk lactose percentage. Supplementary CFE linearly decreased milk somatic cell count. Serum cytokines interleukin-1β (IL-1β), IL-2, IL-6, and tumor necrosis factor-α (TNF-α) concentrations decreased linearly as the levels of CFE increased. Cows in CFE150 had lower serum lipopolysaccharide and lipopolysaccharide binding protein compared with CON. These results indicate feeding CFE decreased systemic inflammation and endotoxin levels in dairy cows. Furthermore, feeding CFE linearly increased the concentrations of total volatile fatty acids, acetate, and butyrate in feces. The relative abundances of beneficial bacteria Bifidobacterium spp., Clostridium coccoides-Eubacterium rectale group, and Faecalibacterium prausnitzii in feces increased linearly with increasing CFE supplementation. The diversity and community structure of fecal microbiota were unaffected by CFE supplementation. However, supplementing CFE reduced the relative abundances of genera Ruminococcus_torques_group, Roseburia, and Lachnospira, but increased genera Bacteroides and Phascolarctobacterium. Metabolomics analysis showed that supplementary CFE resulted in a significant modification in the fecal metabolites profile. Compared with CON, fecal naringenin, hesperetin, hippuric acid, and sphingosine concentrations were greater in CFE150 cows, while fecal GlcCer(d18:1/20:0), Cer(d18:0/24:0), Cer(d18:0/22:0), sphinganine, and deoxycholic acid concentrations were less in CFE150 cows. Predicted pathway analysis suggested that "sphingolipid metabolism" was significantly enriched. Overall, these results indicate that citrus flavonoids could exert health-promoting effects by modulating hindgut microbiome and metabolism in lactating cows.
Introduction
Substantial genetic and nutritional advancements have contributed to the increased milk production in the dairy industry. A greater milk production is associated with a greater metabolic load, which increases the risk of and susceptibility toward different production diseases (McArt and Neves, 2020). Previous studies have reported that the inflammatory responses resulting from bacterial lipopolysaccharide (LPS) induced by high metabolic load markedly disrupted the metabolic homeostasis in high-yield dairy cows (Zebeli and Ametaj, 2009;Zhao et al., 2018). The application of phytochemicals (e.g., flavonoids), as a potential approach for improving animal health and maintaining metabolic homeostasis in livestock farms has become more widespread due to the growing public concern about the side effects of antibiotics (Kuralkar and Kuralkar, 2021). Flavanones (e.g., hesperidin and naringin) and Opolymethoxylated flavones (e.g., tangeretin and nobiletin) are the main components in citrus-derived flavonoids . These citrus-derived flavonoids, as promising phytochemicals, have various biological properties, such as antimicrobial, antioxidant, anti-stress, and anti-inflammatory in ruminants (Alhidary and Abdelrahman, 2016;Lenehan et al., 2017;Sharif et al., 2018;Simitzis et al., 2019). Ying et al. (2017) reported that adding citrus extracts at 4.5 g/ d reduced plasma non-esterified fatty acids and increased plasma insulin in early-lactation cows, indicating that citrus flavonoids have the potential effect of regulating lipid homeostasis in dairy cows. In another study by Santos et al. (2014), the inclusion of 9% to 19% DM of citrus pulp increased the concentrations of total polyphenols, flavonoids, and ferric reducing antioxidant power in the milk of dairy cows. Additionally, the intramammary administration of hesperidin and naringenin was shown to reduce milk somatic cell count (SCC) in dairy cows with mastitis. Collectively, evidence from these studies suggests that citrus flavonoids have potential health benefits for lactating cows.
Citrus flavonoids could interact with gastrointestinal microbiota to influence host metabolic health (Stevens et al., 2019). Analysis of our own unpublished data showed feeding citrus flavonoid extracts (CFE) did not significantly affect rumen fermentation parameters and ruminal microbial structure in lactating dairy cows. In ruminants, it is unknown if citrus flavonoids are absorbed across the rumen epithelium. It was reported that rumen bacteria are known to have the ability of partially deglycosylating naringin and hesperidin (Gladine et al., 2007). We speculated that the hindgut microorganisms may further enhance the bioavailability of citrus flavonoids and promote flavonoid metabolite production. Thus, determining the effects of citrus flavonoids on hindgut microbiota and metabolism is essential for characterizing the role of these flavonoids and their influence on dairy cows' immune status.
We hypothesized that supplementing CFE in the diets of lactating dairy cows might enhance carbohydrate fermentation in the hindgut and improve immune status by altering hindgut microbiota and metabolites. These modifications would have a beneficial influence on the productivity of dairy cows. Accordingly, the objectives were to evaluate the effects of dietary supplementation with CFE on milk production, serum immune indices, fecal volatile fatty acids (VFA), fecal microbiota, and fecal metabolomic profile.
Animal ethics statement
Procedures for care and handling of animals required for this experiment were approved by the Ethics Committee on Animal Use at Beijing University of Agriculture. This trial was performed according to the Chinese Guidelines for Experimental Protocols of Animal Care and Animal Welfare.
Citrus flavonoid extract preparation, animals, and treatments
The product CFE was purchased from Shaanxi Xiazhou Biotechnology Co., Ltd. (Xi'an, China). Citrus flavonoids were extracted from the peel powder of Citrus reticulata Blanco. In brief, 1 kg of citrus peel powder was extracted twice with 15 L of calcium carbonate solution (0.1%) at 100 C for 1.5 h. The extraction was then evaporated to dryness using rotary evaporation at 37 C under reduced pressure. The total flavonoids were enriched by AB-8 macroporous absorption resin columns and eluted with 80% ethanol (2-fold column volume). The eluents were collected and concentrated to dryness for use. The total flavonoid content (56.83%) of CFE was determined using aluminum nitrate spectrophotometry (510 nm) with rutin equivalents (Pitz et al., 2016). The concentrations of major flavonoids, including naringin, hesperidin, neohesperidin, nobiletin, and tangeretin in CFE were analyzed through a HPLC system (1290 Infinity; Agilent Technologies, Inc.) according to the method of Jiang et al. (2019). The chemical composition of CFE is presented in Table S1.
Eight multiparous lactating Chinese Holstein cows (36.1 ± 3.79 kg/d of milk yield [MY], 662 ± 57.1 kg of BW, 160 ± 22.4 days in milk at the start of the experiment) were blocked by parity and MY and randomly allocated to a replicated 4 × 4 Latin square design. Each experimental period lasted 21 d, with 14 d for dietary adaptation (d 1–14) and 7 d for data acquisition and sample collection (d 15–21). Dairy cows in each group were randomly assigned to 4 treatments: the basal diet (CON) and CFE supplementation at 50 g/d (CFE50), 100 g/d (CFE100), and 150 g/d (CFE150).
All cows were kept in a tie stall barn with straw and sawdust bedding equipped with individual feed bins during the entire period and free access to water. Cows were milked 3 times daily at 05:30, 14:00 and 22:00. Milk production was recorded at each milking throughout the trial. Diets were formulated according to NRC (2001). The ingredient and nutritional composition of the basal diet are depicted in Table S2. The fresh TMR was fed twice daily at 07:00 and 15:30. Amounts of feed were adjusted to allow for a minimum of 5% refusals (as-fed basis). During the experiment, the supplied CFE were top-dressed on the first daily meal immediately after total mixed ration (TMR) delivery and mixed with a small amount of the feed. We monitored the animals after feeding to ensure that extracts were completely consumed.
Feed intake, milk production and composition
Daily feed provision and orts were weighed to calculate dry matter intake (DMI). Weekly samples from all forages and concentrate grains included in the TMR were dried and ground, and a single composite of each ingredient representing the entire study was kept at À20 C until chemical determination. Samples of TMR and orts were dried in a forced-air oven at 55 C for 48 h then ground through a 1-mm screen (Wiley mill, Arthur H. Thomas) before analysis. Samples were analyzed for DM (method 935.29;AOAC, 2006), ash (method 942.05;AOAC, 2006), ether extract (method 920.39;AOAC, 2006), crude protein (method 990.03;AOAC, 2006), and starch (method 996.11;AOAC, 2006). Concentrations of neutral detergent fiber and acid detergent fiber were analyzed according to Van Soest et al. (1991), with the use of heat stable a-amylase and sodium sulfite for neutral detergent fiber determination. The NE L was calculated from tabulated feed values based on NRC (2001).
Milk samples were collected during the last 3 consecutive days (d 19–21) from all 3 milkings and were preserved with 2-bromo-2-nitropropane-1,3-diol. Daily milk samples of each cow were composited in proportion to MY and transported to the Beijing Dairy Cattle Center (Beijing, China). Milk fat, protein, lactose, SCC, and urea nitrogen were analyzed using infrared spectroscopy (MilkoScan 4000, Foss Electric, Hillerød, Denmark). The 3.5% fat-corrected milk (FCM) was calculated according to Sklan et al. (1992). Energy-corrected milk (ECM) was calculated based on the equation of Sjaunja et al. (1990).
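For reference, the two corrections can be computed as in the sketch below. The coefficients shown are the forms of the Sklan et al. (1992) and Sjaunja et al. (1990) equations commonly quoted in the dairy literature; they are given here as assumptions and should be checked against the original references.

```python
def fcm_3_5(milk_kg, fat_kg):
    """3.5% fat-corrected milk, commonly quoted form of Sklan et al. (1992)."""
    return 0.432 * milk_kg + 16.23 * fat_kg

def ecm(milk_kg, fat_pct, protein_pct, lactose_pct):
    """Energy-corrected milk, commonly quoted form of Sjaunja et al. (1990)."""
    return milk_kg * (383 * fat_pct + 242 * protein_pct
                      + 165.4 * lactose_pct + 20.7) / 3140

# Example: 36 kg/d of milk with 3.8% fat, 3.3% protein and 5.0% lactose.
print(fcm_3_5(36.0, 36.0 * 0.038), ecm(36.0, 3.8, 3.3, 5.0))
```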
Serum immune parameters
Blood was taken from the tail vessels of cows into vacuum tubes without clot activator (Vacutainer, Becton Dickinson, Franklin Lakes, NJ, USA) after the morning milking and before feeding on d 16 of each period. Serum was separated by centrifugation at 2,000 × g for 15 min at room temperature, and then stored at −20 °C until analysis. Concentrations of immunoglobulin A (IgA), IgG, IgM, interleukin-1β (IL-1β), IL-2, IL-4, IL-6, tumor necrosis factor-α (TNF-α), interferon-γ (IFN-γ), LPS, LBP, soluble LPS receptor CD14 (sCD14), fibroblast growth factor 21 (FGF21), and adiponectin (ADPN) were analyzed using commercial ELISA kits according to the supplier's instructions (Beijing Solarbio Science & Technology Co., Ltd., Beijing, China). All ELISA data were recorded by a microplate reader (Multiskan FC; Thermo Fisher Scientific, Waltham, MA, USA).
Feces sampling and fecal volatile fatty acids analysis
Fresh fecal samples were taken manually, using new gloves for every collection, into sterile tubes after the morning milking and before feeding on d 16 of each period. Fecal tubes were immediately flash frozen in liquid nitrogen and stored at −80 °C.
The VFA concentrations in fecal samples were analyzed using the method of Petri et al. (2019) with minor modification. Briefly, 2 g of thawed feces from each sample was mixed with 2 mL of distilled water. Then, 600 μL of the internal standard 4-methylvaleric acid (Sigma–Aldrich, Saint Louis, MO, USA) and 0.4 mL of 25% phosphoric acid were added to the suspension. The mixtures were centrifuged at 15,000 × g at 4 °C for 20 min. Subsequently, the supernatant was analyzed for VFA composition by a gas chromatograph (GC-7890B; Agilent Technologies, Wilmington, DE, USA) equipped with a flame ionization detector and a DB-FFAP capillary column (30 m × 0.25 mm × 0.25 μm; Agilent Technologies, Wilmington, DE, USA). The temperatures of the injector and detector were 170 and 190 °C, respectively. The oven temperature was increased from 100 to 250 °C.
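Quantification against the 4-methylvaleric acid internal standard amounts to a simple ratio calculation; the sketch below is a generic single-point internal-standard scheme given for illustration only (the response factor and the dilution correction are placeholders, not values from this study).

```python
def vfa_mmol_per_l(peak_area, istd_area, istd_conc_mmol_l,
                   response_factor=1.0, dilution_factor=1.0):
    """Generic internal-standard quantification of one VFA:
    concentration = (A_analyte / A_istd) * C_istd / RF,
    optionally corrected for dilution of the fecal slurry."""
    return (peak_area / istd_area) * istd_conc_mmol_l / response_factor * dilution_factor

# Made-up peak areas: an acetate peak twice the internal-standard peak with
# a 10 mmol/L internal standard corresponds to ~20 mmol/L before correction.
print(vfa_mmol_per_l(2.0e5, 1.0e5, 10.0))
```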
Quantitative real-time PCR amplification
Total microbial DNA from each fecal sample was extracted using an E.Z.N.A. Stool DNA Kit (D4015, Omega, Inc., USA) following the manufacturer's protocol. Concentration and integrity of DNA were analyzed using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA) and 1.1% agarose gel electrophoresis. The extracted DNA samples were stored at −80 °C until further processing.
The relative abundances of 8 fecal bacteria, including Bacteroides-Prevotella-Porphyromonas group, Bifidobacterium spp., Clostridium coccoides-Eubacterium rectale group, Clostridium clusters I and XIVab, Escherichia coli, Faecalibacterium prausnitzii, and Lactobacillus spp. were quantified using real-time PCR. Primer sequences of these targeted bacteria are listed in Table S3. The extracted DNA of each sample was standardized to 8 ng/μL for qPCR. The qPCR reaction was performed in triplicate with a total volume of 25 μL. Each reaction consisted of 12.5 μL of 2× SYBR Green Master Mix (Life Technologies, Foster City, CA, USA), 1 μL of each primer, 2 μL of extracted DNA sample, and 9.5 μL of DNase/RNase-free water. The qPCR reactions were conducted in a Roche LightCycler 96 (Roche Diagnostics Deutschland GmbH, Mannheim, Germany). The cycling protocol was 3 min at 95 °C, followed by 40 cycles of 15 s at 95 °C and 30 s at the respective annealing temperature, and 15 min at 72 °C for the final extension. The abundance of each target bacterium was expressed as the proportion (%) of the abundance of 16S rRNA genes of each bacterial target against that of the total bacteria using the efficiency-corrected 2^−ΔΔCt method (Ramirez-Farias et al., 2009).
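The relative-abundance calculation reduces to comparing target and total-bacteria Ct values with their amplification efficiencies; the sketch below is a simplified illustration of that idea (efficiencies of 2.0, i.e. 100%, are assumed here, whereas the study used the efficiency-corrected procedure of Ramirez-Farias et al., 2009).

```python
def relative_abundance_pct(ct_target, ct_total, eff_target=2.0, eff_total=2.0):
    """Proportion of a target group relative to total bacteria from qPCR Ct
    values: initial template is proportional to eff**(-Ct), so the ratio is
    eff_total**ct_total / eff_target**ct_target (x100 for percent)."""
    return 100.0 * (eff_total ** ct_total) / (eff_target ** ct_target)

# Example: target Ct = 22, total-bacteria Ct = 16 -> 2**(-6) ~ 1.6% of total.
print(relative_abundance_pct(22.0, 16.0))
```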
Fecal bacterial 16S rRNA sequencing and bioinformatics analysis
Based on the results of the serum immune indices, we selected the samples of CON and CFE150 to determine the effects of CFE on the fecal microbiome and metabolite profile. Amplification of the hypervariable regions V3 and V4 of the bacterial 16S rRNA gene was performed with the 338F (5′-barcode-ACTCCTRCGGGAGGCAGCAG-3′) and 806R (5′-GGACTACCVGGGTATCTAAT-3′) primer set.
The PCR amplification reaction consisted of 1 μL of forward index primer (10 μM), 1 μL of reverse index primer (10 μM), 1 μL of DNA template (10 ng/μL), and 17 μL of Pfx AccuPrime master mix. The amplification started with a denaturation at 95 °C for 5 min; 30 cycles of 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 1 min; and a final elongation at 72 °C for 5 min. Amplification was performed in triplicate for each sample, and the PCR amplicons were then purified with the AxyPrep DNA Gel Extraction Kit (Axygen Biosciences, Union City, CA, USA) in accordance with the protocol of the manufacturer. Subsequently, the quality of the PCR product was evaluated using the QuantiFluor-ST (Promega, Durham, NC, USA). Paired-end sequencing (2 × 300 bp) was conducted in an Illumina MiSeq sequencing system (Illumina, San Diego, CA, USA). The 16S rRNA gene amplicon sequencing data were deposited into the National Center for Biotechnology Information database under accession number PRJNA805345.
The Quantitative Insights into Microbial Ecology version 2 (QIIME 2) pipeline was used to filter and identify the operational taxonomic unit (OTU) of sequenced reads (Caporaso et al., 2010). The sequence with a quality score <20 was removed before downstream analysis. Screening for chimeric sequences was conducted via USEARCH and its database (USEARCH version 8.1; Edgar et al., 2011). Sequence alignment and clustering were performed by PyNAST (Caporaso et al., 2010) and the SILVA database (version 123) of 16S rRNA gene database using the Basic Local Alignment Search Tool. The identity of OTU was determined based on 3% dissimilarity of sequences to the database, and OTU which clustered with more 10 reads were retained. Taxonomy was assigned to each OTU using the RDP classifier (version 2.2) and the GreenGenes database. Abundance estimates were calculated by summing read counts of OTU with identical taxonomic assignments from kingdom to genus taxonomic level. Shannon diversity index, Chao richness estimator, and all other OTU-level alpha diversity indices were computed using MOTHUR (version 1.30.1). Analysis of similarity (ANOSIM) in MOTHUR was performed to determine the difference in bacterial community structure. Beta diversity was calculated using the unweighted and weighted UniFrac distances and visualized using the principal coordinate analysis (PCoA) plot. The functional features of each sample were predicted using Phylogenetic Investigation of Communities by Reconstruction of Unobserved States 2 (version 2.3.0-b). The final P-values were corrected for false discovery rate (FDR) by the Bonferroni method with a significance threshold of P < 0.05 adopted.
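The OTU-level alpha-diversity indices reported here can be computed directly from an OTU count vector; the definitions below follow the standard Shannon and bias-corrected Chao1 formulas (the MOTHUR implementations actually used may differ in details such as the logarithm base).

```python
import numpy as np

def shannon(counts):
    """Shannon diversity H = -sum(p_i * ln p_i) over non-zero OTUs."""
    c = np.asarray(counts, dtype=float)
    p = c[c > 0] / c.sum()
    return float(-(p * np.log(p)).sum())

def chao1(counts):
    """Bias-corrected Chao1 richness: S_obs + F1*(F1 - 1) / (2*(F2 + 1)),
    where F1 and F2 are the numbers of singleton and doubleton OTUs."""
    c = np.asarray(counts)
    f1 = (c == 1).sum()
    f2 = (c == 2).sum()
    return float((c > 0).sum() + f1 * (f1 - 1) / (2.0 * (f2 + 1)))

# Example on a toy OTU count vector.
otus = [120, 55, 30, 8, 2, 1, 1, 0]
print(shannon(otus), chao1(otus))
```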
Fecal metabolite extraction, LC-MS/MS analysis, and data processing
Fecal metabolites were extracted according to Yu et al. (2017) with modifications. One hundred milligrams of fecal sample was dissolved in 1 mL of ice-cold water from a Milli-Q water purification system (Millipore Corp, Bedford, MA, USA). The mixture was vortexed for 60 s and centrifuged at 13,000 × g and 4 °C for 15 min. The supernatant was collected and kept on ice, whereas the remaining fecal pellet was further extracted by adding 1 mL of ice-cold LC-MS grade methanol. The mixture was vortexed and centrifuged at 13,000 × g at 4 °C for 15 min. The supernatants were combined and subsequently transferred to a fresh Eppendorf tube with a 0.22-μm filter, and then centrifuged at 15,000 × g at 4 °C for 10 min. Finally, the fecal filtrate was injected into the UHPLC system (1290; Agilent Technologies, USA) coupled to a Triple TOF 5600 (Q-TOF, AB Sciex, USA). The liquid chromatography (LC) separation was performed with a UPLC BEH Amide column (100 mm × 2.1 mm, 1.7 μm) with mobile phase A (0.1% formic acid in water) and mobile phase B (0.1% formic acid in acetonitrile). The flow rate was 0.40 mL/min. The elution gradient was 5% B for 0 to 2 min, 5% to 95% B for 2 to 12 min, 95% B for 12 to 15 min, and 95% to 5% B for 15 to 17 min. The mass spectrometer was operated in either positive or negative ion mode as described by Wang et al. (2019).
Raw data were extracted and preprocessed using Compound Discoverer software (version 2.0; Thermo Fisher Scientific, Waltham, MA, USA). The metabolites in the harvested data were annotated according to their accurate molecular weight (MW) by searching the exact MW against the Kyoto Encyclopedia of Genes and Genomes (KEGG) and the Human Metabolome Database (HMDB). Principal component analysis (PCA) and supervised orthogonal partial least squares-discriminant analysis (OPLS-DA) were conducted in SIMCA-P software (version 11.0; Umetrics AB, Umeå, Sweden). The differentially expressed metabolites (DEM) were identified by combining the variable importance in the projection (VIP > 1.5) with corrected P-values (< 0.05) from Student's t-test. The fold-change (FC) value was obtained by comparing the peak intensity of each metabolite between the two groups. The online platform MetaboAnalyst 5.0 (https://www.metaboanalyst.ca/) was used to conduct metabolic pathway analysis (MetPA) based on the DEM via the Bos taurus (cow) library of KEGG (Pang et al., 2021).
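The fold-change and t-test part of this screening can be sketched as below. The sketch assumes simulated peak-intensity matrices; the VIP criterion is omitted because it requires a fitted OPLS-DA model, so this is only an approximation of the full DEM selection described above.

```python
import numpy as np
from scipy import stats

def screen_metabolites(con, cfe, p_cut=0.05, fc_cut=1.5):
    """Flag metabolites by fold change (CFE/CON mean peak intensity) and Student's t-test.
    con, cfe: arrays of shape (n_samples, n_metabolites)."""
    fc = cfe.mean(axis=0) / con.mean(axis=0)      # fold change per metabolite
    t, p = stats.ttest_ind(cfe, con, axis=0)      # two-sample t-test per metabolite
    up = (p < p_cut) & (fc > fc_cut)
    down = (p < p_cut) & (fc < 1.0 / fc_cut)
    return fc, p, up, down

rng = np.random.default_rng(0)
con = rng.lognormal(mean=1.0, sigma=0.2, size=(8, 100))   # hypothetical CON intensities
cfe = rng.lognormal(mean=1.1, sigma=0.2, size=(8, 100))   # hypothetical CFE150 intensities
fc, p, up, down = screen_metabolites(con, cfe)
print(up.sum(), "up-regulated;", down.sum(), "down-regulated")
```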
Statistical analyses
The analyses of DMI, MY and composition, serum immune indices, fecal VFA, and the 8 fecal bacteria species were conducted with SAS (version 9.4, SAS Institute Inc., USA) using the PROC MIXED statement and the following model: Y_ijkl = μ + S_i + P_j + T_k + C(S)_l(i) + e_ijkl, in which Y_ijkl is the dependent variable; μ is the overall mean; S_i is the fixed effect of square (i = 1 to 2); P_j is the fixed effect of period (j = 1 to 4); T_k is the fixed effect of treatment (k = 1 to 4); C(S)_l(i) is the random effect of cow nested within square (l = 1 to 4); and e_ijkl is the residual error. The interactions between period and square, period and treatment, and square and treatment were initially included in the model and removed when P > 0.20 (de Souza et al., 2020).
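A rough Python analogue of this model is sketched below using statsmodels (the study used SAS PROC MIXED; statsmodels does not reproduce all of its options, e.g., the Kenward-Roger degrees of freedom). The data frame, column names, and treatment assignment are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
treatments = ["CON", "CFE50", "CFE100", "CFE150"]

rows = []
for square in (1, 2):
    for c in range(4):                         # 4 cows nested within each square
        cow = f"sq{square}_cow{c}"
        for period in range(1, 5):
            trt = treatments[(c + period) % 4]  # toy Latin-square-like assignment
            my = 32 + 0.5 * treatments.index(trt) + rng.normal(0, 1)
            rows.append((my, trt, period, square, cow))

df = pd.DataFrame(rows, columns=["my", "treatment", "period", "square", "cow"])

# Fixed effects of square, period and treatment; random intercept for cow (nested in square)
model = smf.mixedlm("my ~ C(square) + C(period) + C(treatment)", df, groups=df["cow"])
print(model.fit().summary())
```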
The Kenward-Roger procedure was used to determine the degrees of freedom for the F tests. The linear and quadratic effects of treatments were assessed by orthogonal polynomial contrasts. The Tukey-Kramer post hoc test adjusting for multiple comparisons was used to identify the differences among the means. All values presented in the manuscript are means ± standard error of the mean. Effects were assumed to be significant at P < 0.05, whereas tendencies toward significance were assumed when 0.05 ≤ P < 0.10. Spearman's rank correlation coefficients between fecal differentially abundant bacteria (DAB) and DEM, DAB and fecal VFA, and DAB and serum immune indices were evaluated in R. These correlations were visualized using the R package 'ggplot'.
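The correlation step can be illustrated as follows; the original analysis was done in R, so this is only an equivalent sketch in Python with invented abundance and serum values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_samples = 16
bacteria = {"Bifidobacterium": rng.random(n_samples),
            "Bacteroides": rng.random(n_samples)}
indices = {"IL-1b": rng.random(n_samples),
           "ADPN": rng.random(n_samples)}

# Pairwise Spearman rank correlations (rho) with P-values
for b_name, b_vals in bacteria.items():
    for i_name, i_vals in indices.items():
        rho, p = stats.spearmanr(b_vals, i_vals)
        print(f"{b_name} vs {i_name}: rho = {rho:.2f}, P = {p:.3f}")
```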
Feed intake, milk yield and composition
Feeding CFE up to 150 g/d did not affect DMI of dairy cows (Table S4). Dietary supplementation with CFE significantly affected MY (P = 0.028), ECM (P = 0.040), lactose percentage (P < 0.001), lactose yield (P < 0.001), and SCC (P = 0.003). Dairy cows with CFE100 or CFE150 had greater MY (P < 0.05) compared with CON. The ECM was significantly higher (P < 0.05) in the cows supplemented with CFE100 in comparison with CON. The milk lactose yield (Lin P = 0.028; Quad P = 0.027) and percentage (Lin P < 0.001; Quad P < 0.001) increased linearly and quadratically with increasing levels of CFE supplementation. The SCC linearly decreased (P = 0.043) with increasing amounts of CFE. The milk efficiency (expressed as ECM/DMI) tended to quadratically increase (P = 0.090) with increasing CFE. The yields or percentages of milk fat and protein, milk efficiency (expressed as MY/DMI, FCM/DMI), and MUN were similar (P > 0.10) among treatments.
Dietary supplementation with CFE quadratically affected (P = 0.024) serum IgG concentration, and the greatest value of IgG was found at CFE50. Adding CFE tended to affect (P = 0.082) IgA concentration, whereas a dosage effect of CFE was not observed. Increasing CFE inclusion linearly reduced the concentrations of IL-1β (P = 0.020), IL-2 (P = 0.018), IL-6 (P = 0.019), and TNF-α (P = 0.047) in serum. Serum LPS was lower (P < 0.05) in the cows with CFE50 or CFE150 compared with CON. Serum LBP displayed a quadratic effect (P = 0.017), with the lowest value at CFE150. Serum ADPN increased linearly (P = 0.043) with increasing CFE levels. Serum concentrations of IgM, IL-4, IFN-γ, sCD14, and FGF21 were unaffected (P > 0.10) by CFE addition.
Fecal volatile fatty acids
Feeding CFE to dairy cows significantly affected fecal butyrate concentration (P = 0.010), and tended to change the concentrations of total volatile fatty acids (TVFA; P = 0.058) and acetate (P = 0.062; Table 2). The production of propionate, isobutyrate, valerate, and isovalerate was not different (P > 0.10) across treatments. The concentrations of TVFA (P = 0.014), acetate (P = 0.014), and butyrate (P = 0.002) in feces increased linearly with increasing CFE addition. Additionally, the proportion of butyrate linearly increased (P = 0.030) with increasing supplementation of CFE.
Diversity and divergence of fecal bacterial community
The amplicon sequencing of 16 fecal samples generated a total of 663,411 high-quality reads, an average of 41,463 sequences per sample, and the average sequencing read length was 412 bp. High-quality reads were clustered into 1,247 microbial OTU, among which 1,116 OTU were found in all groups and accounted for 89.5% of the total OTU, indicating the presence of an extensive common microbiome (Fig. S1A). The rarefaction curves indicated the sequencing depth was sufficient to describe the microbial composition of each treatment (Fig. S1B). No difference was found in the α-diversity indices, including ACE, Chao, Shannon, and Simpson, among treatments (Fig. S2). The PCoA based on unweighted and weighted UniFrac metrics showed that feeding CFE did not modify (P > 0.05) fecal microbial structure (Fig. 1). Phylogenetic analysis of the sequences of fecal bacteria identified 15 phyla. Firmicutes and Bacteroidota were the most abundant phyla, which accounted for an average of 93.4% of the community (Fig. S3A). The relative abundance of the phylum Bacteroidota was greater (P = 0.024) in the feces of cows with CFE150 compared with that of CON (Fig. 2A).
Linear discriminant analysis (LDA) with effect size (LEfSe) was used to identify significantly differential bacterial taxa between CON and CFE150. The LDA histogram (Fig. S4) showed 21 biomarkers with LDA scores > 3: 11 biomarkers in the CON group and 10 in CFE150.
Several correlations between the concentrations of fecal VFA and serum immune indices and the abundances of DAB are shown in Fig. 3. The concentrations of fecal TVFA and acetate were both positively correlated with Bacteroides and Ruminococcus_torques_group. The concentration of fecal butyrate was positively correlated with Bifidobacterium spp., C. coccoides-E. rectale group, and F. prausnitzii, and negatively correlated with Clostridium cluster XIVab. Serum IL-1β was negatively correlated with Bifidobacterium spp. Serum IL-6 was negatively correlated with Bifidobacterium spp. and C. coccoides-E. rectale group, and positively correlated with Clostridium cluster XIVab. Serum TNF-α was positively correlated with Clostridium cluster XIVab. Serum ADPN was positively correlated with Bifidobacterium spp.
Fecal metabolites
A total of 685 valid peaks were detected in the fecal samples, and 548 compounds were annotated against KEGG and HMDB. According to the unsupervised PCA plot, no noticeable separation was found between CON and CFE150 (Fig. 4A and B). The OPLS-DA results in Fig. 4D and E showed distinct separations for fecal metabolites between CON and CFE150. In each plot of the OPLS-DA, all samples were within the 95% Hotelling's T2 ellipse. The parameters of the permutation test showed satisfactory fitness and prediction effectiveness (R2 > 0.80) of the model, which could therefore be used to identify DEM.
The Spearman correlation matrix between DAB and DEM is presented in Fig. 6. Some of the up-regulated DEM were positively correlated with Bifidobacterium spp., C. coccoides-E. rectale group, F. prausnitzii, Bacteroides, and Ruminococcus_torques_group. Some of the down-regulated DEM were positively correlated with Phascolarctobacterium, and negatively correlated with C. coccoides-E. rectale group and Bacteroides.
Discussion
Citrus extract has emerged as a promising phytogenic feed additive for improving performance and health in ruminants (Balcells et al., 2012; Paniagua et al., 2021). Naringin, hesperidin, neohesperidin, and nobiletin are usually the predominant active flavonoids present in citrus fruits or by-products. In the present study, naringin and hesperidin were the most abundant flavonoids in CFE. Generally, naringin and hesperidin require hydrolysis to their active aglycone forms, naringenin and hesperetin, to exert various health-promoting activities (Altunayar-Unsalan et al., 2022).
Based on the serum untargeted metabolomic profiling, compared to CON, dairy cows in CFE150 had greater contents of naringenin, hesperetin, tangeretin-4′-glucuronide, and 3′-demethyl-nobiletin in serum (data not shown). These results suggested that more active compounds derived from citrus flavonoid metabolism were available for dairy cows in CFE150.
Although DMI did not differ among treatments, supplementing CFE up to 150 g/d increased MY and milk lactose of dairy cows in comparison with CON. Contrary to what was observed in our experiment, Ying et al. (2017), in their assessment of feed intake and milk performance with citrus extract supplementation, did not observe any effects on DMI, MY or milk composition. The supplementation level of citrus extract (4 g/d) in Ying et al. (2017) was far lower than that of our study; therefore, differences in supplementary amount might account for this disparity in milk performance. In the present study, the reduced milk SCC accompanied by the increased milk yield may be due to the anti-inflammatory effect of flavonoid compounds. Similarly, several studies indicated reduced milk SCC of dairy cows that received diets supplemented with flavonoid-rich extracts from bamboo leaf (Zhan et al., 2021), baicalin (Burmanczuk et al., 2021), or quercetin (Burmanczuk et al., 2018) compared with a control. Furthermore, a recently published study found that the administration of hesperidin and naringenin via the intramammary route reduced the milk SCC of mastitis-affected cows (Burmanczuk et al., 2022). These results suggest that citrus flavonoids have the potential to improve the mammary health status of dairy cows. Biomarkers of inflammation status can be used to provide direct information about homeostasis in dairy cows (Shangraw and McFadden, 2022). In our study, increased IgG concentration in serum was observed after CFE supplementation at 50 g/d, indicating that the immune response of dairy cows was enhanced by CFE. An endotoxin-associated immune response can be activated by LPS binding to specific cell surface receptors. Lipopolysaccharide interacts with toll-like receptor 4 to stimulate the release of proinflammatory factors such as IL-1β, IL-2, and IL-6 (Sordillo and Mavangira, 2014). In our experiment, dietary supplementation with CFE reduced serum concentrations of LPS, LBP, IL-1β, IL-2, IL-6, and TNF-α, indicating that citrus flavonoids could decrease the levels of endotoxin and systemic inflammation in dairy cows. In the study of Paniagua et al. (2019), CFE supplementation down-regulated the expression of genes related to inflammation, such as IL-25, TLR4, and β-defensin1, in the rumen epithelium of Holstein bulls, supporting the modulatory effects of CFE on immune action in ruminants. Additionally, CFE addition linearly increased the serum concentration of ADPN. Adiponectin, an adipokine secreted by adipocytes, plays an important role in energy homeostasis regulation, lipid and glucose metabolism, and insulin sensitivity (Li et al., 2019). Thus, these results indicated that CFE modulated lipid metabolism in cows, suggesting a potential effect of reducing liver lipid accumulation during periods of negative energy balance; the regulatory mechanisms at the molecular level warrant further research.
We speculated that the gastrointestinal microbiota play crucial roles in the metabolism of citrus flavonoids and in immune regulation in dairy cows. Our results showed that supplementing CFE tended to linearly increase microbial crude protein and propionate concentrations, but TVFA and ruminal microbial structure and composition were not significantly altered by CFE (data not shown). However, few studies have uncovered the effects of citrus flavonoid intake on hindgut fermentation, microbiome and metabolites in dairy cows. Volatile fatty acids are the end products derived from microbial fermentation of dietary carbohydrates in the gastrointestinal tract, and they are associated with conferring a beneficial influence on host metabolic homeostasis (Gibson and Roberfroid, 1995). Butyrate, as one of the primary energy sources for the gastrointestinal tract, is expected to inhibit systemic inflammation in the gastrointestinal tract (Hamer et al., 2008). Thus, the increased TVFA and butyrate production in the current experiment was beneficial for maintaining energy homeostasis and controlling systemic inflammation. It is widely believed that decreasing hindgut pH with greater VFA concentrations could suppress the growth of pathogenic bacteria, consequently promoting the growth of beneficial microbiota (Brownawell et al., 2012). In this study, feeding CFE to dairy cows increased the proportions of Bifidobacterium spp. and Faecalibacterium prausnitzii, which are regarded as beneficial bacteria (Unno et al., 2015). Similar responses of increasing proportions of Bifidobacterium spp. and Faecalibacterium prausnitzii were also observed when CFE was offered to rats (Unno et al., 2015) or humans (Lima et al., 2019). Bifidobacterium spp. are acetate producers (Mandalari et al., 2010); thus, an increase in fecal acetate was expected with the linear increase in the relative abundance of Bifidobacterium spp. induced by CFE supplementation. Bifidobacterium spp. were negatively correlated with serum IL-1β and IL-6, indicating that CFE may serve to modify the growth of beneficial bacteria via increasing VFA production to improve the inflammatory status of dairy cows. In addition, it has been documented that Bifidobacterium spp. could hydrolyze certain rutinose-conjugated flavonoids (e.g., hesperidin), thus releasing their aglycone form (Amaretti et al., 2015). Due to this, the increase in the abundance of fecal Bifidobacterium spp. may have potentially enhanced the bioavailability of citrus flavonoid compounds in the current study.
[Table 4: Identification of significant differentially expressed metabolites in feces of dairy cows by comparison of the control diet (CON) and citrus flavonoid extracts (CFE150), with variable importance in the projection (VIP) > 1.5 and P < 0.05.]
Faecalibacterium prausnitzii and C. coccoides-E. rectale group have been recognized as producers of butyrate, which provides energy for the intestinal epithelium (Barroso et al., 2014; Jaimes et al., 2019). The increase in fecal butyrate may be ascribed to the increased growth of F. prausnitzii and C. coccoides-E. rectale group. This is further supported by the positive correlation between fecal butyrate and F. prausnitzii or C. coccoides-E. rectale group. In the present study, the tendency toward a linear decrease in the relative abundance of the harmful bacterium E. coli was attributed to the inhibitory effect of CFE. The effect on E. coli was consistent with the decrease in the serum content of LPS, as E. coli is a main LPS producer (Somerville et al., 1996).
In the present study, more comprehensive information regarding the fecal bacterial community was obtained by 16S rRNA sequencing. Alpha and beta diversity were not affected by CFE supplementation, indicating that the hindgut microbial system is dominated by a core community whose structure remains stable regardless of CFE dosage. The relative abundances of the phylum Bacteroidota and the genus Bacteroides were greater in CFE150 compared with CON. The phylum Bacteroidota includes genera known to ferment carbohydrates for the production of VFA, mainly acetate and propionate (Boger et al., 2019). Bacteroides was one of the most frequently reported and modified bacterial genera when animals received dietary flavonoid or polyphenol treatment, according to a systematic review (Moorthy et al., 2021). Many Bacteroides strains are regarded as next-generation probiotics (Dahiya et al., 2019), such as Bacteroides uniformis, Bacteroides acidifaciens, and Bacteroides dorei. It can therefore be inferred that the increase in Bacteroides proportion in the present study was related to the health benefits of CFE.
It was reported that bacteria belonging to the genus Roseburia, as butyrate producers, play an important role in controlling intestinal inflammation (Karlsson et al., 2013). However, cows fed CFE150 exhibited a lower abundance of the genus Roseburia in feces compared with CON. This finding is inconsistent with the promoting influence of CFE on butyrate that we noted, and the reasons for this result are unclear. Ruminococcus_torques_group, as a dominant taxon, can degrade intestinal mucin, damaging the mucosal barrier (Yang et al., 2021). The decrease of Ruminococcus_torques_group in CFE150 indicated that such pathogenicity factors were weakened by CFE. Unlike Bacteroides, the bacteria of Phascolarctobacterium barely ferment carbohydrates but use succinate as a substrate to produce propionate (Belda et al., 2021); thus, coexistence with succinic acid-producing bacteria such as Bacteroides is essential for the genus Phascolarctobacterium. Therefore, the increased fecal Phascolarctobacterium abundance of cows fed CFE150 might have resulted from the greater proportion of Bacteroides. The metabolic functions of the fecal microbiota were predicted using the KEGG database. Compared with CON, some key metabolic pathways, such as 'glycan biosynthesis and metabolism', 'energy metabolism', and 'lipid metabolism', were altered in CFE150, which indicated that CFE could improve the metabolic pattern of the hindgut microbiota, thus exerting a health-promoting effect in dairy cows.
Metabolomic data showed that some metabolites belonging to the superclass of benzenoids were increased in CFE150. Naringenin can be catabolized into phenolic products such as 3-(4′-hydroxyphenyl)propionic acid (HPPA), 4′-hydroxybenzoic acid, and hippuric acid by gut bacteria (Mu et al., 2020). Therefore, the increased hippuric acid in CFE150 indicated the interaction between CFE metabolism and gut microbiota may be important to increase citrus flavonoid bioavailability in dairy cows. Meanwhile, we observed that CFE could regulate bile acid metabolism by up-regulating the level of taurocholic acid and down-regulating deoxycholic acid. Cholic acid is converted to deoxycholic acid by gut bacteria, and deoxycholic acid has a high toxicity (Rodríguez-Morató et al., 2018). Therefore, the decreased deoxycholic acid suggested a protective effect of CFE on gastrointestinal health by modulating the levels of secondary bile acids.
In the present study, the pathway of 'sphingolipid metabolism' was significantly enriched based on the DEM between CON and CFE150. Sphingolipids, as structural membrane constituents and essential eukaryotic signaling molecules, play an important role in modulating immunity and inflammation status (Brown et al., 2019). On the one hand, inflammatory modulators (LPS or TNF-α) can stimulate de novo ceramide synthesis and contribute to the conversion of sphingomyelin back into ceramides via the various sphingomyelinases (Chaurasia and Summers, 2021). Therefore, in the present study, the decreased ceramide levels in CFE150 could be attributed to the reduction of serum LPS and TNF-α. On the other hand, the decrease in GlcCer(d18:1/20:0), Cer(d18:0/24:0), and Cer(d18:0/22:0) observed in CFE150 can be related to the increase in serum ADPN. It has been demonstrated that ADPN, via ADPN receptors, exerts or enhances ceramidase activity, converting deacylated ceramide to sphingosine and sphingosine-1-phosphate (Vasiliauskaite-Brooks et al., 2017). In this study, sphingosine was increased by CFE150, suggesting that ceramidase activity may be upregulated to promote ceramide degradation. The suppression of ceramide production can also reduce endoplasmic reticulum stress, consequently resulting in inhibition of inflammation (Chaurasia and Summers, 2021). Recently, Brown et al. (2019) found that the lack of Bacteroides-derived sphingolipids contributed to intestinal inflammation and increased host ceramide levels in mice, and demonstrated that sphingolipids produced by Bacteroides species could maintain symbiosis with the host. Although the Bacteroides-derived sphingolipids were difficult to quantify in the present study, we speculated that the increase in Bacteroides abundance could promote sphingolipid biosynthesis and decrease host ceramides to improve intestinal homeostasis.
Taken together, supplementing CFE in the diet could be a promising approach for decreasing endotoxin and systemic inflammation in lactating dairy cows. We speculated that the significant change in hindgut microbiota and metabolites may be potentially associated with the improvement in immunometabolic status (Fig. 7). Based on the current observations, the improvement in milk performance and mammary gland health may also be explained by the decrease in proinflammatory factors as the amount of CFE supplementation increased. Determining the interaction of citrus flavonoids and hindgut microbiota may need more studies concerning the metabolic fate of citrus flavonoids in ruminants. Also, further studies via targeted metabolomics or lipidomics are required to provide accurate quantification of some of the metabolites (e.g., in sphingolipid metabolism) whose functions are currently not well elucidated in dairy cows.
[Fig. 6: The Spearman correlation matrix between fecal differentially abundant bacteria and fecal differentially expressed metabolites. Asterisks denote significant differences between dairy cows fed the control diet (CON) and citrus flavonoid extracts at 150 g/d (CFE150): *0.01 < P ≤ 0.05; **0.001 < P ≤ 0.01; ***P ≤ 0.001.]
Conclusion
In this study, we revealed that CFE has a clear beneficial effect on the immune status and milk performance of mid-lactation dairy cows. Supplementary CFE improved the immunometabolic status of dairy cows by regulating serum IgG, IL-1β, IL-2, IL-6, TNF-α, and LPS levels. The anti-inflammatory effects of CFE were associated with the promotion of hindgut fermentation, the increase in the probiotic bacteria Bacteroides, Phascolarctobacterium, Bifidobacterium spp., and F. prausnitzii, and the decrease in Clostridium cluster XIVab, E. coli, and Ruminococcus_torques_group. Sphingolipid metabolism and secondary bile acid production in the hindgut were also modulated by CFE supplementation, contributing to the protective effects of CFE on lipid and intestinal homeostasis. Further understanding of the regulatory mechanisms involved in the metabolic health effects of CFE as well as their metabolites in the gastrointestinal tract of dairy cows is crucial for the future application of CFE as a feed additive in the dairy industry.
Declaration of competing interest
We declare that we have no financial and personal relationships with other people or organizations that can inappropriately influence our work, and there is no professional or other personal interest of any nature or kind in any product, service and/or company that could be construed as influencing the content of this paper.
[Fig. 7: A working mechanism to illustrate the hindgut bacteria and metabolites that might be associated with the improvement of immunometabolic status in lactating dairy cows. AdipoR = adiponectin receptor; ER = endoplasmic reticulum; IL = interleukin; LBP = lipopolysaccharide binding protein; LPS = lipopolysaccharide; TLR4 = toll-like receptor 4; TNF-α = tumor necrosis factor α; SM = sphingomyelin; S1P = sphingosine 1-phosphate; VFA = volatile fatty acids.]
AVBH: Asymmetric Learning to Hash with Variable Bit Encoding
Nearest neighbour search (NNS) is the core of large data retrieval. Learning to hash is an effective way to solve the problem by representing high-dimensional data as compact binary codes. However, existing learning to hash methods need long bit encoding to ensure query accuracy, and long bit encoding brings a large storage cost, which severely restricts long bit encoding in big data applications. An asymmetric learning to hash with variable bit encoding algorithm (AVBH) is proposed to solve the problem. The AVBH hash algorithm uses two types of hash mapping functions to encode the dataset and the query set into bits of different lengths. For the dataset, the hash code frequencies after random Fourier feature encoding are statistically analysed. Hash codes with high frequency are compressed into a longer coding representation, and hash codes with low frequency are compressed into a shorter coding representation. The query point is quantized to a long bit hash code and compared with the same-length cascade-concatenated data points. Experiments on public datasets show that the proposed algorithm effectively reduces the cost of storage and improves the accuracy of query.
Introduction
Given a query object/point q and a dataset S, the nearest neighbour search (NNS) [1][2][3] is to return the nearest neighbours in S to q. Nowadays, the NNS is widely used in many applications such as image retrieval, text classification, and recommendation systems. However, with the exponential growth of data scale and the curse of high data dimensionality, the NNS problem is now much more difficult to solve than before. Therefore, new efficient index structures and query algorithms for similarity searches have increasingly become the focus of research for the problem. The hashing-based NNS methods [3][4][5] have attracted much attention. Generally, hashing methods can project the original data, with locality preserved, to a low-dimensional Hamming space, i.e., binary codes [4][5][6]. The complexity of those methods is usually sublinear in time. In addition, the hashing methods only need a simple bit operation to compute the similarity from Hamming encoding, which is very fast. Owing to their high performance in large-scale data retrieval, hashing techniques have gained increasing interest in facilitating cross-view retrieval tasks [7,8], online retrieval tasks [9], and metric learning tasks [10].
For large-scale data retrieval, time and space costs are two important issues. As we know, the accuracy of existing hash methods is limited by the length of the hash encoding, and longer coding is usually required to get better accuracy. However, long coding increases the space cost, network communication overhead, and response time.
In order to solve this problem, a coding quantization mechanism [11] based on an asymmetric hashing algorithm [12] was proposed. Different from direct hash code comparison, by cascade-concatenating the coding of a data point to the same encoding length as the query point, the coding storage cost of the dataset is reduced effectively and the accuracy of the result is ensured. However, this algorithm uses a unified compression method for all data, ignoring the effect of the data distribution. Actually, the distribution of large-scale data is generally uneven; hence, for most hashing algorithms, the frequency of each quantized code is also different. As we know, longer encoding can preserve most of the original information, but it brings higher cost, and vice versa. A careful trade-off among accuracy, computing overhead, and space saving needs to be studied. Intuitively, high-density data require longer encoding to ensure that the original information is preserved as much as possible, while low-density data can use shorter encoding and still preserve most of the original information.
That is the idea behind our algorithm. In this paper, an asymmetric learning to hash with variable bit encoding algorithm (AVBH) is proposed. The AVBH uses two types of hash mapping functions to quantify the dataset and the query set separately, encoding them into hash codes with different bit lengths. In particular, the frequency of each dataset code is calculated after random Fourier encoding; random Fourier codes with high frequency are then compressed into a longer hash code representation, and random Fourier codes with low frequency are compressed into a shorter hash code representation. The main contributions of this paper are as follows: (1) a variable bit encoding mechanism (named AVBH) based on hash code frequency compression is proposed, which makes the encoding space effectively used, and (2) experiments show that the AVBH can effectively reduce the storage cost and improve the query accuracy.
Preliminaries and Description
In this section, we review some basic knowledge of LSH (locality-sensitive hashing) [13][14][15], vector quantization [16], and product quantization [17] that is essential to our proposed technique.
Vector Quantization.
Vector quantization (VQ) is a classical data compression technique, which compresses the original data into discrete vectors. For a vector x of n dimensions, a VQ function f can formally be specified as f(x) ∈ C = {c_i, i = 1, 2, ..., k}, where x (with n dimensions) is an original data vector, C is a pretrained code set, and c_i is a codeword in the codebook C. The objective of a VQ function is to quantify the original real-valued vector to the nearest codeword with the lowest VQ loss. Here, the VQ loss of vector x is given by (1), i.e., the squared Euclidean distance between x and its nearest codeword, loss(x) = min_{c_i ∈ C} ‖x − c_i‖².
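A minimal sketch of this assignment step is shown below; the codebook values are made up and serve only to illustrate the nearest-codeword rule and its loss.

```python
import numpy as np

def vq_encode(x, codebook):
    """Return the index of the nearest codeword and the squared-error VQ loss."""
    d = np.sum((codebook - x) ** 2, axis=1)   # squared Euclidean distance to each codeword
    i = int(np.argmin(d))
    return i, d[i]

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5]])   # k = 3 pretrained codewords
x = np.array([0.9, 1.2])
idx, loss = vq_encode(x, codebook)
print(idx, loss)
```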
Product Quantization.
Product quantization (PQ) is an optimization of vector quantization. Firstly, the feature space is divided into m mutually exclusive subspaces, and each subspace is then quantized separately using VQ. That is, the coding of each subspace forms a small codebook C_1, C_2, ..., C_m, and the small codebooks form a large codebook C by the Cartesian product. In this method, high-dimensional data can be decomposed into m low-dimensional subspaces and processed in parallel. Suppose an object x is represented as a combination of m codewords c_1, c_2, ..., c_m; the loss of the product quantization of vector x is given by (2), i.e., the sum of the subspace quantization errors, loss(x) = Σ_{j=1}^{m} ‖x_j − c_j‖², where x_j is the sub-vector of x in the j-th subspace.
2.3. Random Fourier Feature.
Traditional dimensionality reduction methods, such as PCA, map the data to an independent feature space and compute the main independent features. This approach ignores the nonlinear information of the sample distribution and does not apply well to real data. In the feature mapping method based on random Fourier features (RFF), data are mapped to the feature space of an approximate kernel function, and the inner product of any two points in that feature space approximates their kernel function value. Compared with the PCA method, RFF can better preserve the data distribution information and can obtain the target dimensionality by either reducing or raising the dimension. This kind of feature is suitable for feature compression. SKLSH [18] is a classical hashing algorithm based on RFF, which gives good experimental results under long bit coding. The long-bit hash learning algorithm first maps the sample points from the original n-dimensional real space to the feature space of the approximate kernel function by RFF. Because of the consistent convergence of RFF, the kernel function similarity between any two sample points can be maintained.
Specifically, for two points x and y, the translation-invariant kernel function [12] satisfies K(x, y) = E(Φ_{w,b}(x) · Φ_{w,b}(y)), where b satisfies the uniform distribution on [0, 2π], w obeys the probability distribution P_K induced by the translation-invariant kernel function, and η is a constant parameter.
Thus, the mapping from the n-dimensional space to the d-dimensional feature space of the approximate kernel function can be obtained by stacking d such random features, i.e., Φ(x) = [Φ_{w_1,b_1}(x), Φ_{w_2,b_2}(x), ..., Φ_{w_d,b_d}(x)], where w_1, w_2, ..., w_d are sampled i.i.d. from the probability distribution P_K and b_1, b_2, ..., b_d are sampled i.i.d. from the uniform distribution on [0, 2π]. When the translation-invariant kernel function is a Gaussian kernel function, K(x − y) = e^{−(γ/2)‖x−y‖²}, P_K is a Gaussian distribution, i.e., P_K ∼ Normal(0, γI_{n×n}).
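The sketch below follows the standard random Fourier feature construction for a Gaussian kernel described here (w drawn from a Gaussian, b uniform on [0, 2π]); the √(2/d) scaling is the usual convention and is an assumption, since the paper's exact normalization is not shown.

```python
import numpy as np

def rff_map(X, d, gamma, seed=0):
    """Map rows of X (n-dim) to d-dim random Fourier features approximating a Gaussian kernel."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    W = rng.normal(0.0, np.sqrt(gamma), size=(n, d))   # w_j ~ Normal(0, gamma * I)
    b = rng.uniform(0.0, 2.0 * np.pi, size=d)          # b_j ~ Uniform[0, 2*pi]
    return np.sqrt(2.0 / d) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(5, 16))      # 5 toy points in 16-D
Y = rff_map(X, d=64, gamma=0.5)
# The inner product of mapped points approximates exp(-gamma/2 * ||x - y||^2)
print(Y.shape, float(Y[0] @ Y[1]))
```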
Orthogonal Procrustes Problem.
An orthogonal Procrustes problem is to solve for an orthogonal transformation matrix O such that PO is as close to Q as possible, i.e., to minimize ‖PO − Q‖_F subject to OᵀO = I. This problem is not easy to solve directly, and it can be optimized by alternating optimization. Namely, the matrix P is first fixed, and the matrix Q is optimized to reduce the objective function value; then the matrix Q is fixed, and the orthogonal transformation matrix O is optimized to reduce the objective function value.
Algorithm Framework.
For a general hash learning algorithm, the length of the learned hash code is always fixed. AVBH uses the idea of the asymmetric hashing algorithm: the hash code for the dataset is short and variable in length, while the hash code for a query point is long and fixed. The steps of the AVBH hashing algorithm are shown in Figure 1, which mainly includes the dataset encoding steps ①-③ and the query point encoding step ④. The dataset encoding section consists of two phases: random Fourier feature encoding (RFF encoding) and variable bit encoding (AVBH encoding). First, step ① uses the random Fourier feature (RFF) to map the dataset and obtain the RFF encoding. After RFF coding, considering the difference in RFF coding frequencies, the frequencies are sorted in step ②. According to the requirement, the original dataset can be divided into subsets encoded with lengths k_1, k_2, ..., k_L, as shown in the figure. As shown in step ③, the AVBH subset encodings of length k_1, k_2, ..., k_L are duplicated (n/k_1), (n/k_2), ..., (n/k_L) times sequentially to form n-dimensional Hamming codes; a toy sketch of this cascading step is shown below.
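This sketch only illustrates the cascading (tiling) of a variable-length code to a common length n; the frequency-based grouping that decides which point gets which length is simplified away here.

```python
import numpy as np

def cascade(code_k, n):
    """Repeat a k-bit code (entries +1/-1) n // k times to form an n-bit code."""
    k = code_k.shape[-1]
    assert n % k == 0, "n must be a multiple of k for cascading"
    return np.tile(code_k, n // k)

n = 16
short_code = np.array([+1, -1, -1, +1])                        # 4-bit code (low-frequency point)
long_code = np.array([+1, -1, -1, +1, +1, -1, +1, -1])         # 8-bit code (high-frequency point)
print(cascade(short_code, n))
print(cascade(long_code, n))
```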
In the query point encoding section, the query point is quantized into an RFF encoding of length n in step ④.
Objective Function.
The target of the AVBH method is to obtain L groups of hash encodings with lengths k_1, k_2, ..., k_L through the hash function G(x), where N = Σ_{l=1}^{L} N_l and n = Σ_{l=1}^{L} k_l. This divides the dataset into subsets B^(1), B^(2), ..., B^(L) according to the RFF encoding frequency. By cascading each subset (n/k_1), (n/k_2), ..., (n/k_L) times, respectively, we can get L groups of n-bit hash codes. Then B^(1), B^(2), ..., B^(L) are combined to get an n-bit-long hash code matrix B. The AVBH method calculates the similarity by computing the Hamming distance between the hash code of the query point and the concatenated dataset codes during the query process. Therefore, for the dataset, we need to construct the hash mapping function such that the L groups of hash codes, with lengths k_1, k_2, ..., k_L, respectively, preserve the original information as much as possible. The AVBH method therefore obtains the hash mapping function by minimizing the reconstruction error (8) between the cascaded encoding B and the n-dimensional sample vectors Y, i.e., ‖Y − RB‖_F², where R is an orthogonal n × n rotation matrix, namely RᵀR = I. Combining the properties of matrix traces and the definition of the F-norm of matrices, the reconstruction error can be expanded as in (9). As the unknown variables B and R in formula (8) appear as a product, the expanded formula (9) contains terms with two unknown variables, so it is difficult to solve directly. After further simplification, we can get formula (10). As B ∈ {+1, −1}^{n×N}, it is easy to get tr(BᵀB) = nN. As RᵀR = I, tr(YᵀY) is unrelated to B and R; as a result, tr(YᵀY) = c, where c is unrelated to B. So formula (10) is simplified to the quantization error (11), which amounts to maximizing tr(BᵀRᵀY). Thus, minimizing the reconstruction error (8) equals minimizing the quantization error (11). The objective of AVBH when encoding the dataset is therefore to minimize the reconstruction error of the concatenated n-bit encoding by finding the orthogonal rotation matrix R. In the extreme case of a uniformly distributed dataset, there is no significant difference in the frequencies of the hash encodings of the dataset, and the AVBH method then degenerates into the ACH algorithm [16]. Compared with the ACH hashing algorithm, the AVBH hashing algorithm is more adaptable to real data because it can adapt to data of various distributions, and its generalization ability is stronger.
Optimization Algorithm.
The objective function (11) can be optimized by alternating optimization. Namely, the rotation matrix R is first fixed, and the encoding matrix B is optimized to reduce the objective function value.
Then the encoding matrix B is fixed, and the rotation matrix R is optimized to reduce the objective function value. In this way, the value of the objective function decreases until it converges. The following is a discussion of how to optimize the value of the objective function.
(1) Fix the rotation matrix R, and optimize the encoding matrix B. Given V = RᵀY, let V_lm be the submatrix of V consisting of rows (m − 1) × k + 1 to m × k and columns (l − 1) × k + 1 to l × k of V. From formula (11), we can derive formula (12). As n, N, and c are unrelated to B, for a fixed R, the problem of minimizing (12) is equal to the problem of maximizing formula (13). As B^(l)_ij ∈ {+1, −1}, the optimal analytic solution of formula (13) is obtained by taking signs, as given in formula (14). (2) Fix the encoding matrix B, and optimize the rotation matrix R.
Under RᵀR = RRᵀ = I, the problem of minimizing formula (11) is equivalent to an orthogonal Procrustes problem [9]. The optimal solution of such problems with respect to R is given by formula (15). Hence, the problem of optimizing R to obtain the minimum value of formula (15) is equal to the problem of maximizing formula (16). By calculating the SVD of YBᵀ, we get YBᵀ = UΩCᵀ (17), where U is the matrix of left singular vectors, C is the matrix of right singular vectors, and Ω is the diagonal matrix of the corresponding singular values, whose diagonal elements satisfy Ω_ii ≥ 0, i ∈ [1, n]. By combining formulas (16) and (17), we get formula (18). Given A = (RC)ᵀU and R̃ = RC, A_ii is the diagonal element of A, and R̃_i and U_i, respectively, represent the i-th rows of R̃ and U. By the Cauchy-Schwarz inequality [11], formula (19) is obtained, so formula (18) can be written as formula (20). Combining formula (19), when R̃_i = U_i, formula (20) takes its maximum value. For R̃_i = U_i, we get formula (21): when R = UCᵀ, formula (16) takes its maximum value and formula (15) takes its minimum value. As a result, we can get the optimal result by formula (21).
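An ITQ-style sketch of this alternating scheme is given below: with R fixed, the binary codes are obtained by taking signs (the analogue of formula (14)); with B fixed, R = UCᵀ from the SVD of YBᵀ (the analogue of formula (21)). It deliberately ignores the per-group variable-bit blocks, so it is a simplified illustration rather than the full AVBH solver.

```python
import numpy as np

def alternate_optimize(Y, n_bits, iters=20, seed=0):
    """Minimize ||Y - R B||_F^2 over binary B and orthogonal R (simplified, fixed-length codes).
    Y: (n_bits, N) matrix of RFF features."""
    rng = np.random.default_rng(seed)
    R, _ = np.linalg.qr(rng.normal(size=(n_bits, n_bits)))   # random orthogonal start
    for _ in range(iters):
        B = np.sign(R.T @ Y)                   # fix R: optimal binary codes
        B[B == 0] = 1
        U, _, Ct = np.linalg.svd(Y @ B.T)      # fix B: orthogonal Procrustes solution
        R = U @ Ct                             # R = U C^T
    return B, R

Y = np.random.default_rng(1).normal(size=(16, 200))   # toy 16-D features for 200 points
B, R = alternate_optimize(Y, n_bits=16)
print(B.shape, np.allclose(R.T @ R, np.eye(16), atol=1e-8))
```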
Dataset Encoding.
When the objective function value converges, we can get the mapping function G(y) of AVBH for the dataset according to formula (14), where y is the random Fourier feature (RFF) obtained in the mapping stage from the sample point x.
Query Point Encoding.
The optimal rotation matrix R can be obtained from the training process of dataset coding. For a data point q in the query set, the main goal of encoding is to keep as much accurate information as possible, so the query set encoding does not need to be compressed into a shorter code and is mapped to a hash code of the full length n. Combining formula (14), we can get the mapping function of AVBH for the query set.
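Once the dataset codes are cascaded to n bits and the query is encoded directly at n bits, retrieval reduces to ranking by Hamming distance, as sketched below with invented ±1 codes.

```python
import numpy as np

def hamming_rank(query_code, data_codes):
    """Rank database items by Hamming distance to the query (codes are +1/-1 vectors)."""
    n = query_code.shape[0]
    dists = (n - data_codes @ query_code) / 2   # for +1/-1 codes, Hamming = (n - dot) / 2
    return np.argsort(dists), dists

rng = np.random.default_rng(3)
n = 16
data_codes = np.sign(rng.normal(size=(100, n)))   # cascaded n-bit dataset codes
query_code = np.sign(rng.normal(size=n))          # n-bit query code
order, dists = hamming_rank(query_code, data_codes)
print("closest item:", order[0], "Hamming distance:", dists[order[0]])
```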
Convergence Analysis of AVBH.
According to the objective function (8), we can get the decomposition into Loss_1 and Loss_2, where D is an n × N constant matrix satisfying the following two conditions: (1) the signs of the elements of D, i.e., positive or negative, are the same as those of the corresponding elements of RᵀY, and (2) each element of D is not greater than the corresponding element of RᵀY. Therefore, the optimization goal for Loss(B, R) is transformed into two suboptimization problems, i.e., Loss_1(B) and Loss_2(R). Specifically, for the subproblem Loss_1(B), formula (14) gives the optimal solution; therefore, it can be guaranteed that the value updated by formula (14) is less than or equal to the value obtained before. For the subproblem Loss_2(R), formula (21) gives the optimal solution; therefore, it also can be guaranteed that the value updated by formula (21) is less than or equal to the value obtained before.
Combining the two parts, the combination of (14) and (21) guarantees that the updated value is less than or equal to the value obtained before the update. We can conclude that the AVBH algorithm is convergent.
Experimental Datasets
CIFAR-10 1 .
It is a set of 60,000 32 × 32 colour images in 10 categories with each category containing 6,000 images. In this experiment, 320-D gist features were extracted for each image in the dataset. We randomly selected 1,000 images as the test data and the remaining 59,000 as the training data. In the training data, the closest 50 data points (based on the Euclidean distance) from a test data point were regarded as its nearest neighbours.
SIFT 2 .
It is a local SIFT feature set containing 1,000,000 128-D images. 100,000 of these sample images were randomly selected as the training data and 10,000 of other sample images as the test data.
GIST 3 .
It is a global GIST feature set containing 1,000,000 960-D images. 500,000 of these sample images were randomly selected as the training data and 1,000 sample images as the test data.
Performance Evaluation.
The performance of AVBH was evaluated mainly by the relationship between the accuracy of the query (precision) and the recall rate (recall), where precision is the fraction of retrieved points that are true nearest neighbours and recall is the fraction of true nearest neighbours that are retrieved. For the sake of fairness, the average encoding length of AVBH was set to the encoding length of the other methods under a given dataset. Figures 2-4 show precision-recall curves for Euclidean neighbour retrieval for several methods on CIFAR-10, SIFT, and GIST with Euclidean neighbour ground truth.
Our method, AVBH, achieves better precision performance on the benchmark datasets. As the AVBH algorithm uses variable bit codes, its total code length is smaller than that of the other algorithms. As a result, our method effectively reduces the cost of storage and improves the accuracy of the query.
Conclusion
In this paper, an asymmetric learning to hash with variable bit encoding algorithm was proposed. By computing frequency statistics of the random Fourier feature encoding for the dataset, we compress high-frequency hash codes into longer encoding representations and low-frequency hash codes into shorter encoding representations. For a query point, we quantize it to a long bit hash code and compare it with the same-length cascade-concatenated data points to retrieve the nearest neighbours. This ensures that the original data information is preserved as much as possible while the data are compressed, which balances coding compression and query performance. Experiments on open datasets show that the proposed algorithm effectively reduces the cost of storage and improves the accuracy of the query. As we use a two-stage algorithm framework for generating the hash codes, the training stage costs a lot of time. In future work, we will work on simplifying the training process.
Data Availability
The datasets for the experiments of this paper are as follows.
(1) CIFAR-10 (available at http://www.cs.toronto.edu/~kriz/cifar.html): it is a set of 60,000 32 × 32 colour images in 10 categories with each category containing 6,000 images. In this experiment, 320-D gist features were extracted for each image in the dataset. We randomly selected 1,000 images as the test data and the remaining 59,000 as the training data. In the training data, the closest 50 data points (based on the Euclidean distance) from a test data point were regarded as its nearest neighbours. (2) SIFT (available at http://corpus-texmex.irisa.fr): it is a local SIFT feature set containing 1,000,000 128-D images. 100,000 of these sample images were randomly selected as the training data and 10,000 of other sample images as the test data. (3) GIST (available at http://corpus-texmex.irisa.fr): it is a global GIST feature set containing 1,000,000 960-D images. 500,000 of these sample images were randomly selected as the training data and 1,000 sample images as the test data.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
Effect of Erythrina indica on stress-induced alteration of lipid profile in rats
Introduction
Modern life is full of hassles, deadlines, frustrations, and demands. Many people are affected by stress; it is so commonplace that it has become a way of life. Stress is a normal physical response to events that make you feel threatened or upset your balance in some way [1]. Stressful stimuli may influence the onset and progression of a number of disorders in human beings, leading to hypertension, diabetes, stroke, cancer, depression, etc. [2]. There are various reports that stress can change the levels of certain hormones like insulin, cortisol and epinephrine [3][4]. All these hormones affect the lipid profile of the body to a great extent. In animal studies it was found that stress raises serum and tissue cholesterol levels of rats on a normal diet [5][6]. In the last few decades, the reputation of medicinal plants and herbal remedies has increased globally due to their therapeutic efficacy and safety. The richness in bioactive constituents of the leaves of Erythrina indica prompted us to undertake the present study; the lipid profile of treated animals showed that the extract significantly reduces lipid levels (cholesterol and triglyceride) and attenuates the stress-induced alteration of the lipid profile in rats.
Animal
Albino rats of either sex (150-200 g) were used. They were housed in plastic cages at an ambient temperature of 26 ± 2 °C and 45% to 55% relative humidity with a standard 12 h light or dark cycle. They had free access to food and water and were acclimatized for at least one week before experimentation. Each experimental group consisted of a minimum of 6 animals. National Research Council guidelines for the care and use of laboratory animals were followed throughout the study.
Objective: The study was undertaken to evaluate the effect of different fractions (petroleum ether, ethyl acetate and chloroform) of the ethanolic extract of Erythrina indica, an indigenous plant used in ayurvedic medicine in India, on stress-induced alteration of the lipid profile in rats. Methods: The study was carried out on albino rats (150-200 g) of either sex, divided into four groups of 6 each. Group I served as control; Groups II, III and IV were treated with different fractions (petroleum ether, ethyl acetate and chloroform) of the ethanolic extract of Erythrina indica at 150 mg/kg, p.o., in a single daily dose from day 1 to day 22. Physical stress of 5 hours of swimming was given to all the groups on day 22. Blood samples were withdrawn in group I on day 0 (blank control) and on day 22 after stress (positive control). Blood samples were withdrawn in groups II, III and IV on days 3, 7, 14 and 21 and on day 22 after stress. Results: All the blood samples were analyzed for total cholesterol (TC), HDL cholesterol (HDLC) and triglyceride (TG) by enzymatic methods, and LDL and VLDL cholesterol were calculated on the basis of Friedewald's equation. After 21 days of treatment, the changes in serum lipid levels in rats were insignificant. In the control group, stress increased the lipid levels in rats significantly, except HDL cholesterol, which was reduced insignificantly. When Erythrina indica treated rats were subjected to stress on day 22, their serum lipid levels increased significantly, except HDL cholesterol, which was reduced insignificantly. Conclusions: The study indicates that the various fractions (petroleum ether, ethyl acetate and chloroform) of the ethanolic extract of Erythrina indica are effective in attenuating stress-induced dyslipidemia in rats.
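The Friedewald relations mentioned here (VLDL = TG/5 and LDL = TC − HDL − TG/5, with all values in mg/dL) can be written as a small helper; the example values below are purely illustrative.

```python
def friedewald(total_chol, hdl, triglycerides):
    """Estimate VLDL and LDL cholesterol (mg/dL) from the Friedewald equation.
    Not valid when triglycerides exceed roughly 400 mg/dL."""
    vldl = triglycerides / 5.0
    ldl = total_chol - hdl - vldl
    return vldl, ldl

# Illustrative values for one serum sample (mg/dL)
vldl, ldl = friedewald(total_chol=180.0, hdl=45.0, triglycerides=120.0)
print(f"VLDL = {vldl:.1f} mg/dL, LDL = {ldl:.1f} mg/dL")
```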
Preparation and fractionation of crude extracts
Crude extract was obtained through a cold extraction process. The coarse powder was submerged in ethanol and allowed to stand for 10 days with occasional shaking and stirring. When the solvent became concentrated, the alcohol content was filtered through cotton and then through filter paper (Whatman filter paper no. 1). Then the solvent was allowed to evaporate using a rotary evaporator at a temperature of 40-45 °C. Thus, the highly concentrated crude extract was obtained. It was then fractionated using petroleum ether, ethyl acetate and CHCl3. The solvents of these fractions were evaporated on a rotary evaporator and then dried under mild sunlight. The dried fractions of extract were then preserved in the freezer for the experimental uses [7].
Drugs
Erythrina indica leaf extract was used in the present study. Erythrina indica (150 mg/kg) was administered once daily, orally, for 22 days in suspension form through a rat feeding cannula on an empty stomach in the morning. The dose was extrapolated from the human dose on a body surface area basis [7].
Stress
A 5-hour swimming stress [3,8] was given to all the animals on day 22. This includes active swimming and an immobility period. For swimming, a plastic tub (24" in height and 40" in diameter) half filled with tap water and maintained at a room temperature of 28 °C was used.
Blood Sample
The animals were anaesthetized with ether rapidly, within 2 minutes, according to a stress-free procedure; this does not cause stress to the animal. The blood sample was collected from the retrobulbar plexus immediately after anaesthetization. Serum was separated and kept at 4 °C until use.
Procedure
The experiment was designed to study the effect of stress on the lipid profile and the effect of Erythrina indica on stress-induced alteration of the lipid profile in rats. For this experimental study, rats were divided into four groups of 6 each. The treatment schedule was as follows: Group I, control; Group II, petroleum ether fraction (150 mg/kg body wt.); Group III, ethyl acetate fraction (150 mg/kg body wt.); Group IV, chloroform fraction (150 mg/kg body wt.). In the control group, animals were treated with 0.5 mL normal saline (p.o.) daily from day 1 to day 22. Serum lipid levels were estimated on day 0 (blank control) and on day 22 after stress (positive control). Animals in groups II, III and IV were treated with the respective fractions of Erythrina indica (150 mg/kg p.o.) daily from day 1 to day 22. Serum lipid levels were estimated on days 3, 7, 14 and 21 and on day 22 after stress.
Statistical Analysis
Results were expressed as Mean ± SEM, and Student's t-test was applied for analysis of the data. A P value < 0.05 was considered significant.
Discussion
Swimming and forced water swimming in small laboratory animals have been widely used for studying the physiological changes and the capacity of the organism in response to stress. Swimming is not always a simple exercise stress because emotional factors are difficult to eliminate [10]. The change in lipid levels in response to stress is rather contradictory and may depend on situational, environmental and inter-individual factors. Stress affects the hypothalamic-pituitary-adrenal axis and various neurotransmission systems, such as the dopaminergic, cholinergic, 5-hydroxytryptaminergic, GABAergic and benzodiazepine systems. The secretion of various hormones is also altered by stress, such as CRH, GH, insulin, epinephrine and cortisol. The physiological mechanism of stress-induced changes in lipid levels remains largely unelucidated. It appears that the hypothalamic-pituitary-adrenal (HPA) axis contributes to the stress-induced cholesterol changes. Stress increased serum lipid levels significantly in all groups (i.e. control and Erythrina indica treated), except HDL cholesterol, which was reduced insignificantly. When the rise in serum lipid levels after stress in groups II, III and IV (Erythrina indica treated rats) was compared with the positive control (Table 2), the rise in serum lipid levels in Erythrina indica treated rats was significantly less than in control rats, except HDL cholesterol, which was reduced insignificantly. It can be concluded that Erythrina indica (150 mg/kg p.o.) is effective in attenuating stress-induced dyslipidemia in rats. The present study is a preliminary attempt at evaluating the stress-induced alteration of the lipid profile using Erythrina indica leaf extract. Further phytochemical and pharmacological investigation is warranted in this direction for establishing its detailed mechanism of action and for substantiating its traditional and folk claims.
[Table footnote: Values are Mean ± SEM; statistical analysis by Student's t-test, n = 6; *P < .05, **P < .01, ***P < .001 compared to before stress (same group); @P < .001 compared to the control group after stress; NS = not significant.]
Conflict of interest statement
We declare we have no conflict of interest.
Theoretical Modeling of Absorption and Fluorescent Characteristics of Cyanine Dyes
The rational design of cyanine dyes for the fine-tuning of their photophysical properties undoubtedly requires theoretical considerations for understanding and predicting their absorption and fluorescence characteristics. The present study aims to assess the applicability and accuracy of several DFT functionals for calculating the absorption and fluorescence maxima of monomethine cyanine dyes. Ten DFT functionals and different basis sets were examined to select the proper theoretical model for calculating the electronic transitions of eight representative molecules from this class of compounds. The self-aggregation of the dyes was also considered. The pure exchange functionals (M06L, HFS, HFB, B97D) combined with the triple-zeta basis set 6-311+G(2d,p) showed the best performance during the theoretical estimation of the absorption and fluorescent characteristics of cyanine dyes.
Introduction
Fluorescent dyes are widely used for the detection and quantification of nucleic acids (NA) and proteins and are applied in real-time PCR, gel electrophoresis, flow cytometry, microscopy, etc. [1][2][3]. This is due to the ability of specific fluorescent dyes to bind to various target biomolecules in a mostly noncovalent mode, leading to changes in the fluorescent properties of the respective dye. Any significant change in the photophysical properties of the dye would be useful, but the most used one is the increase in the emission intensity of the dye upon binding. Cyanine dyes are a wide class of cationic compounds that have been proven to be efficient probes for nucleic acid detection [4] due to the fact that they have very low fluorescence intensity before binding, and this intensity increases significantly after binding to NA. Two types of nucleic acid binding have been demonstrated for these compounds: intercalation and minor groove binding [5][6][7][8][9]. Cyanine dyes are known to extend over the visible and near infrared spectrum due to changes in the length of the central polymethine bridge or the heterocycles [4,10]. The dye spectrum can be fine-tuned by introducing substituents into the aromatic heterocycles [11]. Because of their importance as fluorogenic nucleic acid probes, the cyanines have been the subject of versatile research in the last decade [7,[11][12][13][14][15][16][17][18]. The development of new fluorescent probes for dsRNA is even more important nowadays [18,19]. The broad application of cyanine dyes in medicine and diagnostics [20][21][22][23] fuels the interest in them. The rational design of new functional materials requires a deep understanding of the driving forces behind the changes in the photophysical properties of the synthesized dyes as well as the binding mode toward the biomolecular targets. The rational approach to that problem requires a body of knowledge about the properties of the dye itself and the changes that these properties undergo upon binding. Undoubtedly, this knowledge includes experimental synthetic and spectroscopic studies as well as theoretical considerations of the dyes' properties for resolving the factors governing the binding and selectivity towards nucleic acids. Quantum chemical computations provide information that allows a deeper understanding of the DNA-binding mechanism, the structure of the ligands and complexes, and the photochemical and spectral characteristics of the dyes [11,14,[24][25][26][27][28]. The present study aims to assess the applicability and accuracy of several DFT functionals for calculating the absorption and fluorescence maxima of monomethine cyanine dyes. It is generally accepted that DFT provides an adequate description of the geometry and physico-chemical properties of organic compounds in their ground state [29]. The Time-Dependent Density Functional Theory (TDDFT) formalism is considered to be an adequate and robust tool for the computation of the electronic structure and geometry in the excited state for various organic compounds [14,30,31]. In some earlier studies, the TDDFT approach was considered to have poor performance when studying the photophysical properties of cyanines due to the overestimation of the electron transition energies for this class of compounds [11,26,[31][32][33]. It has been discussed that TDDFT's poor performance for cyanines is due to the multi-reference nature of the electronic states of the dyes, especially when compared to the results of the CASPT2 method [34,35].
The applicability of the TDDFT approach for characterizing cyanine dyes has recently been reconsidered [14,24]. A comparison between the performance and accuracy of several Minnesota and PBE functionals with respect to Quantum Monte Carlo and CASPT2 calculations for cyanines was published by D. Truhlar and co-authors [14]. The work of Send et al. [24] also supports the use of TDDFT by considering the electronic excitations of simple cyanine dyes compared to QMC, CASPT2, and coupled cluster methods (up to CC3). A conclusion was drawn that TDDFT is an adequate and reliable tool for the calculation of the excited-state electronic structures of organic molecules [32,36], including the class of cyanine dyes [14]. The combination of the functional and the basis set within the TDDFT method must be chosen carefully for the specific fluorophore. Thus, taking as settled that TDDFT does not show fundamental shortcomings for cyanines, the aim of the present study concerns TDDFT applications for a specific class of cyanines with a more complex structure. The objective is to find functional-basis set combinations that are suitable for describing the specific chromophore of the group of monomethine cyanine dyes studied here. We examined ten DFT functionals and different basis sets to select the proper theoretical model for calculating the electronic transitions of cyanine dyes. The theoretical results were validated against experimental data for the dyes studied.
Computational Methods
Quantum chemical computations were applied to simulate the geometry, electronic structure, and spectral properties for a series of cyanine dyes.
The geometry optimization and photophysical properties of the monomers of thiazole orange (TO) and the seven analogues that were studied were computed using the G16 software package [37]. The minimum energy structures in the ground state were optimized at the DFT [38] theory level using the B3LYP hybrid function in conjunction with the 6-31G(d,p) [39] basis set, and for the iodide counterions, only the SDD basis set and effective core potential were used [40][41][42][43]. The plausible TO dimers were optimized at M062X/6-31G(d,p) (SDD basis set used for the iodide counterions).
The effect of the medium was taken into account at each step by means of the PCM formalism [44,45]. All of the computations were performed in a water medium to reproduce the experimental conditions (TE buffer). In order to verify that each optimized structure is a minimum of the potential energy surface, an analysis of the harmonic vibrational frequencies was performed using the same method/basis set, and no imaginary frequency was found. To determine the absorption wavelengths, the lowest energy absorption transitions were evaluated by TDDFT calculations of the vertical excitations. Ten different functionals (B3LYP [46], PBE0 [47,48], M062X, M06, M06L [49], BH and HLYP [50], CAM-B3LYP [51], HFS [52,53], HFB [54], and B97D [55]), combined with the 6-311+G(2d,p) basis set [56], were used to assess the accuracy of the functionals in predicting the spectral properties of cyanine dyes. The procedure for vertical absorption and emission computations is described in our previous work [30]. The following steps were included in the computations: (i) geometry optimization of the ground-state structure at the DFT level and the computation of vibrational frequencies using the same method/basis set to verify the optimized structure. (ii) TDDFT calculations of vertical excitations to estimate the lowest energy absorption transition. Six excited singlet states were considered, and the lowest energy transition with non-zero oscillator strength was taken into account for each of the monomer dyes from the series. Comparisons were made for two of the cyanine dyes using calculations that considered 12, 20, and 24 excited states. As a result, no significant difference in the vertical excitation energy was obtained. Twenty excited singlet states were considered for the dimer absorption spectra computations. (iii) After the excited state of interest was identified, TDDFT geometry optimization with equilibrium linear response solvation was performed. The excited state was optimized at the TDDFT level starting from the ground-state geometry. Frequency calculations and the absence of imaginary frequencies confirm the equilibrium of the excited-state geometry. (iv) Fluorescence electronic transitions: TDDFT calculations of the vertical de-excitations based on the optimized geometry of the excited state. Vertical excitation and de-excitation energies were calculated without state-specific correction. The computed absorption and emission transitions in solution were compared to the experimental spectral data.
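For readers who want to reproduce this kind of workflow, the sketch below assembles Gaussian 16 input decks for two of the steps described above (ground-state optimization with frequencies, then TDDFT vertical excitations in PCM water). The route keywords mirror the methods section, but the geometry, charge/multiplicity, file names and the exact route combinations are illustrative placeholders, not the authors' actual inputs.

```python
# Minimal sketch: building Gaussian 16 input decks for two steps of the workflow
# described above (ground-state optimization, then TDDFT vertical excitations with
# PCM water).  The molecule name, charge/multiplicity and geometry are placeholders.

def g16_input(route: str, title: str, charge: int, mult: int, xyz: str) -> str:
    """Assemble a Gaussian 16 input file as a string."""
    return f"#P {route}\n\n{title}\n\n{charge} {mult}\n{xyz.strip()}\n\n"

# Placeholder Cartesian coordinates of the dye cation (not a real TO geometry).
geometry = """
C   0.000   0.000   0.000
N   1.350   0.000   0.000
"""

# Step (i): ground-state optimization + frequencies at B3LYP/6-31G(d,p), PCM water.
opt_deck = g16_input(
    "B3LYP/6-31G(d,p) Opt Freq SCRF=(IEFPCM,Solvent=Water)",
    "dye cation - ground-state optimization", 1, 1, geometry)

# Step (ii): TDDFT vertical excitations (6 singlet states) on the optimized geometry,
# shown here with one of the benchmarked functionals and the larger basis set.
td_deck = g16_input(
    "B97D/6-311+G(2d,p) TD=(NStates=6) SCRF=(IEFPCM,Solvent=Water)",
    "dye cation - TDDFT vertical excitations", 1, 1, geometry)

for name, deck in [("dye_opt.gjf", opt_deck), ("dye_td.gjf", td_deck)]:
    with open(name, "w") as fh:
        fh.write(deck)
```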
Geometry Optimization
A series of eight asymmetric cyanine dyes (Scheme 1) were modelled with the help of DFT and TDDFT calculations to gather deep insights into the geometry and electron density distribution in the ground and excited states and to achieve a better understanding of the electronic spectra of the dyes. The cyanine dyes that were used in the present study were previously synthesized by our group [18,57,58].

Scheme 1. Thiazole orange (TO) analogues. Dye labelling is the same as in the original papers.

The geometries of the cis- and trans-TO conformers optimized at the B3LYP/6-31G(d,p) theory level are presented in Figure 1. For all of the studied cyanine dyes, the trans conformer is more stable than the cis conformer. Geometry parameters and some spectral characteristics of several of the conformational TO states are provided in Table 1. The energy difference between the cis and trans TO conformers is 5.4 kcal/mol in favour of the trans conformer. The main difference between the two conformational states, cis and trans, is the dihedral angle between the quinoline and benzothiazole heterocycles, τ(S1C2C4C6). The cis conformer is not planar, and the dihedral angle τ(S1C2C4C6) is 125.7°. Due to some steric hindrance between the two heterocycles, the trans conformer is not fully planar. In the trans conformer, the angle τ(S1C2C4C6) is 17.8°, thus leading to a conjugation between the two heterocycles through the methine bridge. These theoretical results are in agreement with the results of other conformational NMR studies [59].
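To put the 5.4 kcal/mol cis-trans energy gap in perspective, a simple room-temperature Boltzmann estimate (our illustration, not a result from the paper) shows that the cis population would be on the order of 0.01%.

```python
import math

# Illustrative estimate (not from the paper): relative Boltzmann population of the
# cis conformer implied by the computed 5.4 kcal/mol preference for the trans form.
R = 1.987e-3      # gas constant, kcal/(mol*K)
T = 298.15        # K
dG = 5.4          # kcal/mol, trans favoured over cis

ratio_cis_to_trans = math.exp(-dG / (R * T))
print(f"cis/trans population ratio ~ {ratio_cis_to_trans:.1e}")   # roughly 1e-4
```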
Furthermore, due to variations in the counterion position and the influence of the counterion position on some photophysical TO characteristics, different trans conformations were considered. The geometry parameters and spectral characteristics of three trans conformers with different counterion positions are provided in Table 1. The calculations show that the position of the counterion has little influence on the Gibbs free energy of the conformers and their absorption maxima.
Modelling Spectroscopic Properties
Computing the molecular properties of organic compounds in the ground state is more or less systematic, while the theoretical calculation of excited state properties, such as absorption and fluorescence maxima, is not so trivial. A careful calibration of the theoretical model applied to perform computational studies for a series of molecules with a particular chromophore is of critical importance.
The absorption maxima of the dyes that were studied were computed using the B3LYP/6-31G(d,p)-optimized geometry of the molecules. Vertical excitations were obtained from single point TDDFT computations on the optimized geometries using ten different functionals: B3LYP, PBE0, M062X, M06, M06L, BH and HLYP, CAM-B3LYP, HFS, HFB, and B97D, to find an accurate description of the electronic excitations as well as to predict the absorption and emission spectral characteristics. The vertical excitations/absorption maxima were computed in the ground-state geometry. All of the computations were made in a water medium to mimic the experimental conditions. The solvent effect was modeled using the SCRF formalism IEFPCM.
TDDFT allows transition energies as well as excited-state properties such as dipole moments and emitting geometries to be computed [60,61]. Despite its huge popularity, the reliability of TDDFT results depends significantly on the selected exchange-correlation (XC) functional. The accepted accuracy of TDDFT computations is 0.2-0.3 eV [62]. The chemical accuracy of the 0.1 eV difference between the calculated and measured absorption maxima has not yet been reached.
In the literature, one of the ways to overcome this is to use range-separated hybrids (RSH) [62][63][64][65]. These functionals incorporate a growing fraction of exact exchange with increasing inter-electronic distance and allow the charge-transfer phenomena to be modelled accurately. Range-separated functionals (RSF) are a subgroup of hybrid functionals. While conventional (global) hybrid functionals such as PBE0 or B3LYP use fixed Hartree-Fock and DFT exchange, RSFs mix the two contributions based on the spatial distance between two points.
The predicted absorption wavelengths of the lowest electronic energy transitions and the respective oscillator strengths for the cis and trans TO conformers are listed in the last two columns of Table 1. Vertical excitation energies computed at the PBE0/6-311+G(2d,p) level of theory without any state-specific correction are reported in Table 1. It can be seen that the calculated absorption maximum of the cis conformer (477 nm) is bathochromically shifted compared to the trans absorption maxima (447-449 nm). The transitions of the two conformers show different oscillator strengths, the value being lower for the cis conformer.
The molecular orbitals involved in the S 0 →S 1 transitions for the cis and trans conformers were calculated. Figure 2 shows the ground-state orbital energy levels of the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), and the energy gap for the cis TO and the trans TO in water. The trends in electron density change can be illustrated through the molecular orbitals shape analysis. The HOMO has a higher electron density on the benzothiazole ring. As seen from Figure 2, the electron density is redistributed from the benzothiazole and moves toward the quinoline heterocycle of the cis and trans TOs. Strong charge transfer is observed from the benzothiazole ring toward the quinoline portion, accompanying the electronic transition in the cis TO conformer. The strong charge transfer observed for the cis conformer explains the red shift of the transition. For the other compounds being studied, only the most stable conformer was considered.
The influence of the basis set on the absorption maxima calculations was considered. The theoretical vertical excitation energies were computed with six Pople-type basis sets. The results that were obtained are summarized in Table 2. As seen from Table 2, the addition of one diffuse function to the 6-31G(d,p) basis set instantly increases the value of the calculated absorption maximum, thus improving the results obtained with 6-31+G(d,p). The best results were obtained with the large 6-311+G(2d,p) basis set (predicted maximum 2.71 eV (458 nm) vs. the experimental value of 2.47 eV (502 nm)). The difference between the theoretical estimation and the experimental value was 0.24 eV, which is an acceptable accuracy. Benchmark calculations on the absorption properties of various systems have demonstrated that the expected TDDFT accuracy is between 0.2 and 0.3 eV [30,62,66]. Further calculations of vertical absorptions for the series of eight cyanine dyes were carried out with different functionals combined with the triple-zeta basis set 6-311+G(2d,p). Table 3 summarizes the vertical absorption energies calculated with ten different TDDFT functionals: B3LYP, PBE0, M062X, M06, M06L, BH and HLYP, CAM-B3LYP, HFS, HFB, and B97D. The applied functionals differ in the form of their exchange functional and contain various percentages of HF exchange (0% to 54%). A comparison between the theoretical results obtained with the ten functionals combined with the triple-zeta basis set 6-311+G(2d,p) in a water medium and the experimental spectral data is presented in Table 3. The performance of the different functionals can be assessed by the mean absolute deviation (MAD) given in Table 3. All of the DFT functionals follow the experimental tendencies (Figure 3) and qualitatively describe the changes in the absorption maximum in the studied series. Such correlation between the experiment and theory indicates that the entire group of functionals is good enough for picturing the trends in the series. The deviation of the theoretically calculated absorption energies from the experimentally observed values for the electronic transitions (in eV) can be seen in Figure 4. It can be seen from Table 3 and Figure 4 that the hybrid functionals B3LYP, PBE0, and M06 (25-27% Hartree-Fock exchange) performed adequately for this class of dyes and can be used as a tool for calculating the electronic structures of monomethine cyanine dyes. Although they overestimate the transition energies, the MAD is in the admissible range of 0.2-0.3 eV.
The performance of the range-separated functional CAM-B3LYP is the worst in our case, although it has been recommended for the calculation of electronic charge transfer transitions [65,67,68].
Figure 4. Deviations of the theoretical TDDFT vertical excitation energies (in eV), computed with different functionals combined with the 6-311+G(2d,p) basis set in a water medium, from the experimental data (spectra in TE buffer) for the series of cyanine dyes.
In Gaussian 16, HFS stands for the Slater exchange. HFB is Becke's 1988 functional, which includes the Slater exchange with corrections involving the gradient of the density. The pure exchange functionals (M06L, HFS, HFB, B97D) show the best performance for the case study. All of these functionals have excellent performance and good predictability, the best being B97D with a MAD of 0.01 eV.
Based on these studies, we recommend that pure DFT functionals (M06L, HFS, HFB, B97D) be used to calculate the absorption properties of cyanine dyes.
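As an aside for readers reproducing this benchmark, the sketch below shows one way a MAD value of the kind reported in Table 3 can be computed from wavelength data. The numeric lists are invented placeholders, not the paper's values; only the 1239.84 eV·nm conversion factor is standard.

```python
# Illustrative sketch of a mean absolute deviation (MAD) between computed and
# experimental excitation energies.  The wavelengths below are made-up placeholders.

def nm_to_ev(wavelength_nm: float) -> float:
    """Convert an absorption maximum in nm to a transition energy in eV."""
    return 1239.84 / wavelength_nm

# Hypothetical computed vs. experimental absorption maxima (nm) for a dye series.
computed_nm = [458, 470, 465, 480]
experimental_nm = [502, 505, 498, 510]

deviations = [abs(nm_to_ev(c) - nm_to_ev(e))
              for c, e in zip(computed_nm, experimental_nm)]
mad_ev = sum(deviations) / len(deviations)
print(f"MAD = {mad_ev:.2f} eV")
```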
Fluorescence
As mentioned earlier, the fluorescent response of cyanine dyes is sensitive to the environment. TO and its analogs have almost no fluorescence in organic solvents and exhibit a serious enhancement of the fluorescence intensity in viscous solutions (such as glycerin) and in DNA/RNA. The nature of fluorescence quenching in asymmetric cyanine dyes has been elucidated in several studies [66,69,70] and has been attributed to intramolecular torsion in the excited state. Easy rotation in solvent quenches the fluorescence. The fluorescence quantum yield increases when this rotation is obstructed. The geometry of the first TO excited state was optimized at B3LYP and PBE0/6-311+G(2d,p) and is presented in Figure 5. The optimized ground-state geometry is planar, while the excited state is highly twisted. The vertical excitation leads to a locally excited state with a weak CT character (Figure 2) that has the same geometry as the ground state and has a planar structure. This local excited state has a transition energy of 2.51 eV (494 nm). The optimization of the first excited state S1 leads to a twisted geometry where the donor (quinoline moiety) and the acceptor (benzothiazole part) are perpendicular to one another (Figure 5). With the change in the dihedral angle between the two heterocycles (τ2, Figure 6), a fully twisted dark state is formed, in accordance with previous findings [11][12][13]. These conformational changes in the structure of the fluorophore lead to the formation of a twisted intramolecular charge-transfer (TICT) excited state (S1) [71]. The computed Stokes shift for TO is 1613 cm−1, which agrees with the experimentally measured 1193 cm−1. The fluorescence computations at B3LYP and PBE0 are in line with the experiment, although the transition energies are slightly overestimated.
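For completeness, the Stokes shift quoted above in wavenumbers follows from the absorption and emission maxima via 10^7 × (1/λ_abs − 1/λ_em) with λ in nm. The sketch below shows the arithmetic with placeholder wavelengths of our choosing, not the measured TO values.

```python
# Illustrative Stokes-shift arithmetic: shift (cm^-1) = 1e7 * (1/lambda_abs - 1/lambda_em),
# wavelengths in nm.  The wavelengths below are placeholders, not the measured TO values.
lambda_abs_nm = 501.0
lambda_em_nm = 533.0

stokes_shift_cm1 = 1e7 * (1.0 / lambda_abs_nm - 1.0 / lambda_em_nm)
print(f"Stokes shift ~ {stokes_shift_cm1:.0f} cm^-1")
```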
Figure 5. Schematic representation of the ground-state (S0) and excited-state (S1) geometries of TO.

The conformational change in the excited state, and the fact that the fluorescence intensity changes after binding of the dye to NA, thus leading to the fixation of the planar geometry, are the origin of the sensing mechanism of TO and its analogs.
Aggregation of the Dyes
Monomethine cyanine dyes tend to aggregate in aqueous solution due to hydrophobic interactions. It has been suggested that the hydrophobicity and polarizability of dyes favor π-stacking interactions [72]. The self-aggregation of both TO and Benzothiazole Orange dyes has been found to occur, but with different strength and other features depending on the dye itself [73]. An additional absorption band at 471 nm is observed in the absorption spectrum of TO alongside the dye's intrinsic absorption at 501 nm [73][74][75]. The band at 501 nm prevails at low concentrations, while at higher concentrations, the band at 471 nm predominates. This behavior is associated with the self-aggregation of the dye molecules and the formation of H-dimers [74]. H-dimers are formed by interactions between the heterocyclic systems of two dye molecules located one above the other (the molecules are superimposed), while in the case of J-dimers, the molecules slip relative to each other. To address this behavior, we modeled four different π-stacked TO H-dimers, and their optimized geometry is shown in Figure 7.
During TO dimer optimization, the main challenge is maintaining π-stacking. The use of global hybrids such as B3LYP and PBE0 is not possible since the structure of the dimers falls apart during optimization. The dimer structures were optimized using the M062x functional. The presence of the iodide counterions was explicitly considered when modelling the structures. Mooi and Heyne [76] studied the effect of various counterions on aggregation. Counterions have been shown to play a significant role in terms of structural dimer organization and specific ionic effects.
The Gibbs free energy resulting from dimer formation was calculated at M062X/6-31G(d,p) from the following reaction: 2TO → (TO)2. The optimized geometries of the H-dimers are shown in Figure 7, and the theoretical estimations are provided in Table 4. The most stable H-dimer is Dimer 2, where the donor part of the first molecule is above the acceptor part of the second TO molecule. The calculated Gibbs free energy resulting from the formation of the most stable dimer is −6.10 kcal/mol. The dimerization constant K has been experimentally measured in two different TO dimerization studies, with the following results: 3.1 × 10⁴ M⁻¹ from [73] and 2.5 × 10⁴ M⁻¹ from [77]. The Gibbs free energy of formation of the dimers was computed from the experimental constants using the equation ΔG = −RT lnK. The respective values were −6.12 kcal/mol and −6.00 kcal/mol. Thus, a very good agreement between the experiment and theory (−6.10 kcal/mol) was obtained in the present study.
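As a quick consistency check of the numbers quoted above, ΔG = −RT lnK with K ≈ 3.1 × 10⁴ M⁻¹ indeed gives roughly −6 kcal/mol; the temperature of 298 K is our assumption, since it is not stated explicitly here.

```python
import math

# Cross-check of the dimerization free energy: Delta G = -RT ln K at an assumed 298 K.
R = 1.987e-3   # kcal/(mol*K)
T = 298.15     # K (assumed room temperature)
K = 3.1e4      # M^-1, experimental association constant from the text

dG = -R * T * math.log(K)
print(f"Delta G = {dG:.2f} kcal/mol")   # about -6.1 kcal/mol
```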
The predicted vertical absorption values for the dimers are blue-shifted relative to the monomer absorption. The absorption maxima of the lowest-energy TO dimer (Dimer 2) were calculated with the HFS and HFB functionals, which were determined to be the most accurate for the monomer absorption prediction. The obtained theoretical values are provided in Table 5. As seen from the table, the absorption maxima computed with the HFB and HFS functionals are in very good agreement with the experimental values [73,75]. For comparison, the PBE0-calculated values are also provided in the table.
Dimer formation can be used as a model for the aggregation characteristics of TO analogs as well as for analysis of the change in the fluorescence of the dye after binding to DNA. It can be stated that aggregation-induced fluorescence is observed during the dye aggregation process and upon dye binding to NA.
Conclusions
A series of eight asymmetric cyanine dyes was modelled using DFT and TDDFT calculations. The calibration of the theoretical model for performing computational studies for dye molecules containing a particular chromophore was performed by examining the accuracy of ten functionals and six Pople-type basis sets. Theoretical results were validated against the experimental data. It was shown that the addition of one diffuse function to the 6-31G(d,p) basis set instantly increased the value of the calculated absorption maximum, thus improving the results obtained with 6-31+G(d,p). The large triple-zeta basis set 6-311+G(2d,p) showed the best performance. The comparison between the theoretical results obtained with the ten functionals combined with the triple-zeta basis set in a water medium and the experimental spectral data evince that the pure exchange functionals M06L, HFS, HFB, and B97D had the best performance in the case study.
Genetic associations with longevity are on average stronger in females than in males
It is long observed that females tend to live longer than males in nearly every country. However, the underlying mechanism remains elusive. In this study, we discovered that genetic associations with longevity are on average stronger in females than in males through bio-demographic analyses of genome-wide association studies (GWAS) dataset of 2178 centenarians and 2299 middle-age controls of Chinese Longitudinal Healthy Longevity Study (CLHLS). This discovery is replicated across North and South regions of China, and is further confirmed by North-South discovery/replication analyses of different and independent datasets of Chinese healthy aging candidate genes with CLHLS participants who are not in CLHLS GWAS, including 2972 centenarians and 1992 middle-age controls. Our polygenic risk score analyses of eight exclusive groups of sex-specific genes, analyses of sex-specific and not-sex-specific individual genes, and Genome-wide Complex Trait Analysis using all SNPs all reconfirm that genetic associations with longevity are on average stronger in females than in males. Our discovery/replication analyses are based on genetic datasets of in total 5150 centenarians and compatible middle-age controls, which comprises the worldwide largest sample of centenarians. The present study's findings may partially explain the well-known male-female health-survival paradox and suggest that genetic variants may be associated with different reactions between males and females to the same vaccine, drug treatment and/or nutritional intervention. Thus, our findings provide evidence to steer away from traditional view that “one-size-fits-all” for clinical interventions, and to consider sex differences for improving healthcare efficiency. We suggest future investigations focusing on effects of interactions between sex-specific genetic variants and environment on longevity as well as biological function.
Introduction
In general, females live longer and are less susceptible to mortality from infectious and non-communicable diseases compared to males [1,2]. The increased life expectancy of females as a phenomenon became apparent when data for people born in the late 19th century became available [3]. These data also suggested that the improved medical and social status of females during the last century may have benefitted females more than males with respect to longevity. Additional studies showed that males have had significantly higher mortality rates than females during famines and other natural disasters, worldwide [4][5][6]. During the European summer heat wave of 2003, among persons aged 65 or older, age-specific mortality rates of males were twice as high as those of females [7]. In countries where the data are available (China, South Korea, Italy, Spain, Denmark, Mexico, Portugal, Greece, Norway, England, the Philippines, Slovenia, and Czechia), age-specific fatality rates have been substantially higher among male COVID-19 patients than among their female counterparts, and the sex gap quickly increases with age after age 50 [8][9][10][11][12].
In general, the socioeconomic status, including education, income and occupation, as well as health status, as measured by functional capacities in activities of daily living, cognition, and physical performance, are substantially worse in females than in males in China [13] and elsewhere [14,15].The phenomenon that females on average live significantly longer than males in spite of having poorer health and lower socioeconomic status is known as the male-female health-survival paradox [1].
The scope of the influence of genetics on this paradox is not known. Previously published genome-wide association studies (GWAS) of longevity, adjusted for sex as a covariate, identified sex-independent genetic risk factors, such as APOE (Apolipoprotein E), FOXO1A (Forkhead Box O1A) and IL6 (Interleukin 6) [16][17][18][19], but did not investigate sex-specific genes [20]. Other studies have revealed significant sex-specific associations of genetic variants with different diseases and health outcomes [21]. The functional EXO1 (Exonuclease 1) gene promoter variant is associated with increased life expectancy in female centenarians only [22]. FOXO3 is the second most-replicated longevity gene [23][24][25][26], and several studies conducted using Western populations showed that it is a sex-specific longevity gene that favors males more than females [23,24]. However, this male-specific effect may not be universally valid, since a recent study showed that FOXO3 was protective in females and not males; the same study showed that SIRT1 (Sirtuin 1) was associated with longevity in males [26].
EXO1, FOXO3 and SIRT1 are single genes and don't reveal whether polygenetic associations with longevity are on average statistically more significant in males or females, which is jointly determined by small effects of numerous genes.Based on sex-specific GWAS on the Chinese Longitudinal Healthy Longevity Study (CLHLS) datasets, four groups of sex-specific genes were found to be associated with longevity [20].However, the study did not investigate the potential statistically more significant effect of polygenetic associations with longevity in females compared to males and vice versa.
Although research on sex differences in mortality and the socioeconomic and behavioral determinants have proliferated recently [15], as yet no study has addressed whether polygenetic associations with longevity are on average more significant in one sex or the other.The present study aims to address this important scientific question to better understand the male-female health-survival paradox.
Identification of eight groups of sex-specific genes jointly associated with longevity
We have identified eight groups of genes that are jointly associated with longevity at different significance levels. We identified these eight groups using datasets of the CLHLS GWAS and CLHLS healthy aging candidate genes, and they are listed in SM-Table 3a-3b. Table 1 summarizes the criteria for identifying each of these sex-specific groups: each set of sex-specific longevity genes was grouped by jointly significant association (P < 10⁻⁸) with longevity in one sex but not in the other sex (P > 0.05), based on polygenic risk score (PRS) analysis [20]. Using the PRSice software, a range of P T values was scanned to search for an ideal threshold P T giving the best fit [27]. Meanwhile, the approach used to analyze every individual sex-specific longevity-associated gene at a pre-defined significant P value in one sex but not the other (P T > 0.05) is described in SM4.
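To make the PRS-plus-logistic-regression idea concrete, here is a minimal, hedged sketch: the PRS is a weighted sum of risk-allele counts, which is standardized and then related to case/control status together with sex and a PRS-by-sex interaction term. The toy data, variable names, and the use of NumPy/statsmodels are ours for illustration; the authors' analyses used PRSice and the CLHLS genotype data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Toy data: allele counts (0/1/2) for m SNPs in n individuals, per-SNP effect sizes,
# sex (1 = male, 2 = female) and case status (1 = centenarian, 0 = control).
n, m = 1000, 50
genotypes = rng.integers(0, 3, size=(n, m))
betas = rng.normal(0, 0.1, size=m)          # per-allele log-odds weights
sex = rng.integers(1, 3, size=n)
status = rng.integers(0, 2, size=n)

# Polygenic risk score: weighted sum of risk-allele counts, then standardized.
prs = genotypes @ betas
prs = (prs - prs.mean()) / prs.std()

# Logistic regression of longevity status on PRS, sex, and the PRS-by-sex interaction,
# mirroring the kind of model used to test PRS-sex interaction effects.
female = (sex == 2).astype(float)
X = sm.add_constant(np.column_stack([prs, female, prs * female]))
fit = sm.Logit(status, X).fit(disp=False)
print(fit.summary(xname=["const", "PRS", "female", "PRS_x_female"]))
```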
Using these analytic approaches, we found that each of the eight groups of sex-specific genes listed in SM-Table 3a-3b is jointly and significantly associated with longevity in one sex (P = 8.7 × 10⁻¹⁶⁵ to 1.5 × 10⁻³⁷), but not jointly significant in the other sex (P > 0.05), while the PRS-sex interaction effects are highly significant (P = 5.2 × 10⁻¹⁰⁶ to 4.4 × 10⁻¹⁵) (SM-Table 5). To simplify the presentation of the eight groups of sex-specific genes, we only indicate individual P values for males and females of each group of genes throughout the text, and do not indicate the group's joint-significance P values, which are presented in SM-Table 5.
Comparisons of the average probabilities of longevity between males and females
Table 1 also presents the average probabilities of longevity conferred by each of the eight groups of longevity genes (formula (4)).The average probability of longevity is the likelihood that an individual with a given PRS value is a centenarian, as estimated by the logistic regression (Methods section M3).Table 1 Section A summarizes the results of analysis of the CLHLS GWAS datasets, where the average probabilities for females were substantially higher than in males by a margin of 21.3 %-24.0 % for all significance thresholds (see panels (I)-(III) in Table 1, Section A).The analysis summarized in Table 1 Section B utilized the CLHLS healthy aging candidate gene datasets, and confirmed this result, namely, that the average probabilities of longevity are substantially higher in females than in males in each group of the sex-specific longevity genes.
The female to male ratio of relative benefits associated with the sex-specific longevity genes
To quantify the sex differences in the benefits due to genetic associations with longevity, we estimate and compare female-to-male ratios of relative benefits (compared with the other sex) associated with male-specific genes of longevity and female-specific genes of longevity, respectively, employing a novel bio-demographic method described in Methods section M4 that is based on logistic regression analyses on standardized PRS. These results are summarized in Figs. 1-4.

Table 1. Comparisons of the average probabilities of longevity between males and females with various standardized PRS summarizing each of the eight groups of sex-specific genes jointly associated with longevity at different significance levels.
The female to male ratio of relative benefits due to male-specific genes of longevity
As shown in Fig. 1a, the probabilities of longevity associated with the positive joint effects of higher PRS (i.e., PRS > C) of the 11 male-specific genes are greater for males than for females, but when PRS < C, the probabilities of longevity for females are greater than for males, because the female curve in Fig. 1a is above the male curve. Fig. 1a also indicates that the relative benefits attributed to the 11 male-specific top genes of longevity (P < 10⁻⁵) are higher for females than for males. This assessment is based on the relative areas between the respective PRS curves on either side of the cross-over point C. The male relative benefit when PRS > C of these 11 male-specific top genes associated with longevity (P < 10⁻⁵) is 34.71, i.e., the triangle area A(1) in Fig. 1a. This is the sum of differences in probabilities of longevity between males and females when PRS > C. By contrast, relative benefits accrue to females when PRS < C of the 11 male-specific top longevity-associated genes, because the sum of differences in probabilities of longevity between females and males, the triangle area B(2) in Fig. 1a, is 105.59. Consequently, the female-to-male ratio of relative benefits accounted for by these 11 male-specific top genes of longevity is 3.04 (FM1 = B(2)/A(1) = 105.59/34.71), and the PRS-sex interaction effects are highly significant (P = 9.7 × 10⁻¹⁹), as described in the Fig. 1a legend.
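To illustrate the area-based estimator numerically, the sketch below uses two invented sex-specific logistic curves, locates the crossover PRS value C, and sums the probability differences on either side of C to form the ratio FM1 = B(2)/A(1). It is a crude grid-sum stand-in for the bio-demographic method of Methods section M4, not the authors' code, and the coefficients are made up.

```python
import numpy as np

# Invented logistic coefficients for illustration (intercept, slope on standardized PRS).
def p_long(x, b0, b1):
    """Probability of longevity P(x, s) from a fitted sex-specific logistic model."""
    return 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))

b_male = (-1.0, 0.9)     # hypothetical male model
b_female = (-0.6, 0.5)   # hypothetical female model

x = np.linspace(-4, 4, 801)                 # grid of standardized PRS values
pm, pf = p_long(x, *b_male), p_long(x, *b_female)

# Crossover point C: where the male and female probability curves intersect.
C = x[np.argmin(np.abs(pm - pf))]

# Male relative benefit above C, female relative benefit below C (sums of differences).
A1 = np.sum((pm - pf)[x > C])
B2 = np.sum((pf - pm)[x < C])
print(f"C = {C:.2f},  FM1 = B(2)/A(1) = {B2 / A1:.2f}")
```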
The same pattern holds true for the 35 male-specific strong genes (Fig. 2a) or the 191 male-specific moderate genes (Fig. 3a), namely that the relative benefits are all substantially higher in females than in males).The female-to-male ratios of relative benefits due to 35 male-specific strong genes of longevity (10 − 5 ≤P < 10 − 4 , Fig. 2a) or 191 male-specific moderate genes of longevity (10 − 4 ≤P < 10 − 3 , Fig. 3a) are all substantially larger than one, and the PRS-sex interaction effects are all highly significant (see Figs. 2a-3a legends).
The female to male ratio of relative benefits due to female-specific genes of longevity
The female relative benefits due to the positive joint effects of higher PRS (i.e.PRS > C) of the 11 female-specific top genes of longevity (P < 10 − 5 ) are 71.50 (area A(2) in Fig. 1b), which is the sum of differences in probabilities of longevity between females and males, all with PRS higher than C. On the other hand, when PRS < C, the male relative benefits of the 11 female-specific top genes of longevity are 13.21 (area B(1)).Thus, the female-to-male ratio of relative benefits due to the 11 female-specific top genes of longevity is 5.41 (FM2 = A(2)/B(1) = 71.50/13.21),and the PRS-sex interaction effects are highly significant (P = 9.1 × 10 − 8 ) (Fig. 1b legend).These results indicate that the relative benefits due to the 11 female-specific top genes of longevity (P < 10 − 5 ) are more apparent in females.
The estimates presented in Figs.2b-3b, based on the CLHLS GWAS datasets, reveal the same pattern as in Fig. 1b, namely, the relative benefits due to the 25 female-specific strong genes of longevity (10 − 5 ≤P < 10 − 4 ) or the 311 female-specific moderate genes of longevity (10 − 4 ≤P < 10 − 3 ) are all substantially much higher in females than in males (Figs.2b-3b legends).
Confirmation analyses using the CLHLS healthy aging candidate genes datasets
To further confirm the results based on the CLHLS GWAS datasets described above, we conducted additional South-North evaluation/replication PRS analyses on 30 male-specific and 74 female-specific genes of longevity (P < 9.0 × 10⁻³; Fig. 4a-b). These genes were identified and replicated individually, based on totally independent datasets of the CLHLS participants, which included 2972 centenarians and 1992 middle-age controls (SM3). The female to male ratios of relative benefits due to the 30 male-specific genes of longevity or 74 female-specific genes of longevity are all substantially larger than one, and the PRS-sex interaction effects are highly significant, confirming that the relative benefits due to sex-specific genes of longevity are much higher in females than in males.

Fig. 1a and b. The PRS analyses on the 11/11 male/female-specific top genes of longevity, using the CLHLS GWAS datasets. Fig. 1a: 11 male-specific top genes of longevity (P < 10⁻⁵ in males, but P > 0.16 in females). Fig. 1b: 11 female-specific top genes of longevity (P < 10⁻⁵ in females, but P > 0.12 in males). Notes: (1) P(x,s) in the vertical axis denotes the probability of longevity among persons with sex s (s = 1, male; s = 2, female) and the standardized polygenic risk score (PRS) value x, which summarizes the propensity to longevity of a group of sex-specific genes; P(x,s) is estimated based on the logistic regression model described in the Methods section M3; (2) x in the horizontal axis represents the PRS values estimated by the procedure described in the Methods section M2; C is the PRS value at which the probabilities of longevity in males and females are equal to each other (i.e., the male and female curves cross over at x(PRS) = C); (3) A(1), B(2), FM1, A(2), B(1) and FM2 are estimated using the method described in the Methods section M4.
Additional PRS analysis of randomly assigned female centenarian/control samples that have exactly the same size as that of male centenarians/controls
It is common in studies of centenarians to have more female than male participants.To test whether our findings are biased by this difference in sample sizes, we conducted additional analyses based on a sample constructed of equally sized groups stratified by sex: 564 male centenarians and 564 randomly selected female centenarians, and 773 male middle-age controls and 773 randomly selected female middle-age controls, using both the CLHLS GWAS and CLHLS healthy aging candidate genes datasets.The results of this analyses were consistent with our prior results, namely, genetic associations with longevity are on average stronger in females than in males (SM-Table 7 and SM-Figs.1-4).This indicates that our current results are not affected by the larger sample size of female centenarians than their male counterparts.The higher death rate among males may increase the statistical power of the male centenarian sample [20].
Analyses of all genes individually associated with longevity in both sexes or in one sex only
The genetic contributions to longevity in males and females are not attributed only to groups of sex-specific genes jointly associated with longevity (as outlined in section 2.1), but genes that are individually associated with longevity in both sexes or in one sex only also contribute.Thus, to obtain a complete picture of whether genetic associations with longevity are stronger in males or females, we further conducted logistic regression analyses on all genes that are individually associated with longevity in both sexes or in one sex only, and we used P < 0.05 as the individually significant P threshold.
Genes individually associated with longevity in both sexes
Using the CLHLS GWAS and the CLHLS healthy aging candidate genes datasets, we identified three groups of genes that are individually associated with longevity in both sexes at different significance levels (SM Tables 8a-8c). These groups differed by the stringency of the significance of association with longevity. One group, including 17 individual genes, had P < 10⁻⁴ in one sex and P < 0.05 in the other (SM Table 8a). The second group, composed of 92 genes, exhibited 10⁻⁴ < P < 10⁻³ in one sex and P < 0.05 in the other (SM Table 8b). The third group, composed of 128 genes, exhibited P < 0.05 in both sexes (SM Table 8c). For each group, the PRS values favor females over males (SM Fig. 5a-5c). The average probabilities of longevity of these three groups of genes are higher in females than in males by 23.4 %-32.9 %, and the sex differences are highly significant (Table 2 A, panel I).
Genes individually associated with longevity in one sex only
We also identified three groups of genes that are individually associated with longevity in one sex only at different significance levels (SM Tables 8d-8f), using the CLHLS GWAS datasets and the CLHLS healthy aging candidate genes datasets. The probabilities of longevity at various PRS values for each of these groups of genes also showed that genetic associations with longevity are on average stronger in females than in males (SM Fig. 6a-c). Table 2, sections A(II) and B(II), demonstrates that in females the average probabilities of longevity of the groups of genes individually associated with longevity in one sex only are higher than those in males by 9.3 %-31.1 %, and the sex differences are highly significant, with one exception (P = 0.095).
Sex-based pathways associated with longevity
Next, we used the FUMA tool to map these significant longevity-associated genes to the KEGG and REACTOME databases [28]. Notably, the male-specific genes were mainly enriched in extracellular matrix-related pathways, including glycosaminoglycan biosynthesis-heparan sulfate, NABA ECM glycoproteins, and cell-cell junction organization (SM Fig. 7a, P < 0.05). The association of altered extracellular matrix with age-associated diseases, such as Alzheimer's disease, has been widely documented [29]. By contrast, the female-specific genes were mainly enriched in the pathway of neuroactive ligand receptor interaction (SM Fig. 7b, P < 0.05).

Table 2. Comparisons of the average probabilities of longevity between males and females with various standardized PRS summarizing each of the groups of genes individually associated with longevity in both sexes or in one sex only.
Analyses of genetic heritability of longevity in males and females using all SNPs and the genome-wide Complex Trait Analysis (GCTA) method
Using the GCTA method (section 2.5) [30], we estimated the genetic heritability of longevity (h 2 ) for each sex, based on analyzing all SNPs of the CLHLS GWAS and healthy aging candidate genes datasets.The male and female h 2 are 0.172 and 0.187 respectively based on the CLHLS GWAS dataset; the male and female h 2 are 0.199 and 0.255 respectively based on the CLHLS healthy aging candidate genes dataset.Clearly, the genetic heritability of longevity is stronger in females than that in males.These results indicate that the GCTA analyses using all SNPs of the CLHLS GWAS datasets and healthy aging candidate genes datasets reconfirm that genetic effects on longevity are on average stronger in females than in males.
Discussion
Based on the statistical and bio-demographic analyses of CLHLS GWAS datasets (including 2178 centenarians and 2299 middle-age controls), we discovered that average probabilities of longevity, associated with each of the six exclusive groups of sex-specific genes at tiered significance levels, are all significantly higher in females than in males.Our bio-demographic analyses demonstrate that the female-to-male ratios of relative benefit due to each of the six exclusive groups of sex-specific genes associated with longevity are all substantially in favor of females, and the PRS-sex interaction effects are all highly significant.This discovery that females are genetically 'advantaged' with respect to polygenetic associations with longevity was replicated across different datasets of South and North regions of China.Additional analyses of both sex-specific and non-sex-specific genes confirmed that polygenetic associations with longevity are on average stronger in females more than males.
We further confirmed this conclusion using independent datasets of CLHLS healthy aging candidate genes (including 2972 centenarians and 1992 middle-age controls).Moreover, the GCTA analyses, using all SNPs of both the CLHLS GWAS datasets and healthy aging candidate genes datasets, respectively, add an additional level of confirmation, that genetic effects on longevity are on average stronger in females than in males.Interestingly, a recent study based on results from a large GWAS on human longevity (N ≈ 390,000) demonstrated that a higher genetic predisposition for longevity had a stronger association with behavioral phenotypes (such as education, smoking, body mass index (BMI) and depression) in females than in males [31], which is concordant with our findings.
Our results beg the question of why are genetic associations with longevity on average stronger in females than in males?The fact that females take much more care for childbearing and offspring than males may shed light on answering this question.Studies related to age-specific manifestation of genetic load suggest that fertility serves as the major factor of Darwinian natural selection for the accumulation of genetic mutation driving population survival and growth [32].The grandmother hypothesis [33] proposed that postmenopausal longevity in human evolved from grandmothers' assistance with childcare, which prolonged females' lifespan.
A study reported that female centenarians were four times more likely to have children in their forties than females lived only to age 73 [34].Other studies (including analyses based on the CLHLS datasets) also found that females' late childbearing after ages 35 or 40 is positively and significantly associated with longevity [35,36].A study indicated that the longevity advantage of females over males may be a by-product of genetic evolution that maximizes the length of time during which females could bear and take care of children and contribute to human reproduction [37].
The reproductive function of females might serve as a driving force for positive selection on the human genome and the related physiological features, such as immune response and metabolism.During periods of stress such as starvation, females use available amino acids to create deposits in the liver to support reproduction; conversely males slow down anabolic pathways and reserve carbohydrate stores for eventual use by the musculature [38].Sex differences in genetics also affect innate and adaptive immunity [39].Various studies have reported a more progressive decline in immunity and dysregulated inflammatory response with increase of age in males than in females [40,41].In the current study, our pathway analysis revealed neuronal system, glycosaminoglycan biosynthesis-heparan sulfate, NABA ECM glycoproteins, and cell-cell junction organization are male-specific pathways, and neuroactive ligand receptor interaction is the female-specific pathway.Interestingly, it is previously reported that reductions of heparan sulfate biosynthetic gene function increased lifespan in Drosophila parkin mutants [42].
The results of our study lead us to conclude that genetic associations with longevity are on average stronger in females than in males.This novel finding contributes to understanding the "male-female health-survival paradox".We believe the genetic associations with longevity that are on average stronger in females than in males discovered in the present study are not driven by the fact that female centenarians and middle-age controls on average live longer than their male counterparts.This is because all GWAS of longevity (including present study) and our replication/reconfirmation analyses are not based on sex differences in lifespan among cases of centenarians and middle-aged controls.Instead, all male and female centenarians are counted as longevity "cases", regardless of their lifespan differences, and all middle-aged male and female individuals are counted as "controls", disregarding their lifespan differences.As explained in eAppendix section S1 of the article [19], all prior GWAS of longevity [16][17][18] and the present study investigate genetic associations with longevity by comparing the cross-sectional frequencies of carrying the genetic variants between centenarian cases and middle-aged controls, with no effects of the survival time with respect to each of the male and female centenarians and middle-age controls.
Our study has limitations. Similar to all other case/control association studies, including GWAS, this study could not empirically reveal the causalities of our findings, warranting further in-depth investigations. Moreover, we could not test the hypothesis that females' genetic advantage in longevity may be partially due to their XX chromosomes compared to males' single X chromosome [43,44], because the CLHLS GWAS of longevity did not include SNPs on the X and Y chromosomes and mitochondria, for the same genotyping technical reasons as other GWAS [16,17]. In addition, sex-specific longevity might also be influenced by other potential confounding factors, such as inherent gender inequality, hormonal differences, environmental exposures, and behavioral factors, which are not covered in this study. Despite these limitations, the new findings of this study warrant further investigation through interdisciplinary collaborations, such as validation using datasets from other countries, international meta-analyses with much larger sample sizes, and laboratory tests on biological functions.
Nevertheless, the findings of this study provide evidence supporting the notion that genetic factors contribute significantly to sex-biased lifespan and healthspan, and might also help to develop prevention and treatment strategies for both male and female elderly patients with chronic diseases in the forthcoming era of precision medicine. Additionally, our study provides valuable insight for further understanding the molecular mechanisms and regulatory networks underlying sex-differential aging-related diseases.
Conclusion
Our sex-specific GWAS and novel bio-demographic analyses have contributed to a better understanding of sex differences in genetic variants that may well lead to different reactions to the same vaccine, drug treatment, and/or nutritional intervention, and also have potential application in studies of sex-specific differences in Alzheimer's disease [45,46]. Thus, steering away from the traditional "one-size-fits-all" view and considering sex differences may improve healthcare efficiency [47]. We suggest that further investigations focus on the effects of interactions between sex-specific genetic variants and environmental factors on healthy aging, and on the underlying biological mechanisms, which will substantially contribute to more effective precision healthcare for males and females.
Role of funding source
The funding agencies provided financial support for the collection and analysis of the data and DNA samples, but they did not play any role in writing, interpreting the results, or submitting this manuscript for consideration of publication. We are not paid by anyone to write this article. Yi Zeng, as the first corresponding author, has final responsibility for the decision to submit for publication.
M1. The genetic datasets analyzed in the present study
We first analyzed the CLHLS GWAS datasets, which include 5.6 million single nucleotide polymorphisms (SNPs) for each of 2178 centenarians and 2299 middle-aged controls (ref. Supplementary Materials (SM) sections SM1~SM2 and SM-Table 1). Second, to replicate the results, we analyzed independent CLHLS healthy aging candidate genes datasets of 2972 centenarians and 1992 middle-aged controls aged <65, who are CLHLS participants not included in the CLHLS GWAS (SM-Table 2). Our healthy aging candidate genes datasets include 287,898 SNPs for each of the CLHLS participants. These 287,898 SNPs, associated with longevity and chronic diseases, were selected based on GWAS datasets of the CLHLS and the U.S. Health and Retirement Surveys as well as other published genetic databases. The present study is based on genetic datasets of in total 5150 centenarians and compatible middle-aged controls, the largest worldwide sample of centenarians so far, 3.9 times as large as the second-largest genetic dataset sample of 1320 centenarians [20]. Note that a wide variety of internationally published studies have confirmed that the age reporting of Han Chinese centenarians is reasonably accurate [48,49].
To increase statistical rigor, we adopted a replication framework using the South and North regions of China as discovery and evaluation/replication samples (SM-Tables 1 and 2), following most published case/control genetic studies that use Chinese nationwide datasets, and based on analyses of principal components, genetics, anthropology, and linguistics reported in the literature [50].
The Research Ethics Committees of Peking University and Duke University granted approval for the Protection of Human Subjects of the CLHLS, including collection of the data and DNA samples as well as production of the de-identified genotype and phenotype data used for the present study. The survey respondents gave informed consent before participation. The use of the genotype and phenotype data in this study was carried out in accordance with national and international legislative and institutional guidelines and regulations.
M2. The standardized polygenic risk score (PRS) analyses
It is impossible to address the present study's research question based solely on sex-specific single-gene analyses, because each locus has a small effect and combinations of many genes contribute to the association with the phenotype [27,51]. Thus, we conducted PRS analyses, which summarize the joint effects on the propensity for longevity of each of the groups of identified sex-specific genes, to explore whether the genetic associations with longevity are stronger in males or in females.
Following conventional methodology, we constructed the PRS of each identified group of sex-specific genes for each of the centenarians and middle-aged controls, using the odds ratios estimated from the discovery sub-dataset (South region) as the coefficients to construct the PRS for the target (North region) [27,50]. Details on the selection of sex-specific genes are given in Supplementary Materials (SM) Section SM4. The PRS for each of the sex-specific gene groups was calculated using the PLINK software. First, we extracted all independent genes associated with longevity in one sex only in the discovery dataset. Second, we removed those genes which are significantly associated with longevity in the other sex in the discovery dataset via PLINK. Each retained gene is sex-specific: significantly associated with longevity in one sex but not in the other. Third, following the method for constructing the standardized PRS developed by Purcell et al. [51], the selected sex-specific genes were weighted by their log odds ratios (ORs) from the discovery dataset and summarized into the PRS for each individual in the replication dataset.
To make the PRS more comparable across genders and to improve the interpretability of the results of the logistic regression analyses, we standardized the PRS for each of the identified exclusive groups of genes associated with longevity at different significance levels, following the standard approach expressed in formula (1) below [52]:

$$PRS_i(s) = \frac{IPRS_i(s) - \mu}{sd} \quad (1)$$

where $PRS_i(s)$ is the standardized PRS for individual i with sex s (s = 1 for males and s = 2 for females), $IPRS_i(s)$ is the initial unstandardized PRS for individual i with sex s, and $\mu$ and $sd$ are the mean and the standard deviation of the initial PRS values over all of the male and female individuals in the sample. After this z-score transformation, the standardized PRS have a mean of 0 and a standard deviation of 1.0; this standardization substantially improves comparability across genders [51]. We abbreviate "standardized PRS" as "PRS" in this article to simplify the presentation.
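To make the weighting-and-standardization step concrete, the following is a minimal Python sketch; the array names, the toy genotype data, and the use of NumPy instead of PLINK's scoring machinery are illustrative assumptions rather than the study's actual pipeline.

```python
import numpy as np

def standardized_prs(genotypes, log_odds_ratios):
    """Weighted polygenic risk score, z-standardized across all individuals.

    genotypes       : (n_individuals, n_snps) array of risk-allele counts (0, 1, 2)
    log_odds_ratios : (n_snps,) log-ORs of the sex-specific variants, estimated
                      in the discovery (South region) sub-dataset
    """
    # Initial (unstandardized) PRS: allele counts weighted by log-ORs and summed,
    # mirroring the PLINK-style scoring described in the text.
    iprs = genotypes @ log_odds_ratios

    # Standardize over the pooled male + female sample so the scores have
    # mean 0 and SD 1, as in formula (1).
    return (iprs - iprs.mean()) / iprs.std(ddof=0)

# Toy example (hypothetical data): 5 individuals, 3 sex-specific SNPs.
rng = np.random.default_rng(0)
g = rng.integers(0, 3, size=(5, 3)).astype(float)
beta = np.array([0.12, -0.08, 0.20])   # log-ORs from the discovery dataset
print(standardized_prs(g, beta))
```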
M3. The logistic regression model employed in the present study
Note that regression analyses of PRS for males and females separately are not appropriate for adequately quantifying sex differences in genetic associations with longevity, because they use different reference groups for males and females, making the sex-specific estimates not fully comparable. Thus, we estimated logistic regression models including both males and females, in which the binary dependent variable is being a centenarian or a middle-aged control, and the independent variables are sex, the genetic variant measured by the continuous PRS, and the PRS-sex interaction term, as expressed in Equations (2) and (3) below:

$$odds(x,s) = \frac{P(x,s)_i}{1 - P(x,s)_i} = \exp\!\big(\beta_0 + \beta_1 S_i + \beta_2 X_i + \beta_3 (X_i \times S_i)\big) \quad (2)$$

$$P(x,s)_i = \frac{odds(x,s)}{1 + odds(x,s)} \quad (3)$$

where $P(x,s)_i$ is the probability of longevity (likelihood of being a centenarian) of individual i with sex s (s = 1, male; s = 2, female) and genetic propensity for longevity measured by the standardized continuous PRS value x; s and x for individual i are denoted as $S_i$ and $X_i$; $(X_i \times S_i)$ represents the PRS-sex interaction effects; $\beta_0$ is the intercept; and $\beta_1$, $\beta_2$, and $\beta_3$ are the regression coefficients of the independent variables $S_i$, $X_i$, and the $(X_i \times S_i)$ interaction term, respectively.

We first estimate the odds(x,s) based on the logistic regression expressed in Equation (2), and then derive $P(x,s)_i$ from the estimated odds(x,s) as expressed in Equation (3). We use 0.01 as the increment to transform the continuous PRS into discrete values of x, keeping two decimal points. Let Hx denote the highest PRS value included in the regression analysis; Lx, the lowest PRS value included in the regression analysis; P(x,s), the probability of longevity among persons of sex s with PRS value x, estimated from the logistic regression model expressed in Equations (2) and (3); and AP(s), the average probability of longevity among persons of sex s across the various PRS values:

$$AP(s) = \frac{1}{N}\sum_{x=Lx}^{Hx} P(x,s) \quad (4)$$

where the sum runs over the N discrete PRS values x from Lx to Hx in steps of 0.01.
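As an illustration of Equations (2)-(4), the sketch below fits the interaction model and converts fitted odds into probabilities over a grid of PRS values. The use of statsmodels, the simulated data, the coefficient values, and the grid limits are assumptions for demonstration only and do not reproduce the CLHLS analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: y = 1 for centenarian "case", 0 for middle-aged control;
# sex coded 1 = male, 2 = female as in the text; prs is the standardized PRS.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "sex": rng.choice([1, 2], size=n),
    "prs": rng.normal(size=n),
})
logit_true = -0.2 + 0.1 * df.sex + 0.3 * df.prs + 0.2 * df.prs * df.sex
df["y"] = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

# Equation (2): logistic regression with sex, PRS and the PRS x sex interaction.
fit = smf.logit("y ~ sex + prs + prs:sex", data=df).fit(disp=0)

# Equations (3)-(4): probabilities of longevity over a PRS grid (step 0.01)
# and their sex-specific averages AP(s).
x_grid = np.arange(-2.0, 2.0 + 1e-9, 0.01)
for s in (1, 2):
    new = pd.DataFrame({"sex": s, "prs": x_grid})
    p = fit.predict(new)                 # P(x, s) = odds / (1 + odds)
    print(f"sex={s}: AP(s) = {p.mean():.3f}")
```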
The strength of our logistic regression analyses is that the sex-specific estimates are fully comparable and can be used to quantify sex differences in genetic associations with longevity. Furthermore, given the available datasets, this regression model, which includes all male and female individuals, provides substantially more statistical power than performing regression analyses for males and females separately.
Consistent with other published case-control GWAS of longevity based on cross-sectional datasets of centenarians and middle-aged controls, we did not include socioeconomic covariates (e.g. education, occupation) in our logistic regression models, because the socioeconomic factors of the two birth cohorts of centenarians and middle-aged controls, born 40-60 years apart, are not comparable. Instead, in our GWAS regression models, we adjusted for the top two eigenvectors to minimize the effects of population stratification, following the approach adopted in the GWAS literature [53].
Mood [54] demonstrated that, compared with probabilities (P(x,s)), the sex-specific odds(x,s) (= P(x,s)/(1 - P(x,s))) and the males/females odds ratio (= Odds(x,1)/Odds(x,2)) cannot be interpreted as absolute effects, nor can they be accurately compared across male and female groups, although they are widely used to measure the degree and direction of associations between two categories within one group. For example, assuming the probability of longevity given a good genotype is 0.55 in males and 0.40 in females, the males/females relative risk ratio is 1.375 (= 0.55/0.40), which means that the genotype's effect in males is 37.5% higher than in females. However, the sex-specific odds(x,s) calculated from these probabilities of longevity are 1.222 (= 0.55/(1 - 0.55)) for males and 0.667 (= 0.40/(1 - 0.40)) for females. The males/females odds ratio is 1.83 (= 1.222/0.667), which would suggest that the genotype's effect in males is 83% higher than in females, exaggerating the sex difference in the genotype's effect by 2.22-fold (= 83.3%/37.5%). Norton et al. (2018; page 84) [55] stated (we cite their original statements here): "Although for rare outcomes odds ratios approximate relative risk ratios (which is the ratio of probabilities of the outcome for two groups such as males and females), when the outcomes are not rare, odds ratios always overestimate relative risk ratios, a problem that becomes more acute as the baseline prevalence of the outcome exceeds 10 %. For example, an odds ratio (men/women) of 2.0 could correspond to the situation in which the probability for some event is 1 % for males and 0.5 % for females. An odds ratio (men/women) of 2.0 also could correspond to a probability of an event occurring 50 % for males and 33 % for females, or to a probability of 80 % for males and 67 % for females." SM-Table 6 presents numerical calculations and comparisons between the males/females relative risk ratios and males/females odds ratios for the three examples given by Norton et al. [55] as outlined above and for four examples of the PRS values of the 11 male-specific top genes of longevity (P < 10−5 in males, but P > 0.16 in females) estimated using the CLHLS GWAS datasets (see Fig. 1a).
What Norton et al. [55] stated, "… when the outcomes are not rare, odds ratios always overestimate relative risk ratios", can be derived mathematically in connection with the present study as follows. Denote P(x,s) (s = 1: males, s = 2: females) as the probability of longevity associated with PRS value x; the males/females relative risk ratio is P(x,1)/P(x,2), and the sex-specific Odds(x,s) is P(x,s)/(1 - P(x,s)). Formula (5) then follows:

$$\frac{Odds(x,1)}{Odds(x,2)} = \frac{P(x,1)}{P(x,2)} \times \frac{1 - P(x,2)}{1 - P(x,1)} \quad (5)$$

Formula (5) indicates that the odds ratio Odds(x,1)/Odds(x,2) biases the males/females relative risk ratio P(x,1)/P(x,2) by a factor of (1 - P(x,2))/(1 - P(x,1)) if the odds ratio is used to quantify the differences between males and females. Thus, the odds ratio cannot be used to accurately quantify the differences in genetic associations between males and females, although odds ratios are widely used to measure the degree and direction of associations between two categories within one group (e.g., within the male group or within the female group) [54][55][56]. Therefore, referring to the relevant literature [54][55][56], we estimated and compared the probabilities of longevity (rather than the odds ratios) of the various sex-specific and non-sex-specific genes between males and females, in order to accurately quantify the sex differences in genetic associations with longevity.
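The arithmetic behind the numerical example above can be reproduced in a few lines of Python; the probabilities used are the illustrative 0.55/0.40 values from the text, not estimates from the CLHLS data.

```python
def odds(p):
    return p / (1 - p)

p_male, p_female = 0.55, 0.40                 # illustrative probabilities of longevity

relative_risk = p_male / p_female             # 1.375 -> effect appears 37.5% higher in males
odds_ratio = odds(p_male) / odds(p_female)    # 1.833 -> effect appears 83.3% higher in males

# The (1 - P(x,2)) / (1 - P(x,1)) factor from formula (5):
bias_factor = (1 - p_female) / (1 - p_male)
print(relative_risk, odds_ratio, relative_risk * bias_factor)  # last value equals the odds ratio
```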
To address the concern that our logistic regression analyses might bias the estimates of the probability of longevity because of the larger sample size of female centenarians compared to their male counterparts, which is common to all studies including centenarians, we conducted an additional analysis based on randomly selected female centenarian/control samples with exactly the same size as the male centenarian/control samples. As presented and discussed in detail in Section 3.5, this additional analysis, with exactly the same sample sizes of male and female centenarians and controls, demonstrated similar results to the analyses using the actual total female and male samples. Thus, our analyses using logistic regression did not bias the estimates of the probability of longevity.
M4. Bio-demographic analyses on female to male ratios of relative benefits due to sex-specific genes of longevity
Referring to Fig. 1a and b for an intuitive understanding, let A(s) denote the sex-specific relative benefits (compared with the other sex) among individuals of sex s (s = 1: males; s = 2: females) due to gaining the positive joint effects of higher PRS (x > C) of the sex-specific genes of longevity; B(s) denotes the sex-specific relative benefits among individuals of sex s due to avoiding the negative joint effects of lower PRS (x < C) of the other-sex-specific genes of longevity; where C is the PRS value at which the probabilities of longevity in males and females are equal to each other (i.e., the male and female curves cross at x = C). The formulas for estimating A(s) and B(s) are presented and discussed below.
Female to male ratio of relative benefits due to a group of male-specific genes of longevity.
Referring to Fig. 1a, the triangle-shaped area A(1) above the female line and below the male line illustrates the male relative benefit in probability of longevity due to gaining the positive joint effects of higher PRS (x > C) of a group of male-specific genes of longevity; A(1) is the sum of differences in probabilities of longevity between males and females over the higher PRS values (x > C) of the male-specific genes of longevity:

$$A(1) = \sum_{x>C} \big[P(x,1) - P(x,2)\big]$$

On the other hand, the triangle area B(2) in Fig. 1a denotes the female relative benefit in probability of longevity due to avoiding the negative joint effects of lower PRS (x < C) of the same group of male-specific genes of longevity; B(2) is the sum of differences in probabilities of longevity between females and males over the lower PRS values (x < C) of the male-specific genes of longevity:

$$B(2) = \sum_{x<C} \big[P(x,2) - P(x,1)\big]$$

FM1 is the female to male ratio of relative benefits due to a group of male-specific genes of longevity, namely the ratio of the females' relative benefit due to avoiding the negative joint effects of lower PRS (x < C) of the male-specific genes of longevity (B(2)) to the males' relative benefit due to gaining the positive joint effects of higher PRS (x > C) of the same group of male-specific genes of longevity (A(1)):

$$FM1 = \frac{B(2)}{A(1)}$$

If FM1 is larger (or smaller) than one, it indicates that the relative benefits in probability of longevity due to a group of male-specific genes of longevity are higher (or lower) in females than in males.
Female to male ratio of relative benefits due to a group of female-specific genes of longevity.
Referring to Fig. 1b, the triangle area A(2) denotes the female relative benefit in probability of longevity due to gaining the positive joint effects of higher PRS (x > C) of female-specific genes of longevity; A(2) is the sum of differences in probabilities of longevity between females and males over the higher PRS values (x > C) of the female-specific genes of longevity:

$$A(2) = \sum_{x>C} \big[P(x,2) - P(x,1)\big]$$

Also referring to Fig. 1b, the triangle area B(1) denotes the male relative benefit in probability of longevity due to avoiding the negative joint effects of lower PRS (x < C) of the female-specific genes of longevity; B(1) is the sum of differences in probabilities of longevity between males and females over the lower PRS values (x < C) of the female-specific genes of longevity:

$$B(1) = \sum_{x<C} \big[P(x,1) - P(x,2)\big]$$

FM2 is the female to male ratio of relative benefits due to a group of female-specific genes of longevity, namely the ratio of the females' relative benefit due to gaining the positive joint effects of higher PRS (x > C) of the female-specific genes of longevity (A(2)) to the males' relative benefit due to avoiding the negative joint effects of lower PRS (x < C) of the same female-specific genes of longevity (B(1)):

$$FM2 = \frac{A(2)}{B(1)}$$

If FM2 is larger (or smaller) than one, it indicates that the relative benefits due to the female-specific genes of longevity are higher (or lower) in females than in males.
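A compact sketch of these bio-demographic ratios is given below, assuming the sex-specific probability curves P(x, 1) and P(x, 2) have already been evaluated on the same 0.01 PRS grid (for example with the regression model above); the function and array names are illustrative, not part of the published analysis code.

```python
import numpy as np

def relative_benefits(p_male, p_female):
    """Sum the male/female probability gaps on either side of the crossing point C.

    p_male, p_female : arrays of P(x, 1) and P(x, 2) evaluated on the same PRS
                       grid (step 0.01); the two curves are assumed to cross once.
    Returns (sum of gaps where the male curve is higher,
             sum of gaps where the female curve is higher).
    """
    diff = np.asarray(p_male) - np.asarray(p_female)
    male_side = diff > 0
    male_benefit = diff[male_side].sum()          # e.g. A(1) for male-specific genes
    female_benefit = (-diff[~male_side]).sum()    # e.g. B(2) for male-specific genes
    return male_benefit, female_benefit

# Male-specific genes (Fig. 1a): males are ahead above C, so
#   A1, B2 = relative_benefits(pm_curve, pf_curve) and FM1 = B2 / A1.
# Female-specific genes (Fig. 1b): females are ahead above C, so
#   B1, A2 = relative_benefits(pm_curve, pf_curve) and FM2 = A2 / B1.
# In both cases the ratio reduces to female_benefit / male_benefit.
```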
M5. The genome-wide Complex Trait Analysis (GCTA) method used in the present study
Chip-based heritability was estimated for each sex based on the CLHLS GWAS and healthy aging candidate genes datasets, respectively, using the GCTA restricted maximum likelihood (REML) method as implemented in GCTA version 1.92.0beta3 [30]. SNP variants were linkage-disequilibrium pruned using PLINK2 with the flag --indep-pairwise 50 10 0.1. The genetic relationship matrix (GRM) was constructed and the heritability was calculated using GCTA. To transform the estimated heritability from the observed scale to the liability scale, we specified a prevalence of 0.0002 for males and 0.0006 for females, according to the life expectancies at birth reported in "World Population Prospects 2019" released by the United Nations Population Division.
Data availability statement
The data reported in this paper have been deposited in OMIX, China National Center for Bioinformation / Beijing Institute of Genomics, Chinese Academy of Sciences [57,58] (https://ngdc.cncb.ac.cn/omix; accession no. OMIX003054). All data requests should be submitted to the corresponding author for consideration.
Table notes (sex-specific genes of longevity; the full tables are given in the Supplementary Materials).
Data sources: the sex-specific genes in groups (1)-(4) were identified using the CLHLS GWAS datasets, with detailed information in Tables 1-2 and eTables 4-5 of reference [23]; the sex-specific genes in groups (5)-(6) were identified in the present study using the CLHLS GWAS datasets (SM-Tables 3a and 3b); the sex-specific genes in groups (7)-(8) were identified in the present study using the CLHLS healthy aging candidate genes datasets (SM-Tables 4a and 4b).
A. Analyses based on the CLHLS GWAS datasets: (I) sex-specific top genes jointly associated with longevity, comprising (1) 11 male-specific top genes (each with P < 10−5 in males but P > 0.16 in females) and (2) 11 female-specific top genes (each with P < 10−5 in females but P > 0.12 in males); 152 genes individually associated with longevity in one sex only (P < 0.05 in one sex but P ≥ 0.05 in the other sex); and 17 genes individually associated with longevity in both sexes (P < 10−4 in one sex; ref. SM-Table 5 for details). B. Analyses based on the CLHLS healthy aging candidate genes datasets: (IV) sex-specific genes jointly associated with longevity.
Notes: (a) each of the eight groups of sex-specific genes of longevity reaches a jointly significant level of P < 10−8 in one sex but is not jointly significant in the other sex (P > 0.05) [23]; (b) the estimates are based on logistic regressions including the continuous standardized PRS, sex, and the (PRS × sex) interaction term (ref. Equations (2) and (3) in Methods section M3); (c) the sex-specific average probabilities of longevity among persons of sex s with various PRS values are estimated using formula (4) in Methods section M3; (d) the P values of the sex differences are estimated using the bootstrap method (2000 bootstrap replications) in the statistical software STATA 12.0.
Peripheral blood Th9 cells are a possible pharmacodynamic biomarker of nivolumab treatment efficacy in metastatic melanoma patients
ABSTRACT Although nivolumab is associated with a significant improvement in overall survival and progression-free survival, only 20 to 40% of patients experience long-term benefit. It is therefore of great interest to identify a predictive marker of clinical benefit for nivolumab. To address this issue, the frequencies of CD4+ T cell subsets (Treg, Th1, Th2, Th9, Th17 and Th22), CD8+ T cells, and serum cytokine levels (IFNγ, IL-4, IL-9, IL-10, TGF-β) were assessed in 46 patients with melanoma. Eighteen patients responded to nivolumab, and the other 28 patients did not. An early increase in Th9 cell counts during the treatment with nivolumab was associated with an improved clinical response. Before the first nivolumab infusion, the responders displayed elevated serum concentrations of TGF-β compared to non-responders. Th9 induction by IL-4 and TGF-β was enhanced by PD-1/PD-L1 blockade in vitro. The role of IL-9 in disease progression was further assessed using a murine melanoma model. In vivo IL-9 blockade promoted melanoma progression in mice using an autochthonous mouse melanoma model, and the cytotoxic ability of murine melanoma-specific CD8+ T cells was enhanced in the presence of IL-9 in vitro. These findings suggest that Th9 cells, which produce IL-9, play an important role in the successful treatment of melanoma patients with nivolumab. Th9 cells therefore represent a valid biomarker to be further developed in the setting of anti-PD-1 therapy.
Introduction
Effective immune checkpoint blockades have improved the overall survival of patients with metastatic melanoma. The monoclonal antibody nivolumab blocks programmed cell death 1 (PD-1), an inhibitory immune checkpoint receptor expressed on activated T cells. 1 Nivolumab is associated with a significant improvement in overall survival and progression-free survival, and 20 to 40% of patients experience long-term benefit. [1][2][3] Although the expression of programmed cell death 1 ligand 1 (PD-L1) in tumor cells has been associated with responsiveness to the blockade of this immune checkpoint, 4 the objective measurement of PD-L1 protein levels reveals heterogeneity within tumors and prominent interassay variability or discordance. 5 The pharmacodynamic biomarkers of nivolumab, however, remain unknown to date. It is therefore of great interest to find a predictive marker of clinical benefit for nivolumab and a parameter that can be validated as a surrogate marker of response or survival benefit.
Recently, interleukin (IL)-9-producing CD4+ T helper cells (Th9) have been identified as a new subset of CD4+ T helper cells mediating both proinflammatory events and the induction of tolerance. 6,7 Although Th1 cells, which produce interferon (IFN)-γ, have a clear role in cancer immune surveillance and in promoting antitumor responses, 8 reports on the role of Th9 cells in tumor development remain contradictory. 9 For instance, IL-9 is known to promote the proliferation, migration, and adhesion of human lung cancer cells, 10 but IL-9 seems to have the opposite effect on melanoma proliferation and migration in the B16 melanoma murine model. 11 Herein, we analyzed the immunological profile of peripheral blood in patients receiving nivolumab treatment in order to identify clinically useful biomarkers. We discovered that Th9 cells but not Th1 cells are increased in melanoma patients who were successfully treated with nivolumab. Using the autochthonous mouse melanoma model, we found that IL-9 treatment suppressed melanoma progression and increased granzyme B and perforin in CD8+ T cells. We therefore propose that elevated levels of Th9 cells may represent a valid biomarker in the setting of anti-PD-1 therapy. Our data support the idea that boosting IL-9 in itself might be a therapeutic avenue.
Th9 cell frequency is increased in responders to nivolumab treatment
Forty-six melanoma patients who received nivolumab were prospectively included in this study ( Table 1). The group contained 18 males and 28 females. The median age of the patients was 66 y (ranging from 34 to 89 y). Patients were divided into two groups, responders (SD, PR, and CR) and non-responders (PD) to nivolumab treatment. Eighteen patients responded to the treatment, and the other 28 patients did not. We found no difference in the total number of whole blood cells and lymphocytes, and in serum lactate dehydrogenase levels between responders and nonresponders (Table S1).
To investigate candidate biomarkers related to treatment response, we first compared the frequencies of CD8+ T cells, CD4+ T cells, and Tregs in the peripheral blood between responders and non-responders. We found no significant differences among these cell populations before and after treatment (Fig. 1A). We further compared CD4+ T cell subsets in responders and non-responders. In addition to Th1 and Th2 subsets, we investigated Th9, Th17, and Th22 subsets since recent studies showed that these Th subsets might also play some roles in tumor immunity. [12][13][14] Although there was no significant difference in Th1, Th2, Th17, and Th22 cells, Th9 cells were significantly increased in the responder group (Fig. 1B). IFNγ-producing CD8+ T cell numbers before and after nivolumab treatment were comparable between responders and non-responders (Fig. 1B). These results suggest that Th9 cells may play some role in nivolumab-induced antitumor immunity.
PD-1/PD-L1 blockade promotes Th9 differentiation
Our data showed that Th9 cells and serum TGF-β levels are increased in responders to nivolumab treatment. We therefore hypothesized that nivolumab enhanced tumor immunity by promoting Th9 differentiation. To this end, we evaluated the effect of anti-PD-1 antibody on Th9 induction in vitro. Human PBMCs were stimulated with recombinant IL-4 and recombinant TGF-β in the presence or absence of anti-PD-1 blocking antibody. After 48 h of stimulation, the frequency of Th9 cells was evaluated by flow cytometry. IL-4 and TGF-β-induced Th9 differentiation was enhanced by anti-PD-1 antibody in a dose-dependent manner (Fig. 2C). In addition, anti-PD-L1 antibody also enhanced Th9 differentiation (Fig. 2C). These results suggest that PD-1 signaling blockade by nivolumab may promote IL-4 and TGF-β-dependent Th9 differentiation.

Anti-IL-9 neutralizing antibody downregulates granzyme B and perforin expression in CD8+ T cells in vitro

Since our data suggest that Th9 cells may promote antitumor immunity, we next investigated the effects of IL-9 on immune cells. A previous report demonstrated that Th9 cells promote the recruitment of dendritic cells (DCs) to the tumor tissues. 11 We hypothesized that Th9 cells also promoted the recruitment of CD8+ T cells to melanoma tissues. We evaluated the effect of IL-9 on the expression levels of chemokine receptors responsible for the recruitment of CD8+ T cells. Because CXCR3 and CCR5 are reportedly important chemokine receptors for the recruitment of T cells to melanoma, 16,17 we investigated the expression levels of these receptors on CD8+ T cells cultured with or without anti-IL-9 neutralizing antibody. However, we found that CXCR3 and CCR5 expression on CD8+ T cells was not affected by anti-IL-9 neutralizing antibody (Fig. 2D).
Granzyme B and perforin are cytolytic molecules produced by cytotoxic CD8+ T cells, and they have activity against a variety of tumors. 18 We therefore next evaluated the effect of IL-9 on granzyme B and perforin expression in CD8+ T cells in vitro. Human PBMCs were cultured with or without anti-IL-9 neutralizing antibody, and the expression levels were analyzed via flow cytometry. The expression levels of granzyme B and perforin in CD8+ T cells were reduced in the presence of anti-IL-9 neutralizing antibody (Fig. 2E). These results suggest that IL-9 promotes the expression of granzyme B and perforin in CD8+ T cells.
In vivo IL-9 blockade augments tumor progression in melanoma-bearing mice

To investigate the effect of IL-9 on anti-melanoma immunity in vivo, we next evaluated melanoma progression after the administration of anti-IL-9 neutralizing antibody to melanoma-bearing mice. We found that anti-IL-9 neutralizing antibody administration promoted tumor progression in the B16 melanoma cell line injection model (Fig. 3A). The subcutaneous inoculation of B16 cells mirrors human disease development poorly because the tumor cells are artificially inoculated and have low immunogenicity. 19 We next used the Braf/Pten autochthonous mouse melanoma model, in which melanoma develops de novo within the murine skin. 20 Consistent with the B16 injection model, the administration of anti-IL-9 neutralizing antibody also promoted tumor progression in the Braf/Pten model (Fig. 3B, Table S2). To exclude the possibility that IL-9 directly inhibits melanoma progression rather than modulating tumor immunity, we next performed a tumor proliferation assay. B16 melanoma cells were cultured with or without recombinant murine IL-9.
We found that there was no significant difference in the proliferation of B16 cells between the two groups (Fig. S1). These results support the notion that IL-9 suppresses melanoma progression via immune modulation.
IL-9 blockade leads to the downregulation of granzyme B and perforin in CD8+ T cells but not in NK cells in mice
To further investigate the mechanism of IL-9, we used the Braf/Pten melanoma model and analyzed the immune cells infiltrating the tumor in mice treated with or without anti-IL-9 neutralizing antibody. First, the expression of granzyme B and perforin in whole melanoma tissues was investigated by means of real-time polymerase chain reaction (RT-PCR). We found that granzyme B and perforin expression were reduced in mice treated with anti-IL-9 neutralizing antibody (Figs. 3C and D), suggesting that IL-9 promotes the expression of granzyme B and perforin in the melanoma tissues. Since both CD8+ T cells and NK cells produce granzyme B and perforin, we next analyzed the effect of IL-9 on granzyme B and perforin expression in CD8+ T cells and NK cells by flow cytometry. We had already demonstrated, using human samples, that the expression levels of the chemokine receptors responsible for tissue infiltration were not changed by anti-IL-9 treatment (Fig. 2D), suggesting that lymphocyte infiltration into the skin is not regulated by IL-9. Consistent with the above findings, there was no significant difference in the frequency of CD8+ T cells (Fig. 3E, left) or NK cells (Fig. 3F, left) infiltrating murine melanoma tissues treated with or without anti-IL-9 antibody. Next, we evaluated the expression levels of granzyme B and perforin in CD8+ T cells and NK cells in melanoma tissues. The mean fluorescence intensity (MFI) levels of granzyme B and perforin in CD8+ T cells were significantly lower after IL-9 blockade (Fig. 3E, right), whereas the MFI levels of granzyme B and perforin in NK cells were unaltered (Fig. 3F, right). These results suggest that IL-9 causes an increase in granzyme B and perforin in tumor-infiltrating CD8+ T cells.
IL-9 enhances cytotoxicity of tumor-specific mouse CD8+ T cells

We evaluated the effect of IL-9 on the cytotoxic ability of tumor-specific CD8+ T cells in vitro. First, we co-cultured B16 murine melanoma cells stably transduced with OVA (termed MO4 cells) and whole lymph node cells from OT-I mice (which have OVA-specific CD8+ T cells), in the presence or absence of recombinant IL-9. The MFI levels of granzyme B and perforin in CD8+ T cells increased significantly after recombinant IL-9 exposure (Fig. 4A). In addition, the cytotoxicity assay showed that immune cells from inguinal lymph nodes of MO4 tumor-bearing OT-I mice killed melanoma cells more effectively when cultured with recombinant IL-9 (Fig. 4B). Furthermore, we prepared purified CD8+ T cells from inguinal lymph nodes of MO4 tumor-bearing OT-I mice and confirmed that the tumor-specific cytotoxic ability of CD8+ T cells was increased by recombinant IL-9 (Fig. 4C), while IL-9 did not directly affect the proliferation of B16 cells (Fig. S1). We next evaluated the effect of IL-9 on the proliferation and antigenicity of human melanoma cell lines using two human melanoma cell lines, A375 and SK-MEL-28. We found that IL-9 did not affect the proliferation of these cell lines (Fig. S2). The expression levels of HLA-ABC and HLA-DR were quantified by means of flow cytometry to evaluate antigenicity. The expression of PD-L1 and IL-9 receptor (IL-9R) was also assessed to evaluate the effect of IL-9 on these cell lines. The results showed that IL-9 did not affect these expression levels (Fig. S2).
IL-9 is highly expressed in human melanoma lesions
Finally, we assessed 10 melanoma samples by immunohistochemistry to evaluate the localization of IL-9+ cells and CD8+ T cells within the tumor before nivolumab treatment. We analyzed sequential sections for these two stains using 10 samples selected from both responders and non-responders. All samples showed high IL-9 expression and CD8+ T cell infiltration in the peritumoral lesion (Fig. 4D, Fig. S3). These findings suggest that IL-9 may be related to some extent to antitumor immunity by CD8+ T cells in the lesional area of human melanoma.
Discussion
In this study, we demonstrated that Th9 cells in peripheral blood were significantly increased in responders to nivolumab treatment. In addition, the serum level of TGF-β, which contributes to the development of Th9 cells, was significantly higher in responders than in non-responders before nivolumab treatment. Moreover, anti-PD-1 antibody enhanced Th9 differentiation in vitro. Using melanoma-bearing mice, we showed that anti-IL-9 antibody decreased granzyme B and perforin in CD8+ T cells. Furthermore, in vitro experiments revealed that IL-9 enhanced the cytotoxic ability of tumor-specific CD8+ T cells in mice. Finally, we demonstrated that IL-9-positive cells existed near CD8+ T cells in human melanoma tissues. These results suggest that Th9 cells may play an essential role in anti-melanoma immunity and that anti-PD-1 antibody may elicit an antitumor effect through upregulating Th9 differentiation in melanoma patients successfully treated with nivolumab. This is the first report showing that Th9 cells in peripheral blood can be a pharmacodynamic biomarker for nivolumab efficacy in melanoma patients. The literature suggests diverse effects of anti-PD-1 antibody treatment on antitumor immunity. Anti-PD-1 antibody caused an increase in the frequency of CD8+ T cells in the melanoma lesions of patients who responded to the treatment. 21,22 In addition, anti-PD-1 antibody mediates antitumor effects through augmented T cell proliferation, increased IFNγ, and IFNγ-inducible chemokine production at the tumor site. 23 We also demonstrated that nivolumab increases Th9 cells both in vivo and in vitro. It has been reported that PD-1 blockade in itself enhances T cell migration into the tumor lesion, 23 and our study proposes the possibility that nivolumab may promote antitumor immunity in the melanoma lesion via CD8+ T cell activation by Th9 cells. This is supported by our observation that IL-9+ cells and CD8+ T cells co-localized in melanoma lesions. We are currently working to evaluate whether IL-9+ cells are increased in the melanoma lesions of responders using available samples, such as in-transit metastases.
Recent studies have shown that genetic markers might be useful as biomarkers for the effectiveness of immune checkpoint blockade in the treatment of melanoma. [24][25][26] However, no serum cytokines have been reported as useful and effective biomarkers. We demonstrated that serum TGF-β levels were significantly higher in responders than in non-responders before nivolumab treatment. Our study therefore suggests that TGF-β may also be a good biomarker for nivolumab treatment. Several human cancer cells express high levels of TGF-β, which influences the microenvironment and promotes tumor growth, invasiveness, and metastases. 27 Increased expression and secretion of TGF-β in melanoma cell lines have been reported. 28,29 Interestingly, the expression levels of TGF-β in melanoma differ among melanoma cell lines and patient disease stages. 29,30 Although the mechanisms that caused high serum TGF-β levels in responders remain unclear in our study, one hypothesis is that TGF-β expression may be related to mutations in the melanoma cells. In human hepatocellular carcinoma, Kras mutation deregulates the TGF-β signaling pathway. 31 Above all, the differences in TGF-β expression in melanoma may have a genetic basis. Further studies are needed to investigate the relationship between TGF-β expression and mutation burden.
The role of Th9 cells in tumor immunity has remained controversial. It has been reported that IL-9 acts directly to drive tumor growth and contributes to the establishment of an immunosuppressive environment. 9 For example, IL-9 promotes the proliferation of human lymphoid tumors, such as Hodgkin's lymphoma, diffuse large B-cell lymphoma, and NK T-cell lymphoma. [32][33][34] Furthermore, IL-9 is known to inhibit adaptive immunity and promote tumor progression in murine colon carcinoma and breast cancer models. 9 Other studies have suggested beneficial roles of IL-9 in preventing tumor progression through a multivariate effector response. 11,12 In this study, we observed an antitumor effect of IL-9 against melanoma in two different murine melanoma models, the B16 injection model and the Braf/Pten model. This is the first report demonstrating the beneficial roles of IL-9 in tumor immunity using the autochthonous Braf/Pten mouse melanoma model. Consistent with our study, some recent reports refer to the importance of IL-9 and Th9 cells in suppressing melanoma progression. For example, one report showed that exogenous rIL-9 inhibited melanoma growth in Rag1−/− mice but not in mast-cell-deficient mice, suggesting that mast cells are essential for IL-9-mediated antitumor immunity. 12 Another report demonstrated that Th9 cells elicited strong cytotoxic T cell responses by promoting the recruitment of DCs to the tumor tissues. 11 As for the origin of Th9 cells, one study showed that IL-1β induced Th9 cells, which in turn exerted potent anticancer functions in an interferon regulatory factor 1 (IRF1)- and IL-21-dependent manner. 35 Our study proposes a new mechanism whereby IL-9 directly enhances the tumor-specific cytotoxic activity of CD8+ T cells by increasing their levels of granzyme B and perforin.
In addition, we observed that PD-1/PD-L1 blockade promotes Th9 differentiation in the present study. Several studies report that PD-1 modulates the metabolic program of T cells. 36 For example, PD-1 ligation is known to prevent T cell development by altering metabolic reprogramming of cells. 37 Since glycolytic activation is required for the differentiation of Th9 cells, 38 the PD-1/PD-L1 blockade might promote Th9 differentiation by modulating the cell metabolism. Further investigation is required on this subject.
In conclusion, we present data to show that the frequency of Th9 cells can serve as a pharmacodynamic biomarker for anti-PD-1 therapy. Of note, we found an increase in Th9 cells after the third infusion of nivolumab and provided evidence that Th9 cells promote anti-melanoma immunity. Therefore, we propose that Th9 cells may represent a biomarker to be further developed in the setting of anti-PD-1 therapy.
Patients, treatment, and clinical evaluation
This observational immunomonitoring study included 46 metastatic melanoma patients receiving nivolumab (Ono Pharmaceutical) at Kyoto University Hospital and other collaborating hospitals in Japan. This study was approved by the ethics committee of the Kyoto University Graduate School of Medicine (R0251). Patients were included if they (i) had a confirmed diagnosis of stage IV melanoma according to the 2009 American Joint Committee on Cancer (AJCC) melanoma staging and classification, (ii) were alive 12 weeks after the first nivolumab infusion, and (iii) received at least four courses of nivolumab over 90 min at a dose of 2 mg/kg of body weight every 3 weeks. 39 Other inclusion criteria were: at least 20 y of age and no specific melanoma therapy during the previous 28 d. All histological types of melanoma, including mucosal and uveal melanoma, were eligible for inclusion. Exclusion criteria were the presence of an autoimmune disease, HIV, hepatitis B or C, pregnancy, or concomitant systemic therapy or any history of prior immunotherapy for melanoma. Treatment efficacy was assessed using contrast-enhanced computed tomography (CT), magnetic resonance imaging (MRI), or positron emission tomography-CT (PET-CT) after the third nivolumab infusion, and clinical response was defined based on the Response Evaluation Criteria in Solid Tumors, Version 1.1 (RECIST v1.1). A clinical response was defined as complete response (CR), partial response (PR), or stable disease (SD).
Collection of human samples and analysis of serum
Peripheral blood was taken one to 7 d before the first nivolumab infusion (pre) and within one to 3 weeks after the third infusion (post). In most cases, nivolumab was administered every 3 weeks in Japan in conformity with the national health insurance, and thus the post-treatment PBMCs were obtained 10 to 12 weeks after the first infusion. Peripheral blood mononuclear cells (PBMCs) were obtained from venous blood anticoagulated with EDTA by density gradient centrifugation using lymphocyte separation solution (Nacalai Tesque). Isolated cells were cryopreserved in Bambanker (Nippon Genetics) at −80 °C or washed with FACS buffer (PBS containing BSA and sodium azide) for flow cytometry as previously described. 40,41 Serum levels of IFNγ, IL-4, IL-9, IL-10, IL-17, and transforming growth factor (TGF)-β were measured by enzyme-linked immunosorbent assay (ELISA) kits for quantitative detection of each cytokine (eBioscience, San Diego). All measurements were made in duplicate and mean values were obtained.
Analysis of peripheral blood samples
The following fluorescent-labeled monoclonal antibodies were used for surface or intracellular staining: TCRαβ-PerCP/Cy5. To detect intracellular proteins, cells were permeabilized, fixed, and stained according to the manufacturer's instructions using the Cytofix/Cytoperm kit (BD Biosciences) for cytoplasmic targets or the Foxp3 Staining Buffer Set (eBioscience, San Diego, CA) for the nuclear target FoxP3. Tregs were defined as FOXP3+CD45RO+CD4+TCRαβ+ T cells.
Acquisition was performed by eight-color flow cytometry using FACS Fortessa with FACS Diva software (both from BD Biosciences). The compensation control was performed with BD CompBeads (BD Biosciences). FlowJo software (Tree Star) was used for analysis. Data were expressed as dot plots.
B16 melanoma cells were injected subcutaneously into C57BL/6N mice at each lateral region (10^5 cells in 100 μL of PBS per site). The tumor volume was measured every 3 d. Braf/Pten mice were treated topically with 10 μL of 1.9 mg/mL (5 mM) 4-hydroxytamoxifen (4-HT, 70% Z-isomer, Sigma) in acetone at 6 to 8 weeks of age on both ears, the abdomen, and the back. The tumor size (length × width, in mm²) was first measured at week 2 of tamoxifen treatment and every 5 d thereafter. Change in tumor size was expressed as percentage change compared to the first measurement. Anti-IL-9 neutralizing antibody (9C1, Bio X Cell, NH, USA) administration was started one day before the application of tamoxifen. It was injected intraperitoneally at 200 μg per mouse every 3 d for 2 weeks. Braf/Pten mice receiving control IgG or anti-IL-9 every 2 d were analyzed on day 15, following a previous study using IL-9-neutralized melanoma mice. 11
Human IL-9 blocking assay and mouse cytotoxicity assays with recombinant IL-9

PBMCs from healthy donors were cultured with anti-CD3 antibody (0.5 μg/mL, BioLegend) for 48 h with or without anti-IL-9 neutralizing antibody (10 ng/mL, MH9D1, BioLegend). The expression levels of CXCR3, CCR5, granzyme B, and perforin in CD8+ T cells were evaluated by flow cytometry. Mouse T cell cytotoxicity with or without recombinant IL-9 was assayed using a CytoTox 96 non-radioactive kit (Promega) following the instructions provided.
OT-I mice, an MHC class I-restricted TCR transgenic line specific for ovalbumin (OVA), 44 were inoculated with 1 × 10^6 MO4 tumor cells into the skin of both lateral regions. On day 10 after tumor inoculation, draining (inguinal) lymph nodes were excised, mashed, and washed with RPMI in order to obtain single-cell suspensions. Purified CD8+ T cells were obtained from the lymph node cells using a CD8a+ T cell isolation kit (Miltenyi Biotec KK, Tokyo, Japan). Whole lymph node cells or purified CD8+ T cells were plated as the effector cells in 96-well plates at an effector/target ratio of 10/1 using 5 × 10^3 MO4 target cells per well in RPMI lacking phenol red for 4 h at 37 °C. Lactate dehydrogenase release was subsequently assessed by incubation of the supernatants with the provided substrate for 30 min, and the absorbance was read at 490 nm using a Thermomax plate reader (Molecular Devices). Percentage cytotoxicity was calculated as follows: (experimental − effector spontaneous − target spontaneous)/(target maximum − target spontaneous) × 100. All cytotoxicity assays were reproducible in at least three separate assays.
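To make the cytotoxicity arithmetic explicit, here is a minimal Python translation of that formula; the absorbance values are invented for illustration and do not correspond to the reported experiments.

```python
def percent_cytotoxicity(experimental, effector_spont, target_spont, target_max):
    """LDH-release cytotoxicity, as a percentage of maximal target lysis."""
    return ((experimental - effector_spont - target_spont)
            / (target_max - target_spont) * 100)

# Hypothetical OD490 readings for one effector:target well.
print(percent_cytotoxicity(experimental=0.95, effector_spont=0.20,
                           target_spont=0.15, target_max=1.40))  # ~48%
```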
Immunohistochemistry
Ten paraffinized human primary melanoma samples from 10 melanoma patients were cut into 5-μm-thick sections. Their clinical information is shown in Table S3. Antigens were retrieved by boiling in citrate buffer, pH 6.0, using a microwave. Non-specific binding of immunoglobulin G was blocked with normal goat serum (Vector Laboratories, Burlingame, CA). The sections were incubated with rabbit anti-IL-9 antibody (polyclonal, Abcam, Tokyo, Japan) overnight at 4 °C. They were then incubated with biotinylated goat anti-rabbit secondary antibody (Vector Laboratories, Burlingame, CA). Secondary antibodies were visualized using the Vectastain ABC-AP kit (Vector Laboratories, Burlingame, CA).
Statistical analysis
Unless otherwise indicated, data are presented as means ± standard deviation (SD) and are representative of three independent experiments. p-values were calculated with the two-tailed Student's t-test or one-way analysis of variance (ANOVA) followed by the Dunnett multiple comparison test. p-values less than 0.05 were considered statistically significant and are denoted by asterisks (*) in the figures.
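As a sketch of the comparisons described here (two-tailed Student's t-test, and one-way ANOVA followed by Dunnett's test against a control group), the snippet below uses SciPy; the group arrays are invented placeholders, and scipy.stats.dunnett requires SciPy 1.11 or later.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
control = rng.normal(1.0, 0.2, size=6)     # e.g. isotype-control group
treated_a = rng.normal(1.4, 0.2, size=6)   # e.g. anti-IL-9 group
treated_b = rng.normal(1.1, 0.2, size=6)

# Two-group comparison: two-tailed Student's t-test.
t, p = stats.ttest_ind(control, treated_a)
print(f"t-test p = {p:.3f}")

# Multi-group comparison: one-way ANOVA, then Dunnett's test vs. the control.
f, p_anova = stats.f_oneway(control, treated_a, treated_b)
dunnett = stats.dunnett(treated_a, treated_b, control=control)  # SciPy >= 1.11
print(f"ANOVA p = {p_anova:.3f}; Dunnett p-values = {dunnett.pvalue}")
```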
Disclosure of potential conflicts of interest
No potential conflicts of interest were disclosed.
Fatty acid characterization of indigenous cyanobacterial strains isolated from five hot springs in Indonesia
Cyanobacteria are known to produce lipids with potential for biodiesel. Cyanobacteria isolated from Indonesia are called Indonesian indigenous cyanobacteria. This study was conducted to characterize the fatty acids of cyanobacteria originating from Indonesia, isolated from 5 hot springs. Fatty acid (FA) analyses were performed on 29 strains of cyanobacteria from 8 genera, using the extraction protocol of the SHERLOCK Microbial Identification (MIDI) System version 4.0, 2001 MIDI, Inc. The results are as follows. All strains of the 8 genera (Synechococcus, Merismopedia, Thermosynechococcus, Stanieria, Leptolyngbia, Westiellopsis, Mastigocladus, and Nostoc) contain saturated fatty acids (SFA) and unsaturated fatty acids (MUFA and PUFA). The content of saturated fatty acids ranged from 27.77 to 50.56%, while the content of unsaturated fatty acids ranged from 7.58 to 63.31%. All strains contain the SFA palmitic acid (16:0), ranging from 23.23 to 42.64%. Meanwhile, the unsaturated fatty acid palmitoleic acid (16:1 w7c) is present in almost all strains except Westiellopsis, ranging from 1.75 to 51.78%. The content of the unsaturated fatty acid oleic acid (18:1 w9c) ranges from 1.43 to 35.78%, mainly in Leptolyngbia, Westiellopsis, and Mastigocladus. All strains contain MUFA ranging from 7.58 to 63.31%, whereas PUFA is found only in filamentous strains (Leptolyngbia, Westiellopsis, Mastigocladus, and Nostoc). The results show that the 29 cyanobacteria strains of 8 genera have fatty acids with potential as biodiesel feedstock under certain conditions.
Introduction
Concerns about petroleum supplies, high energy prices, and energy-related environmental security have drawn much attention to finding renewable biofuels. One renewable biofuel is biodiesel. Biodiesel is a liquid fuel in the form of fatty acid methyl esters (FAME), i.e. methyl ester compounds with long fatty acid chains [1]. One of the benefits of biodiesel is that it is environmentally friendly, because it comes from renewable biological raw materials and does not increase CO2 emissions into the atmosphere, owing to the mechanism of carbon recirculation.
Biodiesel is generally derived from various types of vegetable oils. One biodiesel feedstock is microalgae. Microalgae are a group of autotrophic microorganisms that have no organs with apparent functional differentiation. Microalgae can be an appropriate alternative feedstock for next-generation biofuels because certain species are known to contain lipids, including high levels of fatty acids, in their biomass. Microalgal biomass can be extracted, processed, and refined into transportation fuels using currently available technology. In addition, microalgae can simultaneously reduce the amount of carbon in the air, have fast growth rates, can be cultivated on unused land and water unsuitable for other purposes, and their production is not seasonal and can be harvested daily.
Microalgae are categorized into eukaryotic and prokaryotic microalgae. One group of prokaryotic microalgae is the cyanobacteria. Cyanobacteria, also commonly referred to as Cyanophyta, blue-green algae, or Myxophyta [2,3], occur in various forms, i.e. unicellular, filamentous, or colonial. Cyanobacteria, like other microalgae, are also known to produce lipids which can then be extracted and utilized. Some cyanobacteria, e.g. Spirulina, have been shown to produce lipids with potential for biodiesel.
In cyanobacteria, the fatty acid properties of cells are important for comparing biochemical diversity. Analyses so far show that cyanobacteria are a unique and varied group with respect to fatty acid composition. As with bacteria, fatty acid composition in cyanobacteria can be used as a marker in taxonomy [4,5,6]. For example, there is a significant correlation between the complexity of a species' morphology and its fatty acid composition [7].
Whole-cell fatty acids are composed of saturated fatty acids (SFA), unsaturated fatty acids (MUFA/monounsaturated FA and PUFA/polyunsaturated FA), branched fatty acids, and hydroxy-substituted fatty acids. The saturated fatty acid common to all cyanobacteria, as well as to bacteria, is palmitic acid (16:0). Some cyanobacteria have a fatty acid composition like that of bacterial species, some like that of chloroplasts, and for some it is not yet clear whether the composition is of the bacterial or chloroplast type. A bacteria-like fatty acid composition is relatively common among unicellular cyanobacteria, whereas polyenoic fatty acids are most common in filamentous cyanobacteria [8].
Cyanobacteria are microorganisms that can live anywhere, commonly called cosmopolitan microorganisms. Indonesia, a tropical country with a warm climate all year long, often experiences cyanobacterial blooms in freshwater. Studies of microalgal community structure in some lakes and rivers in Indonesia indicate that many strains of cyanobacteria dominate certain waters. Some research has shown that cyanobacteria are easily found in UI waters [9,10,11], in some lakes and rivers in the Jakarta area [12,11,13], and in hot springs in Indonesia [14,15,16]. Cyanobacteria found and isolated from Indonesia are called Indonesian indigenous cyanobacteria.
For cyanobacteria to be used for biodiesel, the strains must be culturable and able to be grown in large quantities. In addition, strain characteristics also determine successful utilization. Optimal and stable strains can be used for studying genetic diversity, toxicity, and other benefits and applications.
Before being used as a biodiesel agent, the fatty acid content of cyanobacteria strains needs to be analyzed. Fatty acid screening is needed to determine which types of fatty acids are contained in these strains. This study was conducted to characterize the fatty acids of cyanobacteria originating from Indonesia. The study was conducted on cyanobacteria isolated from hot springs in Indonesia.
Microorganisms and Growth Medium
Cyanobacteria strains were isolated from hot springs and grown on a medium suitable for the growth of each strain. The strains used in this study were derived from five hot springs in West Java, Indonesia, namely Ciseeng, Mount Pancar, Rawa Danau Banten, Ciater, and Maribaya. Fatty acid characterization was performed on 29 strains of Indonesian hot-spring cyanobacteria. The growth media used were CT (Cyanobacteria TAPs), MA (Microcystis aeruginosa Medium), BG-11 (Blue Green no. 11) [17], and BBM (Bold Basal Medium) [18].
Research places
The research was conducted at the Laboratory of Plant Taxonomy, Department of Biology, FMIPA UI, and at the Laboratory Center of Excellence Indigenous Biological Resources Genome Studies (CoE IBR-GS), FMIPA UI. Maintenance of the hot-spring cyanobacteria strains was carried out in the Algae Culture Room, Laboratory of Plant Taxonomy, Department of Biology, FMIPA UI.
Culturing, preparing of starter cultures, and harvesting of cultures
Culture propagation began with the inoculation of 15 mL of working culture into an Erlenmeyer flask containing 150 mL of medium, using aseptic technique. Cultures were then incubated on the culture rack. The light intensity provided was 3000-5000 lux, with a 12L/12D photoperiod regulated by a timer. Incubation temperatures were 20 ± 5 °C, 30 ± 5 °C, and 50 ± 5 °C. Cyanobacteria cultures in test tubes and Erlenmeyer flasks are shown in Fig. 1.
Extraction of Fatty Acid
Sample extraction consisted of five steps: harvesting, saponification, methylation, extraction, and washing. Apart from harvesting, the steps followed the protocols of the SHERLOCK Microbial Identification (MIDI) System version 4.0 (2001, MIDI, Inc.). The extract preparation procedure is shown in Fig. 2.
Harvesting was done when the cyanobacteria cultures reached 20 days of age or had entered the early stationary phase. Cultures were transferred into centrifuge tubes and centrifuged at 4500 rpm for 20 min (modified from [19]). Centrifugation produced pellets (wet biomass) and supernatant; both were frozen in a freezer at 4 °C.
Fig. 2. The fatty acid extract preparation procedures
After harvesting, the cells were saponified. Three chemical compounds were used to saponify the cells, i.e. sodium hydroxide (certified ACS), methanol (HPLC grade), and deionized distilled water. These were mixed (150 mL deionized distilled water, 150 mL methanol, 45 g sodium hydroxide/NaOH) to form reagent 1 (the methanolic base). The strong methanolic base combined with heat kills and lyses the cells; fatty acids are hydrolyzed from the cell lipids and converted to their sodium salts (MIDI: 18).
Samples (5 ± 10 mg of cells) were transferred into clean, dry 13 mm x 100 mm screw-cap culture tubes. Reagent 1 (1.0 ± 0.1 mL) was pipetted into each culture tube in the batch, and each tube was sealed tightly with a clean Teflon-lined screw cap. The tubes were vortexed for 5-10 seconds, and the rack of batched sample tubes was placed in a boiling or circulating water bath at 95 °C-100 °C for 5 minutes. After five minutes, the tubes were removed from the boiling water and cooled slightly, vortexed again for 5-10 seconds, and returned to the water bath for an additional 25 minutes.
The next step was methylation. Two chemical compounds were used for methylation, i.e. 6.00 N hydrochloric acid (HCl) and methanol (HPLC grade); their mixture (325 mL HCl and 275 mL methanol) is called reagent 2, the methylation reagent. Methylation converts the fatty acids (as sodium salts) to fatty acid methyl esters, which increases the volatility of the fatty acids for gas chromatography (GC) analysis. The caps of the tubes from the saponification step were opened, 2.0 ± 0.1 mL of reagent 2 was added to each tube, each tube was sealed tightly with a clean Teflon-lined screw cap, and the tubes were vortexed for 5-10 seconds. Because of an excess of reagents, a granular precipitate (salt) may form; if so, the procedure simply continues. The tubes were heated in an 80 ± 1 °C water bath for 10 ± 1 minutes, then removed and quickly cooled to room temperature by placing them in a tray of cold tap water (ice-cold water was not necessary); the tubes were shaken to speed the cooling.
Extraction was done using an extraction solvent composed of two chemical compounds, i.e. hexane (HPLC grade) and methyl tert-butyl ether/MTBE (HPLC grade); MTBE was added to the hexane and stirred well (reagent 3). Fatty acid methyl esters were removed from the acidic aqueous phase and transferred to an organic phase with a liquid-liquid extraction procedure. The caps of the tubes from the methylation step were opened, 2.0 ± 0.1 mL of reagent 3 was added to each tube, and each tube was sealed tightly with a clean Teflon-lined screw cap. The batch of tubes was placed in a laboratory rotator and mixed gently end-over-end for 10 minutes. The caps were then opened again, and the aqueous (lower) phase was removed and discarded using a clean Pasteur pipette for each sample.
The final step was the base wash, done using reagent 4, which consisted of sodium hydroxide (NaOH, certified ACS) and deionized distilled water; NaOH pellets were added to stirring water until dissolved. The dilute base solution was added to the sample preparation tubes to remove free fatty acids and residual reagents from the organic extract. Residual reagents would damage the chromatographic system, resulting in peak tailing and loss of the hydroxy fatty acid methyl esters.
Each tube was then treated with 3.0 ± 0.1 mL of reagent 4, sealed tightly with a clean Teflon-lined screw cap, and rotated gently end-over-end for 5 minutes. Brief centrifugation (3 minutes at 2000 rpm) is recommended to clarify the interface between the phases when an emulsion is present.
Fatty Acid Analysis
The upper (solvent) phase was then removed and placed in a sample vial suitable for the automatic sampler mounted on the gas chromatograph, and the samples were analyzed according to the procedure of the SHERLOCK MIS (Microbial Identification System) version 4, MIDI, Inc.
The growth of cyanobacteria strains
Twenty-nine (29) strains were characterized for cellular fatty acids. The fatty acid (FA) analysis process depended strongly on the ability of the cyanobacteria strains to grow and on their growth rates.
The strains analyzed were grouped into three groups. The first group comprised strains that grew fast enough (normal growth), reaching peak growth at about 20 days (20-day-old cultures). The second group comprised strains with slow growth (30 to 40 days), while the third group comprised strains with very slow growth (more than 40 days). The first group (fast enough/normal growth) consists of strains of Synechococcus (HS-1
On the other hand, the hot-spring filamentous cyanobacteria strains did not fall into the specific groups of the Kenyon and Stanier (1972) system. This is likely because the hot-spring strain data only include di-unsaturated fatty acids (especially 18:2), whereas the Kenyon and Stanier (1972) system uses tri-unsaturated fatty acids (especially 18:3) and tetra-unsaturated fatty acids (especially 18:4) as the distinguishing characteristics among filamentous cyanobacteria [21].
In this study, it was only possible to differentiate unicellular coccoid cyanobacteria (genera Synechococcus, Merismopedia, Thermosynechococcus, and Stanieria) from filamentous cyanobacteria (genera Leptolyngbia, Westiellopsis, Mastigocladus, and Nostoc) based on total polyunsaturated fatty acids. Table 2, which shows the total of each type of fatty acid, indicates that total PUFA in the group of unicellular coccoid strains is lower than in the group of filamentous strains.
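To illustrate how a whole-cell fatty acid profile can be summarized into the class totals compared above, the following minimal Python sketch groups fatty acids into SFA, MUFA and PUFA by the number of double bonds encoded in the usual carbon:double-bond shorthand (e.g. 16:0, 18:2). The strain names and profile values are hypothetical illustrations, not measurements from this study.

```python
# Minimal sketch: summarize a fatty acid (FA) profile into SFA/MUFA/PUFA totals.
# Profiles map FA shorthand "carbons:double_bonds" to relative abundance (%).
# The numbers below are made-up illustrations, not data from this study.

def fa_class(name: str) -> str:
    double_bonds = int(name.split(":")[1])
    if double_bonds == 0:
        return "SFA"
    if double_bonds == 1:
        return "MUFA"
    return "PUFA"

def summarize(profile: dict[str, float]) -> dict[str, float]:
    totals = {"SFA": 0.0, "MUFA": 0.0, "PUFA": 0.0}
    for fa, percent in profile.items():
        totals[fa_class(fa)] += percent
    return totals

coccoid_strain = {"16:0": 55.0, "16:1": 20.0, "18:1": 15.0, "18:2": 10.0}
filamentous_strain = {"16:0": 40.0, "16:1": 15.0, "18:1": 15.0, "18:2": 30.0}

for label, profile in [("coccoid", coccoid_strain), ("filamentous", filamentous_strain)]:
    print(label, summarize(profile))
# A lower total PUFA would be expected for the coccoid strain, as in Table 2.
```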
Conclusions
The research showed that the fatty acids analyzed from 29 strains of Indonesian indigenous cyanobacteria vary widely and are independent of the cyanobacterial genus; strains of the same genus do not necessarily have the same fatty acids. All strains contain the saturated fatty acid 16:0 (palmitic acid).
Cost-effectiveness of the latent tuberculosis screening program for migrants in Stockholm Region
Introduction The majority of tuberculosis (TB) cases in Sweden occur among migrants from endemic countries through activation of latent tuberculosis infection (LTBI). Sweden has LTBI-screening policies for migrants that have not been previously evaluated. This study aimed to assess the cost-effectiveness of the current screening strategy in Stockholm. Methods A Markov model was developed to predict the costs and effects of the current LTBI-screening program compared to a scenario of no LTBI screening over a 50-year time horizon. Epidemiological and cost data were obtained from local sources when available. The primary outcome was the incremental cost-effectiveness ratio (ICER) in terms of societal cost per quality-adjusted life year (QALY). Results Screening migrants in the age group 13–19 years had the lowest ICER, 300,082 Swedish Kronor (SEK)/QALY, which is considered cost-effective in Sweden. In the age group 20–34, the ICER was 714,527 SEK/QALY (moderately cost-effective), and in all age groups above 34 the ICERs were above 1,000,000 SEK/QALY (not cost-effective). The ICER decreased with increasing TB incidence in the country of origin. Conclusion Screening is cost-effective for young cohorts, mainly between 13 and 19, while cost-effectiveness in the age group 20–34 years could be enhanced by focusing on migrants from the highest-incidence countries and/or by increasing the LTBI treatment initiation rate. Screening is not cost-effective in older cohorts regardless of the country of origin.
Introduction
Tuberculosis (TB) is a global public health concern, with about 10 million people falling sick and 1.4 million deaths worldwide annually [1]. Aside from the active form of the disease, an individual can have latent TB infection (LTBI), a latency state in which the person is infected but healthy, asymptomatic and non-infectious. LTBI can activate to active TB at any time [2]. It is estimated that around one-fourth of the world's population has LTBI, of which about 10% activates to symptomatic disease at some point in life. The risk of activation is highest soon after infection and is elevated by comorbidities such as HIV, diabetes, undernourishment, chronic kidney disease and immunocompromising treatments [3][4][5][6]. LTBI-screening tools as well as efficacious preventive treatments are available, and therefore, LTBI screening and management in risk groups is an important element of TB control [5,7].
In many low-incidence countries, domestic transmission is low and incident TB cases tend to be dominated by activation among immigrants from high-incidence countries who have acquired LTBI outside the host country (in the home country or during transit) [8,9]. LTBI screening and management in key populations such as migrants is, therefore, an important part of TB elimination strategies in many low-incidence countries [7]. However, there is a lack of evidence about the most effective and cost-effective screening strategies in terms of which migrants to screen (e.g., based on age and TB incidence in country of origin) and which screening algorithm to use [10,11]. Consequently, there is large variation in TB/LTBI-screening policies for migrants globally and across European countries [9].
In Sweden, about 90% of TB cases occur among the foreign born. Recent migrants from high-incidence countries are more likely to have had recent contact with a TB case and are, therefore, the groups with the highest TB incidence. Asylum seekers, quota refugees and some reunified family members are offered a post-arrival, voluntary, free-of-charge health examination (HE). All attending the HE are screened for TB symptoms and TB exposure risk factors (having been in a refugee camp or prison, or having had recent contact with a case of active TB). All individuals with a TB exposure risk factor or coming from a country with a TB incidence higher than 100/100,000 are systematically offered LTBI testing with Interferon-Gamma Release Assays (IGRA) or the Tuberculin Skin Test (TST). Chest X-ray (CXR) is done for all who have TB symptoms or are IGRA/TST positive [12,13]. Preventive treatment after LTBI diagnosis (and exclusion of active TB) is offered depending on age and risk factors. The regional guidelines in Stockholm county recommend treatment for all persons below the age of 20, women with recent pregnancy and people with an immunosuppressing condition or treatment; treatment for patients 20 years of age or older is recommended only if a risk factor for progression is present. IGRA is used almost exclusively in Stockholm. The most commonly prescribed treatment regimens in Stockholm are 4 months of daily rifampin or 3 months of a daily combination of isoniazid and rifampin [14].
Although in place since the 1990s, the Swedish migrant TB-screening strategy has never been evaluated in terms of its cost-effectiveness. Therefore, the aim of this study was to determine the cost-effectiveness of the current LTBI-screening strategy compared to a scenario of no screening program for subgroups based on age and country of origin.
Study population
A cohort of all migrants in Stockholm region attending HE and being tested for LTBI between January 1st, 2015 and December 31st, 2018 was established through extracting eligible persons from a registry called VeraAsyl, which is based on data from the Swedish Migration Agency. The cohort consisted of 5470 screened individuals who could be followed through screening, CXR, referral and visit to specialist care, LTBI treatment initiation and completion through linkages between VeraAsyl and electronic medical records in primary (screening data) and secondary care (treatment data). Details of the screening algorithm, the cohort, the methodology for ascertaining completion of each step of the screening and treatment cascade are reported elsewhere [15]. There was a statistically significant increase in LTBI positivity in higher age groups and among people from countries with higher TB burden (> 200/100,000) [15]. The empirical values for the parameters used in the present model are listed in appendix 1 in the support material.
Cost-effectiveness analysis overview
A societal-perspective Markov model with a 50-year time horizon was developed to assess incremental costs in relation to incremental improvements in long-term health outcomes (TB cases prevented and quality-adjusted life years, QALYs, gained) for two scenarios (arms of comparison): the current migrant LTBI-screening strategy as implemented in Stockholm 2015-2018 versus a hypothetical scenario of no systematic migrant LTBI screening and treatment.
The model was run for different age groups and groups by TB incidence in country of origin (using WHO national incidence estimates), with current screening vs. no screening as arms of comparison in all analyses. The reason for running the analysis by subgroups rather than for the whole cohort is that the prevalence of LTBI correlates with both age and TB epidemiology in the country of origin. Moreover, recommendations and practice concerning treatment indication for persons with LTBI vary by age, especially since older persons have a higher risk of adverse drug reactions, and the harm of preventive treatment may outweigh the benefit. Assessing cost-effectiveness for specific subgroups can help identify if and how targeting of screening affects cost-effectiveness and hence inform strategy adjustments.
The methodology used in this study follows a theoretical framework developed through a systematic review of methods in published CEA studies on LTBI screening in migrants [16].
Local cost data and Health-Related Quality of Life (HRQoL) data were discounted by 3% per annum according to the current Swedish recommendations [17]. The results are presented in terms of incremental cost-effectiveness ratios (ICER): the incremental cost of achieving one additional QALY or one additional prevented case with screening vs. no screening for each subgroup.
The analysis adopted a societal perspective as recommended in Sweden by The Dental and Pharmaceutical Benefits Agency (TLV) [17]. This perspective implies considering the costs and effects for the society including costs (and savings) for the health care services, costs for the Migration Agency (which funds the HE), out-of-pocket payment by the patient, and the productivity loss due to disease [18].
ICERs were judged against the cost-effectiveness thresholds recommended by the National Board of Health and Welfare [19], which considers an intervention very cost-effective if the ICER is below 100,000 SEK/QALY, cost-effective if the ICER is between 100,000 and 500,000 SEK/QALY, moderately cost-effective if the ICER is between 500,000 and 1,000,000 SEK/QALY, and not cost-effective if the ICER is above 1,000,000 SEK/QALY.
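As a minimal illustration of how an ICER is computed and mapped to these thresholds, the sketch below compares discounted total costs and QALYs of a screening and a no-screening scenario; all input numbers are invented placeholders, not results from this study.

```python
# Minimal sketch: ICER = (cost_screening - cost_no_screening) / (qaly_screening - qaly_no_screening),
# judged against the Swedish willingness-to-pay bands quoted in the text.
# Inputs are hypothetical placeholders, not results from this study.

def icer(cost_new: float, qaly_new: float, cost_old: float, qaly_old: float) -> float:
    return (cost_new - cost_old) / (qaly_new - qaly_old)

def classify(icer_sek_per_qaly: float) -> str:
    if icer_sek_per_qaly < 100_000:
        return "very cost-effective"
    if icer_sek_per_qaly < 500_000:
        return "cost-effective"
    if icer_sek_per_qaly < 1_000_000:
        return "moderately cost-effective"
    return "not cost-effective"

value = icer(cost_new=12_500_000, qaly_new=1510.0, cost_old=9_500_000, qaly_old=1500.0)
print(f"{value:,.0f} SEK/QALY -> {classify(value)}")  # 300,000 SEK/QALY -> cost-effective
```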
Decision analytic model design
The Markov model structure was developed in Excel. Individuals could reside in one of five mutually exclusive health states (Fig. 1): (a) "Healthy", referring to a state of no TB and no LTBI; (b) "LTBI", referring to undiagnosed LTBI or diagnosed but untreated or unsuccessfully treated LTBI; (c) "Treated LTBI", referring to successfully treated LTBI; (d) "Active TB", referring to active TB disease; and (e) "Dead", referring to death.
Movements between these states were determined by probabilities that change with age and cycle time. Age and country of origin were the main determinants of a positive IGRA test, and age was the main determinant of initiating LTBI treatment after a positive IGRA, as reported elsewhere [16]. The cycle length was chosen to be 6 months to accommodate the most common treatment periods. To simplify the model and avoid additional health states for active TB, the TB state was averaged to include drug-sensitive TB (87%), mono-resistant TB (10%) and multidrug-resistant TB (MDR-TB) (3%), reflecting the epidemiological situation in this cohort.
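A highly simplified cohort version of such a Markov model is sketched below: five states, 6-month cycles, and a transition matrix whose entries are placeholder values (not the calibrated probabilities of this study), illustrating how state occupancy, costs and QALYs are accumulated and discounted per cycle.

```python
import numpy as np

# Minimal Markov cohort sketch: 5 states, 6-month cycles, 50-year horizon.
# Transition probabilities, costs and utilities below are illustrative
# placeholders, NOT the calibrated values used in the study.
states = ["Healthy", "LTBI", "Treated LTBI", "Active TB", "Dead"]
P = np.array([  # row = from-state, column = to-state, per 6-month cycle
    [0.998, 0.000, 0.000, 0.000, 0.002],   # Healthy
    [0.000, 0.985, 0.000, 0.013, 0.002],   # LTBI (untreated)
    [0.000, 0.000, 0.995, 0.003, 0.002],   # Treated LTBI
    [0.850, 0.000, 0.000, 0.140, 0.010],   # Active TB (treated, mostly cured)
    [0.000, 0.000, 0.000, 0.000, 1.000],   # Dead (absorbing)
])
cost_per_cycle = np.array([0.0, 0.0, 0.0, 60_000.0, 0.0])   # SEK, placeholder
utility_per_cycle = np.array([0.5, 0.5, 0.5, 0.36, 0.0])     # QALYs per 6 months

occupancy = np.array([0.0, 1.0, 0.0, 0.0, 0.0])  # cohort starts in the LTBI state
total_cost = total_qaly = 0.0
for cycle in range(100):                          # 100 half-year cycles = 50 years
    discount = 1.03 ** -(cycle / 2)               # 3% annual discount rate
    total_cost += discount * occupancy @ cost_per_cycle
    total_qaly += discount * occupancy @ utility_per_cycle
    occupancy = occupancy @ P                     # advance one cycle

print(f"Discounted cost: {total_cost:,.0f} SEK, discounted QALYs: {total_qaly:.1f}")
```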
Sensitivity analyses
Epidemiological parameters that were not based on local empirical data (reactivation rate, death due to TB, treatment efficacy and secondary transmission) were varied separately within the ranges shown in Table 1.
Cascade of care
Cascade of care data were based on empirical data from the study site using the cohort described above and summarized in annex 1 [16].
Activation rate
The activation rate from LTBI to active TB is highest in the first 2 years after infection. It was assumed to be 2.5% per year for the first 2 years, and 0.1% per year for all following years [20,21].
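Because the model runs on 6-month cycles, annual activation risks have to be converted to per-cycle transition probabilities; one standard conversion (a sketch, assuming a constant hazard within the year) is shown below.

```python
# Convert an annual risk to a 6-month transition probability assuming a
# constant hazard within the year: p_cycle = 1 - (1 - p_annual) ** cycle_years.
def annual_to_cycle_probability(p_annual: float, cycle_years: float = 0.5) -> float:
    return 1.0 - (1.0 - p_annual) ** cycle_years

print(annual_to_cycle_probability(0.025))  # first 2 years after infection, ~0.0126
print(annual_to_cycle_probability(0.001))  # subsequent years, ~0.0005
```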
Secondary transmission
A fixed rate of 0.1 active secondary case per active case due to secondary transmission was assumed based on Swedish surveillance data including comprehensive whole-genome sequencing and epidemiological information for the period 2016-2018.
LTBI treatment efficacy
The treatment efficacy, meaning the reduction of risk of activation, varies in the literature. It was chosen as 77.5% for the 4-month rifampin regimen, which is the most commonly prescribed treatment within this patient group in Stockholm [22].
Death due to TB
Persons treated for active TB have a yearly risk of death increased by 7% according to Canadian data; this was assumed to apply in Stockholm as well due to the lack of national data [23].
Death
Mortality from general causes was taken from the official registry of general mortality rates in Sweden [24].
HRQoL data
Local HRQoL data were collected from persons treated for LTBI [24] and TB [25], respectively, at Karolinska University Hospital during 2017-2018. These HRQoL decrements were used in the present analysis: no decrement was associated with the diagnosis or treatment of LTBI, as the analysis showed no statistically significant difference in HRQoL between LTBI patients and the Stockholm population [25], while a decrement of 0.28 per year (0.14 per 6 months) was used for patients diagnosed and treated for TB [26]. Neither persons treated for LTBI nor TB patients included in these studies reported any severe adverse drug reactions; nausea, dizziness and body pain were the main side effects reported [24,25].
Cost data
The societal perspective costs included: direct medical costs of screening, treatment, and management (including auxiliary services such as translators); direct non-medical costs including transportation costs for patients and companions; and indirect costs in terms of productivity loss. Ingredient cost data were derived from different local databases and references summarized in Table 2. For the present screening program, costs were calculated by multiplying ingredient costs with the number of persons in the cohort completing each step in the LTBI screening and treatment cascade, based on empirical cascade data reported elsewhere [16]. Costs included: (1) the cost of screening during the HE, including IGRA and CXR for those doing these tests and 20% of the HE staff cost for all entering the cohort (HE includes several other elements and 20% of staff time was estimated, based on observations, to be attributed to TB screening and related information); (2) cost of subsequent TB clinic visit for those referred and treatment costs for those initiating treatment (doctor visit, nurse visit, medicines, tests, transportation, translator and productivity loss). The no screening scenario was not associated with any LTBI screening or treatment cost.
Costs (and savings) of treating active TB were calculated based on ingredient costs for active TB, including diagnosis, treatment, management and contact investigation, multiplied by number of modelled incident active TB cases over the 50-year time horizon in each scenario.
Due to the difficulties asylum seekers face in entering the Swedish labour market, the monthly cost of productivity loss was calculated based on the lowest 10th percentile monthly salary in 2016, which was 22,000 SEK, plus 31% social fees paid by the employer to the state [27,28]. The total monthly productivity loss of 28,912 SEK, divided over 21 working days of 8 working hours each, corresponds to a cost of 172 SEK/h. This cost was applied to all ages, as for children it was assumed that one adult would miss work to assist the child.
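The per-hour figure follows directly from the stated monthly total, as the short check below shows; note that the stated 28,912 SEK corresponds to an employer-fee rate of roughly 31.4% on the 22,000 SEK salary rather than exactly 31%.

```python
# Quick arithmetic check of the productivity-loss cost per hour used in the model.
monthly_salary = 22_000          # SEK, lowest 10th percentile salary in 2016
monthly_total = 28_912           # SEK, salary plus employer social fees (as stated)
implied_fee_rate = monthly_total / monthly_salary - 1
cost_per_hour = monthly_total / (21 * 8)   # 21 working days x 8 h/day

print(f"Implied employer fee rate: {implied_fee_rate:.1%}")   # ~31.4%
print(f"Productivity loss: {cost_per_hour:.0f} SEK/h")         # ~172 SEK/h
```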
Results
The present screening strategy, applied during the 4 years 2015-2018, was estimated to prevent 25 TB cases over the coming 50 years. The highest number of prevented cases, 18, was in the age group 13-19 (Table 3). Table 3 presents the average costs and effects per person and the ICER for screening compared to no screening by age group. The lowest ICER was in the age group 13-19 years, and this was the only age group with an ICER of less than 500,000 SEK/QALY. The ICER was in the range 500,000-1,000,000 SEK/QALY for the age groups 0-12 and 20-34 years, while it was above 1,000,000 SEK/QALY for the older age groups. Incremental cost per prevented active TB case followed the same trend as cost per QALY. Table 3 also summarizes total costs, number of incident cases and total QALYs with and without the current screening program, disaggregated by country TB incidence category within each age group. Screening individuals aged 13-19 had an ICER below 500,000 SEK/QALY for all country TB incidence categories. Screening individuals from countries with incidence > 100/100,000 had ICERs < 500,000 SEK/QALY only in the 13-19 age group. In the 20-34 age group, the ICER approached but was still above 500,000 SEK/QALY for screening people from countries with incidence > 300 per 100,000. Screening people over the age of 34 had an ICER > 1,000,000 SEK/QALY regardless of country of origin. Figure 2 shows the results of the sensitivity analyses. Although variation in each parameter resulted in large ICER changes, the ICER remained above 500,000 SEK/QALY in all variations for age groups 0-12 and 20-34, and above 1,000,000 SEK/QALY in all variations for those 35 years of age or older.
Scenario analysis
For the age group 20-34, we ran a scenario analysis to predict the effect of a hypothetical increase in the referral and treatment initiation rates. If the completion of these steps of the cascade would be the same as empirically observed for the age group 13-19 (70% visiting TB clinic, 65% of those starting treatment and 94% of them completing treatment),
Discussion
To our knowledge, this is the first study assessing the cost-effectiveness of migrant LTBI screening in Sweden. Our model estimates that the implementation of the present screening approach between 2015 and 2018 will prevent 25 TB cases over 50 years.
Cost-effectiveness of LTBI screening is dependent on patient and provider adherence [16]. We have previously shown that the present screening strategy is implemented largely according to policy and that patient adherence is high [15]. Still, this screening strategy is only clearly cost-effective for some subgroups, while it is not cost-effective for others. Despite the lack of absolute cost-effectiveness thresholds to guide health care prioritization in Sweden, the National Board of Health and Welfare usually considers an intervention cost-effective if the ICER is below 500,000 SEK/QALY. Our findings, therefore, suggest that screening individuals in the age group 13-19 is cost-effective and should continue to be recommended. With ICERs in the range of 500,000-1,000,000 SEK/QALY, screening in children aged 0-12 and adults aged 19-34 falls in the category of high cost per QALY ("moderately cost-effective"), but this is not an absolute reason for not recommending it. The reason for the higher ICER in the age group 0-12 than in 13-19 is the lower prevalence of positive IGRA, which means a much smaller fraction of those screened are candidates for LTBI treatment. However, on the individual level, LTBI treatment may be especially valuable in young children.
Our results show that a change in the present strategy towards referring more LTBI patients in the age group 19-34 to the TB clinic and initiating preventive treatment for them could increase the cost-effectiveness within this group. The same conclusion has been drawn concerning LTBI screening in Norway, another Scandinavian country with a similar TB epidemiology and profile of migrants, where the results emphasized the need to increase treatment initiation in IGRA-positive patients below the age of 35 years [29]. Therefore, more access to preventive treatment for this group may be recommended, especially since our local data did not show hepatotoxicity or other major adverse effects during treatment [25]. Treating this age group is in line with international LTBI management guidelines and recommendations [1].
Fig. 2 Results of the one-way sensitivity analysis for the base case in different age groups using the upper and lower values of parameters reported in Table 1. ICER incremental cost-effectiveness ratio; SEK Swedish Krona; QALY quality-adjusted life years; TB tuberculosis.
The restrictive policy for LTBI treatment in the age group 20-34 in Stockholm has been questioned and might be seen as conservative compared to other regions of Sweden, where treatment is recommended up to the age of 35 [15,29]. In addition, the cascade of care data collected in this cohort show high treatment completion rates [15], which is promising for the effectiveness and cost-effectiveness of LTBI screening. A recent review concluded that the effectiveness of LTBI programs in the EU/EEA is largely limited by a weak care cascade, with only a minority of screened migrants completing preventive treatment [30]; this does not seem to be a problem in the Stockholm setting.
The ICERs for screening persons above the age of 34 were well above the 1,000,000 SEK/QALY threshold regardless of country of origin. The present screening approach in this age group is, therefore, definitely not cost-effective. This is mainly because these groups are usually not eligible for LTBI treatment. Preventive treatment is generally not recommended for patients older than 35 in Sweden according to the Public Health Agency guidelines, mainly due to the higher risk of adverse drug reactions, which might outweigh the benefit of preventive treatment [29].
The main rationale for IGRA screening in this age group is to detect active TB, as it increases screening sensitivity compared to screening for TB symptoms only [29]. In our analysis, we have accounted for the reduction in secondary cases due to early detection of active TB, and the population-level preventive effect is minimal. In the 4-year period we assessed, only two active TB cases were detected through screening in this age group at the HE, and it is not known how many of them were detected through symptom screening vs. IGRA screening. Regardless, the added value of IGRA screening is probably very modest as long as preventive treatment is not offered. Whether to increase the age threshold for recommending preventive treatment, or to discontinue IGRA screening in this age group and replace it with symptom screening only or symptom screening plus CXR, requires further analysis.
The cost-effectiveness analysis performed by ECDC in four European countries (Spain, Portugal, the Netherlands and the Czech Republic) concluded that LTBI screening of migrants at entry is cost-effective when persons from high-incidence countries are targeted, which is in line with our results [10]. In addition, previous reviews and original studies [29][30][31][32] suggested that screening young migrants with IGRA is cost-effective, especially those coming from countries with TB incidence > 150/100,000 [32]. However, interpretation and comparison across settings and studies require caution due to methodological differences and the use of different modelling approaches and assumptions [16].
One of the study's strengths is that we relied on recent local data on epidemiology, costs, the cascade of care and HRQoL; in addition, our results are robust and supported by the sensitivity analysis, with a reasonable level of certainty about the cost-effectiveness of the screening strategy for the age group 13-19. However, assumptions and simplifications, such as the modelling of TB states and the costing approach, have been made and need to be taken into consideration when interpreting the results. Other limitations are: (1) not directly considering adverse drug reactions and their consequences in terms of HRQoL, especially hepatotoxicity; (2) assuming 100% sensitivity and specificity of IGRA; (3) assuming the same activation rate for all patients without differentiating by age, risk factors or time since infection; (4) assuming 100% efficacy of TB treatment, meaning that all patients treated for TB are assumed to return to "healthy" after finishing treatment.
Another limitation of this analysis is the fluctuation in IGRA positivity rates across incidence-rate categories in country of origin. This fluctuation makes it hard to reliably predict cost-effectiveness in different subgroups. Despite not influencing the general recommendations, the sensitivity analysis sheds light on the importance of IGRA positivity as a variable that can influence the results of LTBI screening. In sum, it should be kept in mind that migration patterns vary between and within countries and change over time, and with them the IGRA positivity in different groups. Consequently, the recommendations drawn from this analysis might not apply in another setting or in the future.
Conclusion
This study assessed the incremental costs of LTBI screening among migrants in Stockholm in relation to potential future health benefits and concluded that screening young individuals from high-incidence countries is cost-effective, especially persons in the age group 13-19 years who have recently migrated from a country with an incidence > 100/100,000. Effectiveness and cost-effectiveness of the screening program for patients aged 19-34 years could be enhanced through targeting the highest-incidence countries and ensuring a high rate of referral and initiation of preventive treatment. For individuals over the age of 34, this study shows that LTBI screening with the current strategy is not cost-effective even for migrants from high-incidence countries.
Table 4. Summary of epidemiological parameters used in the economic analysis.
Demand Side Management: Demand Response, Intelligent Energy Systems and Smart Loads
Demand side management is a helpful and vital tool in the smart grid energy management framework to reduce total power demand during peak demand periods and, hence, enhance grid sustainability and reduce overall cost. The proposed load scheduling approach is based on forecast electricity prices and pre-scheduled loads. It mainly uses load shifting of shiftable and interruptible loads and can be controlled by the centralized controller of the future smart grid. This approach optimizes the consumption curves of household, commercial and industrial consumers. The proposed algorithm minimizes the cost incurred by customers while considering customers' individual preferences for the loads, by setting priorities and preferred time intervals for load scheduling.
INTRODUCTION
The demand for electricity has been growing rapidly around the world for the last decade. To cope with the growing demand and evolving governmental and environmental regulations, it is necessary to question if the current level of consumption can continue in the long run. Conventional methods have focused on improving the efficiency and methods of power generation and distribution. However, this improvement is limited by existing infrastructure of power stations. Hence, instead of altering the generation side of the power systems, much attention has been shifted on the demand or the consumer side. Demand side management strategies generally focus on improving efficiency and reducing cost in load scheduling.
These strategies are not new; the idea has been explored since the 1970s. In the USA, a few utility companies have tried introducing incentives for load shifting and demand response using different electricity rates by time of day, which garnered a little support. Even back then its potential was recognized, and studies were performed on demand side management using different technologies and control methods such as clock-based controls and communication via power lines, telephone lines and radio.
Energy demand management, also known as demand-side management (DSM) or demand-side response (DSR), is the modification of consumer demand for energy through various methods such as financial incentives and behavioral change through education. The goal of demand-side management is to encourage the consumer to use less energy during peak hours, or to move the time of energy use to off-peak times such as nighttime and weekends. Peak demand management does not necessarily decrease total energy consumption, but could be expected to reduce the need for investments in networks and/or power plants for meeting peak demands. An example is the use of energy storage units to store energy during off-peak hours and discharge them during peak hours. A newer application for DSM is to aid grid operators in balancing intermittent generation from wind and solar units, particularly when the timing and magnitude of energy demand does not coincide with the renewable generation. DSM refers to initiatives and technologies that encourage consumers to optimise their energy use. The benefits from DSM are potentially two-fold; first, consumers can reduce their electricity bills by adjusting the timing and amount of electricity use. Second, the energy system can benefit from the shifting of energy consumption from peak to non-peak hour.
Particle swarm optimization (PSO) is a population-based stochastic optimization technique developed by Dr. Eberhart and Dr. Kennedy in 1995, inspired by the social behavior of bird flocking and fish schooling. PSO shares many similarities with evolutionary computation techniques such as Genetic Algorithms (GA): the system is initialized with a population of random solutions and searches for optima by updating generations. However, unlike GA, PSO has no evolution operators such as crossover and mutation. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. Each particle keeps track of its coordinates in the problem space associated with the best solution (fitness) it has achieved so far; this fitness value is also stored, and the position is called pbest. Another "best" value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the neighborhood of the particle; this location is called lbest. When a particle takes the whole population as its topological neighbors, the best value is a global best and is called gbest.
The particle swarm optimization concept consists of, at each time step, changing the velocity of (accelerating) each particle toward its pbest and lbest locations (in the local version of PSO). Acceleration is weighted by a random term, with separate random numbers generated for acceleration toward the pbest and lbest locations. In the past several years, PSO has been successfully applied in many research and application areas, and it has been demonstrated that PSO obtains better results in a faster, cheaper way compared with other methods.
Another reason that PSO is attractive is that there are few parameters to adjust. One version, with slight variations, works well in a wide variety of applications. Particle swarm optimization has been used both for approaches that can be applied across a wide range of applications and for specific applications focused on a specific requirement. PSO is a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. It solves a problem by having a population of candidate solutions, here dubbed particles, and moving these particles around in the search space according to simple mathematical formulae over each particle's position and velocity. Each particle's movement is influenced by its local best known position, but it is also guided toward the best known positions in the search space, which are updated as better positions are found by other particles. This is expected to move the swarm toward the best solutions. PSO is a metaheuristic, as it makes few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics such as PSO do not guarantee that an optimal solution is ever found. Also, PSO does not use the gradient of the problem being optimized, which means PSO does not require the optimization problem to be differentiable, as is required by classic optimization methods such as gradient descent and quasi-Newton methods.
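A minimal sketch of the velocity and position update described above is given below; the inertia and acceleration coefficients, the swarm size and the test function are illustrative choices, not parameters taken from this paper.

```python
import random

# Minimal global-best PSO sketch for minimizing a test function.
# Coefficients (w, c1, c2), swarm size and the sphere objective are
# illustrative choices, not values prescribed by this paper.
def sphere(x):
    return sum(xi * xi for xi in x)

def pso(objective, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    gbest = min(pbest, key=objective)[:]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])   # pull toward personal best
                             + c2 * r2 * (gbest[d] - pos[i][d]))     # pull toward global best
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < objective(gbest):
                    gbest = pos[i][:]
    return gbest, objective(gbest)

print(pso(sphere))  # the best position should approach the minimum at the origin
```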
METHODOLOGY
This paper proposes a load scheduling approach based on forecast electricity prices and pre-scheduled loads. It mainly uses the method of load shifting of shiftable and interruptible loads and can be controlled by the centralized controller of the future smart grid. The main objective of this demand side management approach is to minimize cost for the consumers while maintaining their specified preferences for the pre-scheduled loads and, in turn, maximizing their satisfaction. Demand side management can be performed daily in smart grids: the demand side management controller asks for the user inputs for the pre-scheduled loads. For each load, the user inputs the starting time, the deadline, the load duration and the ideal time for the load to run. Furthermore, the loads are classified into the following three classes.
Class 1 includes uncontrollable loads; these loads run at the ideal times specified by the users and are not considered for load shifting. Class 2 includes controllable but un-interruptible loads; these loads run continuously for the given load duration, within the time interval specified by the users, but are shifted to achieve minimum cost. Class 3 includes controllable and interruptible loads, which can be cut off while running and resumed separately within the time frame specified by the users. The class types of the loads also contribute to minimizing the total cost of the load scheduling, as illustrated in the sketch below.
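The following sketch illustrates the core idea for a Class 2 (shiftable, un-interruptible) load: given an hourly price forecast, the load's whole run is shifted to the cheapest allowed window. The price vector, load parameters and time resolution are invented for illustration and are not taken from this paper.

```python
# Minimal sketch: shift a Class 2 (un-interruptible) load to the cheapest
# contiguous window inside its user-specified interval. Hourly resolution,
# prices and load parameters are illustrative only.
prices = [3.0, 2.5, 2.0, 1.8, 1.7, 2.2, 3.5, 4.0, 4.5, 4.2, 3.8, 3.0,
          2.8, 2.6, 2.4, 2.9, 3.6, 4.8, 5.0, 4.6, 3.9, 3.2, 2.7, 2.3]  # price per kWh

def schedule_uninterruptible(power_kw, duration_h, earliest, deadline):
    """Return (start_hour, cost) of the cheapest feasible contiguous run."""
    best_start, best_cost = None, float("inf")
    for start in range(earliest, deadline - duration_h + 1):
        cost = power_kw * sum(prices[start:start + duration_h])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

# Example: a 2 kW washing machine that must run for 2 h between 06:00 and 22:00.
start, cost = schedule_uninterruptible(power_kw=2.0, duration_h=2, earliest=6, deadline=22)
print(f"Run from {start}:00 for 2 h at a cost of {cost:.2f}")
```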
DEMAND SIDE MANAGEMENT
Energy efficiency
Using less power to perform the same tasks. This involves a permanent reduction of demand by using more efficient load-intensive appliances such as water heaters, refrigerators, or washing machines.
Demand response
Any reactive or preventative method to reduce, flatten or shift demand. Historically, demand response programs have focused on peak reduction to defer the high cost of constructing generation capacity. However, demand response programs are now also being looked to for help in changing the net load shape (load minus solar and wind generation) to support the integration of variable renewable energy. Demand response includes all intentional modifications to the electricity consumption patterns of end-user customers that are intended to alter the timing, the level of instantaneous demand, or the total electricity consumption. Demand response refers to a wide range of actions which can be taken at the customer side of the electricity meter in response to particular conditions within the electricity system (such as peak-period network congestion or high prices), including the aforementioned IDSM.
Dynamic demand
Advance or delay appliance operating cycles by a few seconds to increase the diversity factor of the set of loads. The concept is that by monitoring the power factor of the power grid, as well as their own control parameters, individual intermittent loads would switch on or off at optimal moments to balance the overall system load with generation, reducing critical power mismatches. As this switching would only advance or delay the appliance operating cycle by a few seconds, it would be unnoticeable to the end user. In the United States, in 1982, a (now-lapsed) patent for this idea was issued to power systems engineer Fred Schweppe. This type of dynamic demand control is frequently used for air conditioners; one example is the Smart AC program in California.
Distributed Energy Resources
Distributed generation, also known as distributed energy, on-site generation (OSG) or district/decentralized energy, is electrical generation and storage performed by a variety of small, grid-connected devices referred to as distributed energy resources (DER). Conventional power stations, such as coal-fired, gas and nuclear powered plants, as well as hydroelectric dams and large-scale solar power stations, are centralized and often require electric energy to be transmitted over long distances. By contrast, DER systems are decentralized, modular and more flexible technologies that are located close to the load they serve, albeit having capacities of only 10 megawatts (MW) or less. These systems can comprise multiple generation and storage components; in this instance they are referred to as hybrid power systems. DER systems typically use renewable energy sources, including small hydro, biomass, biogas, solar power, wind power, and geothermal power, and increasingly play an important role in the electric power distribution system. A grid-connected device for electricity storage can also be classified as a DER system, and is often called a distributed energy storage system (DESS). By means of an interface, DER systems can be managed and coordinated within a smart grid. Distributed generation and storage enables collection of energy from many sources and may lower environmental impacts and improve security of supply.
CONCLUSION
The demand side management strategy proposed in this paper proves to be effective in producing substantial cost savings while reducing the peak demand. The simulation also shows that it can be used to find an optimal load schedule for a large number of different devices over a day. This method primarily works with load-shifting techniques; hence, further studies can be conducted on using load curtailment when the algorithm produces a peak demand in less expensive time intervals because the user-specified loads are too flexible.
A Comparative Study of Energy Consumption for Residential HVAC Systems Using EnergyPlus
Energy conservation and sustainability have become an attractive field for research due to the growth in population and the continuing search for better living standards. Heating, Ventilation, and Air Conditioning (HVAC) systems account for almost half of the energy consumed in buildings and around 10 to 20% of total energy consumption in developed countries. In general, the trend of installing central HVAC systems is increasing in residential and commercial buildings. In this research, a study of the energy consumption of HVAC systems in residential buildings has been conducted with the aim of comparing those systems from an energy consumption point of view. The final goal of this research is to reduce the energy requirements of the residential building sector to save energy and reduce carbon emissions. A medium-size residential building in the city of Tripoli, Libya, was selected as a case study. EnergyPlus building simulation software along with OpenStudio software was used to model the house and the HVAC systems. The results show that the virtual component "ideal air loads" used in EnergyPlus is very easy to use; however, its calculated energy consumption is overestimated compared to the other models. Therefore, using that component can be misleading and may result in high monthly and annual energy consumption figures. The results also show that, in a residential building, unitary systems have the lowest annual energy consumption compared to the other models. It was concluded that variations in the energy consumption of the considered HVAC systems decrease as the coefficient of performance (COP) increases, and vice versa.
Introduction
One-third of the world's energy consumption is associated with buildings [1]. Air conditioning systems consume more energy than any other devices used in building services, estimated at about half of the energy consumed in buildings and between 10% and 20% of the total energy consumed in developed countries [2]. In Libya, the energy used to cool and heat residential buildings is about 18% of domestic energy consumption and about 6% of total energy [3], as shown in Figure 1. Hence, reducing the energy requirements of HVAC systems in buildings may lead to an improvement in building energy saving and efficiency.
A significant amount of research has been conducted recently reviewing different HVAC systems, their energy consumption, and the methods used in modeling and simulating those systems. Reviews and comparisons of modeling methods for HVAC systems have received considerable attention [4][5][6][7][8]. Vakiloroaya et al. [9] focused on various strategies used to save the energy required for operating HVAC systems and carried out a comparative study of different approaches that improve the performance of HVAC systems. Albayyaa et al. [10] compared energy consumption for air conditioning systems among various residential buildings; their results show that modern buildings require 53% less energy compared to old buildings. Zhou et al. [11,12] compared HVAC models in three building energy modeling programs (BEMPs). They concluded that although the software is capable of modeling conventional HVAC systems, there were discrepancies in the results due to differences in input parameters and control strategies; they also found that EnergyPlus has more comprehensive component models than the other programs. Hasan et al. [13] used combined simulation and optimization to minimize the life cycle cost of a detached house, using the IDA ICE 3.0 simulation program and the GenOpt 2.0 optimization program to optimize five selected design variables in the building construction and HVAC system. Much research has been conducted using EnergyPlus, a building simulation software created and updated by the US Department of Energy [14]. The applications of this software are numerous; for example, Shabunko et al. [15,16] used EnergyPlus for benchmarking, while Alghoul et al. [17] used EnergyPlus to study energy consumption and energy saving through different types of double-glazed windows. Fumo et al. [18] used EnergyPlus for energy consumption calculations in order to develop a simple methodology for estimating energy consumption. It is important to mention that the use of EnergyPlus in energy-related research has been expanding and cannot be fully presented in this paper.
The effect of HVAC systems on the environment is significant in mainly two ways. Firstly, refrigerants create a greenhouse effect that leads to global warming. Secondly, carbon dioxide generated from the energy used to power HVAC systems also contributes to the greenhouse effect. Therefore, reducing the energy requirements and size of HVAC systems might help reduce global warming.
The main goal of this study is to compare HVAC systems using EnergyPlus software from an energy point of view. Those HVAC systems are: ideal air loads (a virtual component used in EnergyPlus software), Variable Refrigerant Flow, Packaged Rooftop Heat Pump, Packaged Terminal Heat Pump, and Unitary systems. The study also aims to reduce energy consumption in residential buildings due to HVAC systems and to analyze the influence of related parameters.
Methodology
In this work the energy consumption of HVAC systems in residential buildings has been studied. The building is located in the city of Tripoli, Libya, and is described in detail in the next section.
The EnergyPlus simulation software/engine together with SketchUp and OpenStudio was used to calculate the required cooling and heating capacity and the energy consumption of the whole building. SketchUp was used to draw and create the model geometry, while OpenStudio was used to modify model properties, namely constructions, materials, occupancy, internal loads, and schedules [19]. EnergyPlus was then used to perform an annual energy simulation [20,21] in order to estimate the building's annual energy consumption. Finally, the obtained results are presented in OpenStudio in SI units. EnergyPlus carries out a zone heat balance for the load calculations; the zone heat balance calculations are divided into surface and air components. The TARP and DOE-2 algorithms were selected for inside and outside surface convection, respectively, and the Conduction Transfer Function (CTF) solution algorithm was chosen for the calculation of conduction through walls [20]. EnergyPlus calculates the heating and cooling loads required to maintain each zone at preset setpoint conditions and thereby calculates the annual energy requirements of the HVAC systems and of the entire building.
The ideal air loads component along with four HVAC systems, namely Variable Refrigerant Flow (VRF), Packaged Rooftop Heat Pump (PRHP), Packaged Terminal Heat Pump (PTHP) and a Unitary system, were modeled. The total, cooling, and heating energy consumption results were estimated and compared. The influence of varying the coefficients of performance (COP) of these systems was also investigated.
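As a rough illustration of why electricity consumption differences shrink as COP rises, the sketch below converts hypothetical annual cooling and heating loads into electricity use for a range of COP values; the load values and COP pairs are illustrative, not results from this study.

```python
# Rough sketch: electricity use = thermal load / COP, so the absolute
# difference between two systems shrinks as COP increases.
# Annual loads below are hypothetical, not simulation results from this study.
annual_cooling_load_kwh = 20_000
annual_heating_load_kwh = 5_000

for cooling_cop, heating_cop in [(2.5, 3.5), (3.0, 5.0), (4.0, 6.0)]:
    electricity = (annual_cooling_load_kwh / cooling_cop
                   + annual_heating_load_kwh / heating_cop)
    print(f"COP (cool/heat) = {cooling_cop}/{heating_cop}: "
          f"{electricity:,.0f} kWh of electricity per year")
```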
The Case Study
A house located in the city of Tripoli, Libya, was modeled using the EnergyPlus-OpenStudio plugin. The house is a two-floor building, as shown in Figure 3, with a total floor area of 280 m² (net conditioned building area). The ground floor contains a kitchen, bathrooms, guest rooms and a living area, while the first floor is a sleeping floor that contains bedrooms and bathrooms. This type of house is considered a contemporary house widely found in Libya. The overall window-to-wall ratio (WWR) of the building is 15.64%, distributed as 11.75%, 0.0%, 18.64%, and 16.42% for the north-, east-, south-, and west-facing walls, respectively. The eastern walls of the house are considered adiabatic walls as they are adjacent to neighbors' walls.
The values of the overall heat transfer coefficients of external walls, roof, windows and doors of this house are described in Table 1.
HVAC Models and Simulation Parameters
This section presents the main design parameters and the specifications of the HVAC systems/models considered in this work. Table 2 contains the main assumptions related to the design parameters applied in the simulation of all HVAC models.
Several types of HVAC systems are available in EnergyPlus; those selected for this study are described below, with their settings and characteristics listed in the corresponding sections. The cooling and heating supply air temperatures are set to 14°C and 40°C, respectively, for all studied systems, while outdoor air ventilation is set to zero for all considered systems.
Ideal Air Loads Component
The Ideal Air Loads component is a virtual component built into EnergyPlus that represents an idealized HVAC system. The energy consumption of the Ideal Air Loads system is reported in the results as district heating and cooling, and does not appear as cooling and heating loads [20].
The component is assumed to supply cooling or heating air to the zone to meet the zone load, or up to user-specified limits. It is typically used when the user has no interest in modeling the HVAC system and plant, or has no detailed knowledge of HVAC system types, allowing preliminary results to be obtained without specifying operating parameters or a particular HVAC system.
Unitary System
A unitary system model is defined as a single unit that coordinates the operation of HVAC components. The model used here comprises a fan, a direct-expansion heating coil, a direct-expansion cooling coil and an electrical supplemental heating coil. The fan can be operated in cycling or continuous supply mode. The specifications of the system are listed below.
Packaged Terminal Heat Pump (PTHP)
Packaged Terminal Heat Pumps are through-the-wall units. They represent an easy way to heat and cool small spaces. The specifications of the PTHP model used in this study are as follows:
- Fan efficiency: 70%
- Pressure rise: 250 Pa
- Motor efficiency: 90%
- Rated heating COP = 5
- Rated cooling COP = 3
Packaged Rooftop Heat Pump (PRHP)
The Packaged Rooftop Heat Pump used in this study comprises a direct-expansion heating coil, a direct-expansion cooling coil, an electrical heating coil and a constant-flow fan. More details about the system are listed below:
- Electrical heating coil efficiency: 90%
- Constant volume fan
- Fan efficiency: 70%
- Pressure rise: 500 Pa
- Motor efficiency: 90%
- Rated heating COP = 5
- Rated cooling COP = 3
Variable Refrigerant Flow System (VRF)
VRF systems are considered lightweight and flexible; each component can be transported and fitted easily, and several modules can be combined to cover high cooling and heating loads. VRF systems are also known for their precise temperature control. Although the coefficients of performance of VRF systems are, in practice, superior to those of the other systems, they have been assigned the same values assumed for the other systems. Some of the operating parameters of the system are listed here:
- VRF zone terminal
- Fan efficiency: 60%
- Pressure rise: 300 Pa
- Motor efficiency: 80%
- Rated heating COP = 5
- Rated cooling COP = 3
Results and Discussion
The results presented below focus on the electricity consumption of the proposed HVAC systems/models. These models are frequently used when simulating buildings' annual energy consumption with EnergyPlus.
The total building energy consumption is shown in Figure 4 for the various HVAC models. It ranges from 44009 kWh to 71213 kWh. The PTHP and Unitary systems require the lowest energy consumption, about 44000 kWh, while the VRF and Packaged Rooftop Heat Pump systems require 59725 kWh and 54508 kWh, respectively.
The Ideal Air Loads component/model records the maximum energy consumption of 71213 kWh, an increase of about 61% compared to the Unitary system. This value is high because the Ideal Air Loads model operates at 100% efficiency, i.e., its COP equals 1, whereas the other models have COPs of 3 and 5 for cooling and heating, respectively. In this sense, the Ideal Air Loads result can be regarded as the space energy needed to heat or cool the air rather than the energy consumption of an actual HVAC system.
To look further into the details of the energy consumption of the HVAC systems, Figure 5 compares the different categories of energy consumption among the selected systems. It is evident from the figure that although the Ideal Air Loads model has no equipment, it shows the maximum energy consumption for the heating and cooling processes. Fan energy consumption is, as expected, higher for the central systems than for the other units. In general, the energy required for cooling is much greater than that needed for heating; for example, cooling, heating and fan energy account for 62%, 31%, and 7%, respectively, of the total Unitary system energy requirement. Figure 6 presents the monthly HVAC energy consumption for the selected systems. The trend is almost the same for all systems. The maximum energy consumption occurs in July, except for the VRF system, where it occurs in August. The minimum energy consumption occurs in April and November, with comparable values for the central systems and the Ideal Air Loads component, owing to the low external thermal loads in these months. Figure 7 compares the energy consumption of the different models when all models are assigned a value of 1 for both the cooling and heating coefficients of performance. In this case, the Ideal Air Loads model gives the lowest energy consumption; the Unitary and PTHP models are very close to it, whereas the other systems show higher energy consumption.
Finally, the effect of changing the cooling COP and heating COP on the cooling and heating energy consumption is studied for all models. As the Ideal Air Loads model is assumed to have 100% efficiency, it is represented in Figure 8 and Figure 9 by a single energy consumption point, marked with a diamond (♦). Figure 8 presents the relation between cooling energy consumption and cooling COP. At COP = 1, the energy consumption obtained using Ideal Air Loads is lower than that of the other models. However, as the COP of the other models increases, their energy consumption decreases drastically and the differences between the models become smaller. Figure 9 shows heating energy consumption versus heating COP. The trend of the change in energy consumption with COP is similar to the previous case. However, the energy consumption estimated with the Ideal Air Loads component corresponds to COP = 1 and is therefore high in comparison.
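The convergence seen in Figures 8 and 9 follows directly from the relation between space load and electrical input: if the Ideal Air Loads result is taken as the space load, the electricity drawn by a real system is roughly that load divided by its COP. The short sketch below illustrates this; the load values are illustrative placeholders, not results from this study.

```python
# Illustrative sketch of why energy-consumption curves converge at high COP.
# Loads are hypothetical placeholders, not the simulated values from this study.
cooling_load_kwh = 40000.0   # space cooling energy demand (ideal-loads style)
heating_load_kwh = 20000.0   # space heating energy demand

for cop in [1, 2, 3, 4, 5, 6]:
    cooling_elec = cooling_load_kwh / cop   # electrical energy for cooling
    heating_elec = heating_load_kwh / cop   # electrical energy for heating
    print(f"COP={cop}: cooling={cooling_elec:8.0f} kWh, heating={heating_elec:8.0f} kWh")
```

Because consumption scales as load/COP, the absolute difference between any two systems shrinks rapidly as COP grows, which is the behavior reported above.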
Conclusions
HVAC systems consume a considerable share of nationally produced energy, and substantial research focuses on improving their selection and operation. This paper presents a detailed study of the energy consumption of five HVAC models, namely Ideal Air Loads, Variable Refrigerant Flow, Packaged Rooftop Heat Pump, Packaged Terminal Heat Pump, and Unitary, all of which are frequently used by EnergyPlus users.
A medium-size residential house in the city of Tripoli, Libya, was taken as a case study. It consists of two floors with a 280 m² area and has a total WWR of about 16%. The materials and constructions selected for the residential building model are those most commonly used in Libya today.
The results showed that (1) the virtual component "Ideal Air Loads" used in EnergyPlus is very easy to implement; however, it is not an HVAC system, and the resulting energy consumption is very high because realistic COP values are not included, so its results can be misleading when an estimate of energy consumption is needed; (2) although central air conditioning is becoming more popular in residential and commercial buildings for aesthetic, comfort and ease-of-use reasons, the Unitary and PTHP systems consume the least energy compared to the other systems; and (3) the difference between the energy consumption of the different HVAC systems decreases as the coefficient of performance increases, for both heating and cooling.
The Effect of Decentralization Policy in Improving Community Welfare Regional Government of Special Yogyakarta-Indonesia
The Special Region of Yogyakarta is a special region at the provincial level in Indonesia formed from the fusion of the Sultanate of Yogyakarta and the Paku Alaman Duchy. It is located in the southern part of the Indonesian island of Java and is bordered by the Province of Central Java and the Indian Ocean. Despite its special status, the Special Region of Yogyakarta must also act as an autonomous region implementing decentralization. However, the problem is that the policies, implementation and implications of decentralization have not been able to improve the welfare of the people. This can be seen from the widening income gap in the Special Region of Yogyakarta, which is even above the national figure. The Government of the Special Region of Yogyakarta is expected to be able to promote and implement programs focused on people's welfare. Development must be directed toward creating jobs and increasing community income so that people's welfare improves. This study aims to measure the effect of decentralization policies on improving the welfare of the people in the Special Region of Yogyakarta Government.
Introduction
The Special Region of Yogyakarta (DIY) is a special region at the provincial level in Indonesia formed from the fusion of the Sultanate of Yogyakarta and the Paku Alaman Duchy. It is located in the southern part of the Indonesian island of Java and is bordered by the Province of Central Java and the Indian Ocean. The region, which has an area of 3185.80 km², consists of one city and four districts. According to the 2010 population census it has a population of 3,452,390 people, comprising 1,705,404 men and 1,746,986 women, and a population density of 1,084 people per km². (Note 1) In the procedure for filling the positions of governor and vice governor, one of the conditions that must be fulfilled is that the candidate for Governor holds the throne as Sultan Hamengku Buwono and the candidate for Deputy Governor holds the throne as Duke of Paku Alam. (Note 2) In the context of a unitary state, the policy of granting autonomy to the regions, as practiced in Indonesia, is a distinctive policy. Here, the granting of authority rests on several justifications: first, in a unitary state, the granting of autonomy is a manifestation of the people's sovereignty as one nation, not of the sovereignty of various independent community groups.
Second, the central government cannot adequately regulate and manage the interests of a society that is very diverse and spread over a very wide geographical range.
Based on these justifications, the central government decentralizes its authority to the regions so that the regions can meet the interests and aspirations of the local people more effectively and efficiently.
In addition to these reasons, Bird and Vaillancourt (1998) state that decentralization has become a popular policy lately because this policy model promises economic efficiency, program cost-effectiveness, accountability, increased resource mobilization, reduced disparity, increased political participation, and strengthened democracy and political stability (Note 8). Through decentralization, local governments play a greater role in development because they now have the authority and responsibility to carry out community development in their areas.
Community Welfare
The term community welfare itself should not be defined in a narrow sense that uses only Gross Regional Domestic Product (GRDP) per capita as an approach; it must involve several other indicators that support the concept of public welfare in a broad sense. In Republic of Indonesia Law No. 6 of 1974 concerning Basic Provisions for Social Welfare, the general explanation states that "the field of social welfare is very broad and complex, which includes among others, aspects of education, health, religion, labor, social welfare and others".
From this definition, it can be concluded that the assessment of welfare has two dimensions, namely the physical and psychological dimensions. The psychological dimension is very subjective and not easy to measure, so most studies use the physical dimension as a measure of people's welfare. Consequently, many welfare measures based on the physical dimension have been developed, for example the Human Development Index, the Physical Quality of Life Index, and Basic Needs measures.
Theoretically, the level of community welfare in the regions should be better once the decentralization policy is implemented, given that the quality of information and the level of transparency in the administration of governance are better in the era of decentralization than in the era of centralization. However, although decentralization seems to have been a "profitable" style of government management over the last few decades, Bardhan and Mookherjee (2005) note that the effects of decentralization differ from country to country (Note 9). This means that decentralization policies cannot guarantee improvement; the outcomes depend on local conditions. In other words, decentralization is not always effective in improving people's welfare. Elhiraika (2007) states that the reason behind this phenomenon is the "lack of commensurate revenue assignments, inadequate access to financial markets, and lack of necessary administrative capacity" (Note 10).
Until now, quantitative studies on decentralization have mostly used fiscal or financial approaches. However, Bird, cited in Hong (2011), argues that because the degree of autonomy differs across local governments, it would be highly inappropriate to measure decentralization only from the fiscal aspect (Note 11). Zimmerman states that decentralization should be measured not by a single index but by multiple interrelated indicators covering four dimensions, namely structure, finance, functions, and personnel (Note 12). Stephens (in Hong, 2011) also proposed three dimensions, namely the financial, functional, and personnel dimensions, to measure the degree of decentralization of state governments in the United States (Note 13).
Research Objects
In this study, the object of research, which is also the population, is the Government of the Special Region of Yogyakarta, consisting of Bantul Regency, Gunung Kidul, Kulon Progo, Sleman and Yogyakarta City. The study uses data from the Government of the Special Region of Yogyakarta for the period 2014-2019.
Decentralized Variable
Considering what can feasibly be measured, the decentralization variable in this study is measured along 3 (three) dimensions (a small computation sketch follows this list): a. The degree of Fiscal Decentralization is measured as the percentage of Original Local Revenue to total regional revenue.
b. The degree of Functional Decentralization is measured as the percentage of regional government expenditure out of total national government expenditure. As explained by Hong (2011), the most accurate approach for measuring functional decentralization is the percentage of functions carried out by local governments out of total government functions; however, time-series data for this approach are difficult to collect, so this research uses the expenditure approach (Note 14).
c. The degree of Personnel Decentralization is measured as the percentage of regional Civil Servants relative to central Civil Servants.
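As a concrete illustration of the three measures above, the short sketch below computes them from a single hypothetical row of district data; the input figures are placeholders, not actual data from the Special Region of Yogyakarta.

```python
# Hypothetical illustration of the three decentralization measures (a-c above).
# All input figures are placeholders, not actual DIY data.
original_local_revenue = 450.0      # billion Rp
total_regional_revenue = 1800.0     # billion Rp
regional_expenditure   = 1700.0     # billion Rp
national_expenditure   = 250000.0   # billion Rp
regional_civil_servants = 9500
central_civil_servants  = 42000

fiscal_dec     = 100 * original_local_revenue / total_regional_revenue
functional_dec = 100 * regional_expenditure / national_expenditure
personnel_dec  = 100 * regional_civil_servants / central_civil_servants

print(f"Fiscal decentralization:     {fiscal_dec:.2f}%")
print(f"Functional decentralization: {functional_dec:.2f}%")
print(f"Personnel decentralization:  {personnel_dec:.2f}%")
```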
Control Variables
The control variables consist of the Investment and Labor variables. They are included in order to account for variables outside those studied that may affect the level of community welfare in an area. The labor variable uses the percentage of the labor force that is employed. The investment variable is the ratio of total Domestic Investment and Foreign Investment to GRDP in each Regency/City in the Special Region of Yogyakarta. The investment and labor variables were selected as controls because they are believed to influence the economic growth of a region, as in the study of Jing Jin and Heng-fu Zou (2000) (Note 15).
a. Variable Gross Regional Domestic Product per capita
The dependent variables in this study relate to community welfare defined in a broad sense, covering several aspects: the regional economy, infrastructure, education and health. The variable used for the regional economic aspect is Gross Regional Domestic Product per capita, based on the logic that GRDP per capita indicates how much value added is generated in each region and is therefore a basic indicator for understanding the economic condition of an area as a whole.
b. Variable Length of Road Increase Per Capita
The variable used in this study is the variable length of road per capita. This variable is considered important in supporting the creation of regional welfare. Judging from the theory, the length and condition of the roads are closely related to the comfort of life of each citizen which can be further linked to aspects of community welfare.
c. Education Indicator
To measure education indicators, this study uses the approach of the number of school buildings per population and the ratio of students to teachers for the education level of senior high school. The first variable shows the level of accessibility of the community to secondary education infrastructure, while the second variable shows the quality of education level in terms of the composition of the teacher to student ratio.
d. Health Indicator
To measure health indicators, this study uses the approach of the number of doctors per 1,000 population and the number of beds in hospitals per 1,000,000 residents. Here, the number of doctors is an indicator that is widely used to represent the development of health services. Meanwhile, the number of beds in the hospital, although it does not appear to be directly related to regional development, can certainly be closely related to the level of welfare of an area.
As with the independent variables, this research takes the view that regional welfare should not be measured by a single indicator but by various indicators representing different aspects; this is considered more persuasive for examining the various aspects of community welfare in the Special Region of Yogyakarta than compressing them into one composite index.
Quantitative descriptive
The analytical method used in this research is quantitative descriptive analysis. This method aims to provide an overview of, examine and empirically test the theory concerning the independent variables that affect a dependent variable. The descriptive analysis is prepared on the basis of secondary data, literature, journals, papers, articles and previous research results related to the problem under study. The quantitative analysis is carried out through econometric modeling, which is interpreted statistically. Data processing is done using the EViews program.
Data Types, Data Collection Techniques and Data Sources
The data used in this study are panel data, which combine time series data and cross-section data from the 5 districts/cities in the Special Region of Yogyakarta, and are used to construct composite measures of the level of community welfare. Data collection is carried out through library research from various sources to obtain a factual description, starting with a literature review and a review of related research results, so that a clear and comprehensive picture of the object of analysis is obtained.
Research Model
The model used in this study refers to the model developed by Hong (2011), with some adjustments in defining the independent variables used to measure the degree of decentralization and the dependent variables describing people's welfare.
Mathematically, 6 (six) models are used in this study, formulated as follows (a general form is sketched after this list):
1) To see the effect of decentralization policy on community welfare in the economy in the Special Region of Yogyakarta, Model 1 is used.
2) To see the effect of decentralization policy on community welfare in infrastructure, Model 2 is used.
3) To see the effect of decentralization policy on community welfare in the field of education, Model 3 and Model 4 are used.
4) To see the effect of decentralization policy on community welfare in the field of health, Model 5 and Model 6 are used.
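The equations for Models 1-6 do not survive in the extracted text. As a hedged reconstruction (not the authors' exact notation), each model can plausibly be written in the general panel form below, where the dependent variable is the relevant welfare indicator defined in the variable descriptions above.

```latex
% Hedged reconstruction of the general form assumed for Models 1-6
W_{it} = \beta_0 + \beta_1\,FD_{it} + \beta_2\,FuD_{it} + \beta_3\,PD_{it}
       + \beta_4\,INV_{it} + \beta_5\,LAB_{it} + \varepsilon_{it}
```

Here W_it denotes the welfare indicator of district/city i in year t (GRDP per capita, road length per capita, schools per population, student-teacher ratio, doctors per 1,000 population, or hospital beds per 1,000,000 population); FD, FuD and PD are the fiscal, functional and personnel decentralization measures; and INV and LAB are the investment and labor control variables.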
Estimation Techniques, Model Evaluations and Statistical Test Criteria
In this study three estimation techniques are used, namely the PLS (Panel Least Square) method, also known as the Common Effect model, the Fixed Effect model, and the Random Effect model. The suitability of the models is then tested using the Chow test, the Hausman test and the Lagrange Multiplier (LM) test. Meanwhile, the statistical criterion tests include estimation of the coefficient of determination (R²), partial significance tests with the t test and an overall regression coefficient test with the F test.
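A minimal sketch of the pooled (common effect) versus fixed effects comparison described above is given below. It is not the authors' EViews workflow; it uses a tiny synthetic panel (5 districts over 6 years, with made-up variable names and values) and a within (entity-demeaning) transformation for the fixed effects estimate.

```python
# Sketch only: pooled OLS vs. fixed effects (within transformation) on a
# hypothetical 5-district x 6-year panel. Data and column names are made up.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "district": np.repeat(np.arange(5), 6),
    "year": np.tile(np.arange(2014, 2020), 5),
    "fiscal_dec": rng.uniform(0.05, 0.40, 30),
    "investment": rng.uniform(0.10, 0.50, 30),
    "labor": rng.uniform(0.60, 0.95, 30),
})
df["grdp_pc"] = 10 + 2 * df["investment"] + 3 * df["labor"] + rng.normal(0, 0.5, 30)

X_cols = ["fiscal_dec", "investment", "labor"]

# Pooled OLS (common effect model)
pooled = sm.OLS(df["grdp_pc"], sm.add_constant(df[X_cols])).fit()

# Fixed effects via within transformation: demean every variable by district
demeaned = df.groupby("district")[X_cols + ["grdp_pc"]].transform(lambda s: s - s.mean())
fixed = sm.OLS(demeaned["grdp_pc"], demeaned[X_cols]).fit()

print("Pooled coefficients:\n", pooled.params)
print("Fixed-effects coefficients:\n", fixed.params)
```

In practice the choice between these estimators (and the random effects estimator) would then be made with the Chow, Hausman and LM tests named above.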
1) Decentralization Indicators
In related studies, the variables most often used to measure fiscal decentralization in an area are expenditure and revenue. Ebel and Yilmaz (2002) state that the selection of indicators to measure the degree of decentralization varies from one country to another (Note 16). That is, although both government expenditure and revenue variables are used, the specific measures can vary.
Previous studies have used highly varied measures of the degree of fiscal decentralization. On the revenue side, a number of approaches have been used, for example the share of Regional Budget Revenue (Elhiraika, 2007) (Note 17); the percentage of local tax revenue divided by total national tax revenue (Hong, 2011) (Note 18); and the shares of Regional Original Revenue, revenue sharing and balancing funds in total regional revenue. In fact, the use of revenue and expenditure indicators as approaches for measuring the degree of fiscal decentralization has weaknesses. The problem with expenditure-based measures is that local governments usually do not have a real degree of autonomy but act on behalf of the regional and central government. There are also problems on the revenue side, since the share of own-source revenue in total regional revenue and the share of transferred funds in total revenue may reflect not the municipal ability to raise and assign taxes but the consequences of the revenue-sharing policies of higher levels of government (Note 21). Nevertheless, in this study the revenue approach will still be used as an approach for measuring the degree of fiscal decentralization. The variable used is the share of Original Regional Revenue in total regional revenue, as used in the research of Jin and Heng-fu Zou (2000) (Note 22). The use of this variable is based on the justification that Regional Original Revenue is a measure of regional independence: it reflects "sufficient" local taxing power as a necessary condition for the realization of broad regional autonomy, because local taxes and local user fees are its main sources.
This study will also use a variable from the expenditure side, namely the ratio of regional expenditure to national expenditure; this variable was also used by Hong (2011) (Note 27). In a more comprehensive framework, this study adopts the approach of Hong (2011) (Note 28), which, in addition to the fiscal decentralization dimension, also uses the functional and personnel decentralization dimensions.
2) Community Welfare Indicator
People's welfare is often approximated using per capita Gross Domestic Product (GDP). In recent years, however, the use of GDP per capita as the standard measure of societal welfare has invited debate in various circles, because the numbers are often out of step with the "well-being" actually experienced in society. Several previous studies have therefore tried to use a more comprehensive approach to describe the welfare of society.
This problem subsequently led French President Nicolas Sarkozy to call for measures of economic performance and social progress that go beyond GDP.
The latest trend interprets the concept of community welfare in an area, in a broad sense, as a condition in which a good quality of life, or the adequacy of basic human needs, is achieved. Liu, in Hong (2011), is an example of a study that uses a quality-of-life approach to measure the level of regional development (Note 30).
Here, Liu builds five dimensions of quality of life such as economic, political, environmental, health and education, and social, and then selects 123 indicators for the five dimensions (Note 31).
Furthermore, the definition of public welfare in this study follows Hong (2011), which does not use the GRDP per capita indicator as the only indicator of public welfare but also includes several dimensions related to basic public services such as education, health and infrastructure (Note 32). This positions the present research, among similar studies, as one that uses a fairly comprehensive approach.
Estimation Results
There are three techniques for estimating panel data regression models, namely the Common Effect Model or Pooled Least Squares (PLS), the Fixed Effect model and the Random Effect model. In this study, all three estimation techniques are applied and then tested to determine which is most appropriate. The three estimation results are presented below.
Estimation with the Fixed Effects Approach
Differences in characteristics across the district/city units of analysis must be accommodated in the model. One way is to assume that the intercepts differ between district/city units in the Special Region of Yogyakarta while the slopes remain the same. The model that assumes such intercept differences is known as the Fixed Effects regression model.
Estimation with the Random Effect Approach
In this approach, the residuals are assumed to be possibly correlated across time and across individuals, and the intercepts are treated as random (stochastic) variables. The appropriate method for estimating the Random Effect model is Generalized Least Squares (GLS). The results of the Random Effect regression are shown in Table 1.C.
After estimating with the three approaches above, the most appropriate approach is determined. For this purpose, this research uses 3 (three) tests, namely the Chow test, the Hausman test and the Lagrange Multiplier (LM) test.
Analysis Results
Based on the test results shown in Table 2, the subsequent analysis refers to Table 3 for ease of exposition. Note: * p < 0.1; ** p < 0.05; and *** p < 0.01. Models 1, 3 and 5 use Random Effects, Models 2 and 6 use Fixed Effects, and Model 4 uses OLS/Common Effects.
Source: Estimated Results.
Model Analysis 1
From Table 3, it can be seen that there is no direct effect of the degree of decentralization on the level of social welfare in the economy. This can be seen from the absence of any decentralization variable that significantly affects the economic variable of the Special Region of Yogyakarta, represented by Gross Regional Domestic Product per capita. Regarding the control variables, both the investment and labor variables have coefficients that are statistically significant in the t test at α = 1% and carry a positive sign, meaning that changes in these two control variables have a positive influence on the economy of the Special Region of Yogyakarta.
Although there is no direct influence of the three decentralization variables on economic growth, there is an indication of an indirect effect through the investment channel, where high investment in the region reflects, in part, the performance of local governments in attracting investment. Furthermore, the coefficient of determination (R²) of 0.98258 means that Model 1 is able to explain 98.26% of the variation in economic growth; however, this explanatory power does not originate from the decentralization variables.
Model Analysis 2
From Table 3 it can also be seen that only the personnel decentralization variable has a coefficient that is significant in the t test at α = 5%. This variable is negative, meaning that a one-unit increase in the ratio of Civil Servants in the Special Region of Yogyakarta to central Civil Servants is associated with a decrease in the ratio of good-quality road length per population. This is most likely due to the large number of Civil Servants recruited by the regional government each year since the decentralization policy took effect. As a result, the routine budget allocated for salaries and benefits of regional civil servants has swelled, while the road infrastructure development budget has been constrained. On the other hand, the rate of population growth in the Special Region of Yogyakarta is relatively high, so with little road construction carried out in the last 5 years, the ratio of good-quality road length per population has become smaller. Furthermore, given the very small coefficient of determination (R²) of 0.047097, Model 2 is only able to explain about 5% of the variation in the growth of good-quality road infrastructure; around 95% of the variation is influenced by other variables not included in this model.
Model Analysis 3
Table 3 also shows that two decentralization variables have a significant influence on community accessibility in the upper secondary education sector, represented by the number of senior high school buildings per population. The two variables are the functional decentralization variable, approximated by the ratio of local government expenditure to total national government expenditure, and the personnel decentralization variable, represented by the ratio of regional Civil Servants to central Civil Servants. Both variables are significant at the 95% confidence level (α = 5%), but they have opposite effects. The functional decentralization variable is negative, meaning that a one-unit increase in the ratio of local government expenditure to total national government expenditure is associated with a reduction in the ratio of the number of senior high schools per population.
The variable decentralization of personnel is positive, meaning that an increase in the ratio of the regional Civil Servants to the central Civil Servants by 1 unit will affect the increase in the ratio of the number of schools per population. This can be explained as follows: in the era of decentralization, many local governments undertook the recruitment of Civil Servants, including the recruitment of teaching staff or teachers at the High School Level. As a result of the large number of teaching staff / teachers, the government then built educational facilities in the form of school buildings, so the ratio of the number of schools per population increased.
Meanwhile, the two control variables, the investment and labor variables, are each positive and statistically significant in the t test at α = 1% and α = 5%, respectively. This means that changes in these two variables influence the ratio of the number of schools per population; in particular, increased investment also includes investment in the education sector, which in turn raises the demand for teaching staff at senior high schools.
The coefficient of determination of 0.347784 means the model is able to explain 34.78% of the variation in the ratio of the number of schools per population.
Model Analysis 4
The most suitable approach for Model 4 is the PLS or Common Effect method. As seen in Table 1.B, none of the decentralization variables has a significant effect on the ratio of students to teachers in senior secondary schools; only the labor variable, which is a control variable in this study, is significant at α = 1%. As in the earlier models, the insignificance of the decentralization variables here may be because local governments have focused more on developing elementary and junior high school education, which form part of the 9-year compulsory education program. Teachers are still recruited for senior high schools, but not in the same numbers as for the elementary and junior high school levels. Furthermore, given the very small coefficient of determination (R²) of 0.004175, Model 4 is only able to explain 0.4% of the variation in the student-teacher ratio at the senior high school level; around 99.6% of the variation is influenced by other variables not included in this model.
Model Analysis 5
In Model 5, the decentralization variables were found to have no significant effect on changes in the ratio of the number of doctors per 1000 population. Nevertheless, given the coefficient of determination (R²) of 0.796001, the model as a whole is able to explain 79.60% of the variation in the ratio of doctors per 1000 population, even though the individual variables are not significant.
This means that only about 20.40% of the variation in the ratio of doctors per 1000 population is influenced by other variables not included in this model.
Model Analysis 6
For Model 6, the personnel decentralization variable is positive and statistically significant in the t test at α = 1%. This means that a one-unit increase in the ratio of regional Civil Servants to central Civil Servants is associated with an increase in the ratio of the number of hospital beds per 1,000,000 population. Furthermore, given the coefficient of determination (R²) of 0.305114, Model 6 is only able to explain 30.51% of the variation in the ratio of hospital beds per 1,000,000 population.
Conclusions
Based on the results of the analysis, the following conclusions can be drawn: 1) In the economic field, the decentralization variables show no direct effect, whether in terms of fiscal, functional or personnel decentralization. This can be seen from the absence of any decentralization coefficient that significantly influences the regional economic variable represented by Gross Regional Domestic Product per capita. The determinants that significantly affect economic growth in the region are the control variables, namely the investment and labor variables, each of which is statistically significant in the t test at α = 1% and carries a positive sign. This means that, under ceteris paribus conditions, a one-unit increase in the ratio of investment to Gross Regional Domestic Product, or in the ratio of employed workers to the labor force, raises Gross Regional Domestic Product per capita by the corresponding coefficient value shown in Model 1.
2) In the infrastructure sector, only the personnel decentralization variable has an influence on the road infrastructure variable, and the effect is negative: a one-unit increase in the ratio of regional Civil Servants to central Civil Servants is associated with a decrease in the ratio of good-quality road length by the coefficient value. This is allegedly caused by budget reallocation: since decentralization, the regions have recruited civil servants every year, so the number of regional Civil Servants has increased and the routine budget required for their salaries and benefits has swelled. This, in turn, has implications for the decreasing budget for road infrastructure development.
3) In the field of education, two decentralization variables have a significant influence on community accessibility to secondary education and above, namely the functional decentralization variable and the personnel decentralization variable, and their effects run in opposite directions. The functional decentralization variable is negative: an increase in the ratio of local government expenditure to total national government expenditure reduces the ratio of the number of schools per population. The personnel decentralization variable is positive: a one-unit increase in the ratio of regional Civil Servants to central Civil Servants raises the ratio of the number of schools per population by its coefficient. In addition to Model 3, Model 4 also examines the effect of decentralization in the education sector from the perspective of education quality, represented by the student-teacher ratio. In Model 4, no decentralization variable was found to have a significant effect on the student-teacher ratio at the senior secondary level; only the labor variable, which is a control variable in this study, is significant at α = 1%.
4)
In the health sector, the decentralization variables were found to have no significant effect on changes in the ratio of the number of doctors per 1000 population. However, in Model 6, with the ratio of hospital beds per 1,000,000 population as the dependent variable, the personnel decentralization variable is significant and positive. This means that a one-unit increase in the ratio of regional Civil Servants to central Civil Servants raises the ratio of the number of hospital beds per 1,000,000 population. Furthermore, regarding the channels through which decentralization can improve people's welfare, it can be concluded that the functional and personnel decentralization variables are the channels that contribute to community welfare; however, the performance of these two channels must be improved in the future.
Suggestions
Based on the conclusions above, several suggestions are formulated to direct the district/city governments in the Special Region of Yogyakarta to maximize their role in improving the functioning of the decentralization variables, whether fiscal, functional or personnel. Some steps that can be taken are:
1)
Maximizing the role of fiscal decentralization by optimizing the performance of revenue instruments that are able to drive an increase in regional income, for example through optimizing regional taxes and charges which are one of the main components in Regional Original Revenue.
2)
Optimizing the role of functional decentralization, through budget allocation policies that are adjusted to the regional vision and mission to improve public welfare and services.
3)
Increasing the role of personnel decentralization through efforts to increase the capacity of Human Resources, namely Civil Servants.
Down-Regulation of CTLA-4 by HIV-1 Nef Protein
HIV-1 Nef protein down-regulates several cell surface receptors through its interference with the cell sorting and trafficking machinery. Here we demonstrate for the first time the ability of Nef to down-regulate cell surface expression of the negative immune modulator CTLA-4. Down-regulation of CTLA-4 required the Nef motifs DD175, EE155 and LL165, all known to be involved in vesicle trafficking. Disruption of the lysosomal functions by pH-neutralizing agents prevented CTLA-4 down-regulation by Nef, demonstrating the implication of the endosomal/lysosomal compartments in this process. Confocal microscopy experiments visualized the co-localization between Nef and CTLA-4 in the early and recycling endosomes but not at the cell surface. Overall, our results provide a novel mechanism by which HIV-1 Nef interferes with the surface expression of the negative regulator of T cell activation CTLA-4. Down-regulation of CTLA-4 may contribute to the mechanisms by which HIV-1 sustains T cell activation, a critical step in viral replication and dissemination.
Introduction
HIV-1 regulatory viral proteins such as Nef, Vif, Vpr and Vpu create a cellular environment that is favorable for viral replication and dissemination [1]. Of particular importance, Nef plays a critical role in modulating the cellular microenvironment required for efficient viral replication by down-regulating multiple cell surface molecules through its interference with the intracellular sorting machinery [2,3]. Nef-mediated down-regulation of CD4, the major HIV receptor, prevents superinfection during the early and late stages of infection as well as formation of the viral Env protein/CD4 oligomers during the budding process [4,5,6,7,8,9]. Nef also down-regulates MHC class I molecules in infected cells, likely preventing their killing by cytotoxic CD8 T cells [10]. Expression of Nef enhances HIV-1 production by interacting with PI3k and p21-activated kinase2 (PAK2) [11,12]. In addition, Nef is known to modulate several pathways of cell signaling and protects infected cells from apoptosis through the phosphorylation and inactivation of Bad, a proapoptotic member of the Bcl-2 protein family [13]. Moreover, the presence of Nef alters T cell activation through its interaction with the T cell-specific tyrosine kinase Lck via a conserved proline-rich repeat sequence {(PxxP)4} [14,15,16].
Nef has also been reported to play a critical role in the early activation of infected cells by sensitizing the TCR to stimulation, thereby promoting secretion of the major T cell growth factor IL-2 and HIV replication [17,18]. However, stimulation of T cells via TCR and CD28 leads to the up-regulation of molecules such as CTLA-4, which are known to negatively regulate cell activation [19] and potentially HIV replication. CTLA-4 is a cell surface protein that interacts with its ligands CD80 (B7-1) and CD86 (B7-2) expressed on APCs and stops T cell activation and IL-2 production [20,21]. CTLA-4 is also essential for the suppressive functions of Tregs [22] and the induction of indoleamine 2,3-dioxygenase (IDO) in tolerogenic dendritic cells [23]. CTLA-4 is found mainly as an intracellular protein that resides in endocytic vesicles and secretory granules [24,25]. Surface expression of CTLA-4 is regulated by tyrosine motifs embedded within its cytoplasmic tail, which mediate CTLA-4 binding to the μ2 subunit of the adaptor sorting protein AP2. Following TCR stimulation, these tyrosine motifs become phosphorylated and prevent AP2-mediated CTLA-4 internalization, leading to CTLA-4 accumulation on the cell surface [26,27,28,29].
The mechanism(s) underlying sustained HIV-1 replication in activated T cells that express high levels of molecules such as CTLA-4 have yet to be elucidated. Here, we show that HIV-1 Nef protein down-regulates surface and total expression of CTLA-4 by targeting this negative molecule to lysosomal degradation.
Cells
The 293T and HeLa cell lines were obtained from ATCC. Cells were kept in DMEM medium with 10% FCS and penicillin/streptomycin (Gibco-Life Technologies) and maintained at 37°C and 5% CO₂.
Antibodies
For FACS analysis of 293T cells we used an anti-CD4 PE antibody (BD) and a biotinylated goat anti-CTLA-4 antibody from R&D Systems (used in combination with Streptavidin-APC). The anti-Nef and anti-CTLA-4 antibodies used in Western blot analysis were made in-house by injecting rabbits with the full-length proteins fused to GST. Both in-house anti-CTLA-4 and anti-Nef polyclonal antibodies recognized the purified GST-fused CTLA-4 and Nef proteins used to immunize the rabbits. These antibodies also reacted positively with CTLA-4- and Nef-transfected but not untransfected cells and recognized proteins of the expected molecular weights of 30-34 kD for CTLA-4 and 27 kD for Nef. The anti-human CD4 antibody for Western blotting was purchased from RDI and the mouse anti-β-actin antibody from Sigma. For confocal microscopy experiments, all secondary rabbit anti-mouse antibodies, anti-FLAG conjugated with Alexa 568, mouse anti-goat conjugated with Alexa 488, and transferrin labeled with Alexa 633 were purchased from Molecular Probes.
Nef and CTLA-4 Vectors
The CTLA-4 eukaryotic vector was constructed by inserting the PCR-amplified CTLA-4 coding sequence into the EcoRI and XbaI restriction sites of pCDNA3 (Invitrogen). The primers used were CTLA-4 5p/EcoRI: 5'-CATCGAATTCATGGCTTGCCTTGG-3' and CTLA-4 3p/XbaI: 5'-TGCTCTAGATCAATTGATGGG-3'. The CD4 eukaryotic expression vector was generated in our laboratory; the CD4 cDNA was cloned in the SR-alpha expression vector under the control of the CMV promoter. The Nef-FLAG expression vector used was the pCMVtag4A vector obtained from Invitrogen. Nef was cloned using 5' EcoRI and 3' HindIII restriction sites. The primers used were: NefCMV5p/EcoRI: 5'-CGGAATTCCGCCGCCAGGGATG-3' and NefCMV3p/HindIII: 5'-GCAAGCTTGCAGTT-3'. Nef wt and Nef mutant vectors used for 293T transfections, including the negative control, were generated by O.S. using pCMV-Nef [30].
Transient transfections
Transfections were performed using the calcium phosphate method [31]. Briefly, 10 million cells were plated in 100 mm plates in 10 ml of complete DMEM (10% FCS) 24 hours before transfection. On the day of transfection, the medium was replaced with 10 ml of fresh warm complete DMEM (10% FCS). For each condition, 15-45 µg of DNA was mixed with 500 µl of CaCl₂ (0.025 M). This was followed by the addition of a 1:1 mixture of BBS (BES-buffered solution), incubated at room temperature for 20 minutes and then added to the cells. BBS 2X was prepared with 50 mM N,N-bis(2-hydroxyethyl)-2-aminoethanesulfonic acid (BES; Calbiochem), 280 mM NaCl, and 1.5 mM Na₂HPO₄, pH 6.95. Cells were harvested 48 hours after transfection for either flow cytometry or Western blotting. We routinely monitored transfection efficacy by including an irrelevant GFP plasmid; the transfection efficiency was consistently between 80-90%. The transfection efficiency of singly transfected proteins was routinely monitored by determining protein expression levels by Western blot analysis and normalization to β-actin.
Biochemical analyses of Nef, CD4 and CTLA-4 proteins
Cells were lysed in lysis buffer containing 250 mM NaCl, 0.5% NP-40, 50 mM Hepes, 5 mM EDTA and protease inhibitors (Roche). Protein was then quantified using the BCA kit (Pierce). Cell lysates were run on a 12% SDS gel and transferred onto a polyvinylidene fluoride (PVDF) membrane for Western blot analysis. For anti-CTLA-4 and anti-Nef blots, the primary antibodies were the in-house rabbit polyclonal antibodies, followed by incubation with a secondary goat anti-rabbit antibody conjugated to horseradish peroxidase (HRP). Bands were visualized by ECL (Amersham) and quantified by densitometric analysis using GelEval 1.22 software on images generated from films. The relative intensity of each band was normalized against β-actin or GAPDH.
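The normalization step described above is simple arithmetic; a small sketch with made-up band intensities (not values from this study) is shown below.

```python
# Hypothetical densitometry values: normalize each band against the beta-actin
# loading control, then express the result as fold change vs. the control lane.
bands = {"control": 1200.0, "nef": 450.0}   # CTLA-4 band intensities (a.u.)
actin = {"control": 1000.0, "nef": 980.0}   # beta-actin band intensities (a.u.)

normalized = {lane: bands[lane] / actin[lane] for lane in bands}
fold_change = normalized["nef"] / normalized["control"]
print(normalized, f"fold change vs control = {fold_change:.2f}")
```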
Confocal Microscopy
Cells were plated at 50,000 cells per coverslip coated with poly-L-lysine (Sigma) in 6-well plates. Transient transfections were carried out on these coverslips using the pCDNA3/CTLA-4 and/or pCMV/Nef-FLAG vectors. Forty-eight hours after transfection, cells on coverslips were washed with PBS containing 1 mM MgCl₂ and fixed with 4% paraformaldehyde (PFA) for 15 min at room temperature. Cells were then washed with PBS and permeabilized with PBS containing 0.2% Triton X-100 for 10 min at room temperature. After washing, cells were incubated with the primary and secondary antibodies as indicated in the figure legends. For CTLA-4 staining, 5 µl of anti-CTLA-4 antibody and a 1/50 dilution of secondary antibody were used. For anti-FLAG, 1 µg of primary antibody and a 1/50 dilution of secondary antibody were used. Slides were then mounted and images were taken with an LSM 510 META laser scanning confocal microscope (Zeiss) and a 63× Plan-Apochromat objective with a numerical aperture of 1.4. Images were analyzed using ImageJ software (National Institutes of Health).
Flow cytometry
To monitor CTLA-4 expression on 293T cells transfected with CTLA-4, cells were stained with a biotinylated goat anti-CTLA-4 antibody for 30 min at 4°C, washed twice with PBS-FBS (2%) and stained with Streptavidin-APC for 20 min at 4°C. For CD4 expression, cells were stained with an anti-CD4 PE antibody for 30 min at 4°C. Stained cells were then washed twice with PBS-FBS (2%), fixed with 2% paraformaldehyde (in PBS) and analyzed on a BD LSRII flow cytometer.
Statistical Analysis
All p values were calculated by paired t test (two tailed). Pearson's correlation coefficient (Rr) was used to determine the colocalization, segregation or lack of correlation for CTLA-4 and Nef. Pearson's correlation coefficient was obtained for each individual cell. In Pearson's correlation, the average pixel intensity values are subtracted from the original intensity values. As a result, the value of this coefficient ranges from -1 to 1, with a value of -1 representing a total lack of overlap between pixels from the images; a value of 1 indicates perfect image registration [32]. A single sample t test using GraphPad Prism software was performed to establish whether the correlation coefficient mean value was significantly different from zero [33].
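A minimal sketch of this per-cell colocalization analysis is shown below. The pixel intensities are random placeholders, the number of cells is illustrative, and scipy is used here in place of the Image J/GraphPad Prism workflow described above.

```python
# Sketch only: per-cell Pearson correlation between two channels, followed by a
# one-sample t test of the coefficients against zero. Data are random placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cell_coefficients = []
for _ in range(16):                               # e.g. 16 analyzed cells
    ctla4 = rng.random(5000)                      # CTLA-4 channel pixel intensities
    nef = 0.6 * ctla4 + 0.4 * rng.random(5000)    # partially co-varying Nef channel
    r, _p = stats.pearsonr(ctla4, nef)            # per-cell Pearson coefficient
    cell_coefficients.append(r)

t_stat, p_value = stats.ttest_1samp(cell_coefficients, popmean=0.0)
print(f"mean r = {np.mean(cell_coefficients):.2f}, one-sample t test p = {p_value:.3g}")
```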
HIV-1 Nef protein down-regulates CTLA-4
To assess the role of HIV-1 Nef protein in down-regulating CTLA-4 expression, we established a transient transfection system to co-express human CTLA-4 and HIV-1 Nef (Nef wt) in 293T cells. A plasmid encompassing Nef in its reverse orientation (Nef neg) was used as a negative control. Expression of Nef protein in CTLA-4-expressing 293T cells reduced CTLA-4 surface levels by 57-77% (n = 5) compared to cells transfected with the Nef neg plasmid (Figure 1a, left panels). Similarly, co-expression of Nef and CD4 in 293T cells led, as expected, to significant down-regulation of CD4 surface levels (80%) (Figure 1b, left panels). Nef-mediated down-regulation of both CTLA-4 and CD4 was also established by co-transfecting 293T cells with a GFP-reporter Nef construct, followed by gating on GFP+ cells and determining the expression levels of CTLA-4 or CD4 (data not shown). Western blot analysis of total cell lysates was then used to assess whether CTLA-4 and CD4 down-regulation by Nef was mediated by protein degradation versus accelerated internalization and/or intracellular retention. As shown in Figure 1a&b (right panels), expression of Nef significantly decreased the total pools of both CTLA-4 and CD4 proteins, most likely by mediating protein degradation (the two forms of CTLA-4 on the CTLA-4 blot, Figure 1a right panel, correspond to the membrane (30 kD) and cytosolic (34 kD) moieties [34]).
To test the hypothesis that Nef interacts with the cytoplasmic tail of CTLA-4, leading to Nef-mediated down-regulation of CTLA-4, we generated several CTLA-4 mutants known to affect CTLA-4 cellular localization, trafficking and degradation. The CTLA-4 Y201A and CTLA-4 Y218G mutants are unable to bind to the adaptor protein-2 (AP-2) and are expressed primarily on the cell surface and, to a much lesser extent, in intracellular compartments [35,36]. These tyrosine motifs of the CTLA-4 cytoplasmic tail were targeted by site-directed mutagenesis to generate Y201A, Y218G and the double tyrosine mutant Y201A Y218G. Another potential sorting motif, the dileucine motif LL181 of the CTLA-4 cytoplasmic tail, was mutated to generate LL181AA. We also generated a construct encompassing the CTLA-4 molecule deleted of its cytoplasmic tail (CTLA-4ΔCT). Mutating the tyrosine motifs or deleting the cytoplasmic tail resulted in the expected accumulation of CTLA-4 on the cell surface, due to the lack of internalization signals [26], whereas the LL181AA mutation decreased CTLA-4 surface expression (Figure 2a). To determine the ability of Nef to down-regulate these mutated forms of CTLA-4, we co-transfected 293T cells with Nef wt and the CTLA-4 mutants. Our data shown in Figure 2 (b-d) demonstrate that neither the tyrosine sorting motif mutants (CTLA-4 Y201A, CTLA-4 Y218G or CTLA-4 Y201AY218G), nor the mutated leucine motif (CTLA-4 LL181AA), nor the deleted cytoplasmic tail construct (CTLA-4ΔCT) was able to rescue CTLA-4 expression upon co-transfection with Nef wt, as shown by FACS (Figure 2b&c) or by Western blotting (Figure 2d). There were no significant differences in the levels of CTLA-4 down-regulation between the CTLA-4 wt molecule and the different CTLA-4 mutants. Of note, the CTLA-4ΔCT mutant lost reactivity to the polyclonal antibody used for Western blotting while retaining reactivity with the antibodies used for FACS analysis. These experiments show that increasing the surface levels of CTLA-4 by deleting or mutating the cytoplasmic tail did not affect the capacity of Nef to down-regulate this molecule, thereby demonstrating that Nef does not require the CTLA-4 cytoplasmic tail to exert its effect. Our data with CTLA-4ΔCT suggest that Nef down-regulates CTLA-4 expression mostly by interfering with CTLA-4 localization in intracellular compartments before it reaches the plasma membrane, rather than by causing CTLA-4 internalization.
Nef-mediated CTLA-4 down-regulation requires motifs in Nef involved in the interaction with the vacuolar ATPase, β-COP and the AP-1 sorting complexes
CTLA-4 traffics rapidly between intracellular vesicles and the plasma membrane, which accounts for its transiently low levels of cell surface expression [37]. We tested a set of Nef mutants known to interfere with protein trafficking in intracellular compartments [38]. The DD175 motif is required for the interaction of Nef with the v-ATPase, an enzyme present in intracellular vesicles that plays an important role in maintaining the acidic pH of vesicular compartments, an essential component of the intracellular proteolytic machinery [39]. The EE155 motif is required for the interaction of Nef with the coatomer subunit β-COP, known to be involved in the transport of proteins from the ER to the cis-Golgi compartment [40]; this motif affects the late stages of Nef-dependent CD4 down-regulation [7]. The dileucine motif LL165 mediates the interaction of Nef with the sorting adaptor protein AP-1 [6]. To investigate the involvement of these pathways in the down-modulation of CTLA-4 by Nef, we co-transfected CTLA-4 and the Nef mutant constructs listed above into 293T cells. Western blot analysis of total lysates from these cells (Figure 3a) showed that these Nef mutants, DD175GA, LL165GG and EE155GG, had a significantly lower ability to mediate CTLA-4 degradation compared to Nef wt (p = 0.017, 0.04 and 0.02, respectively; Figure 3b, n = 3). However, among the three Nef mutants, LL165GG retained a higher capacity to down-regulate CTLA-4 compared to DD175GA and EE155GG (57% vs 22% and 26% CTLA-4 down-regulation, respectively).
Altogether, these data indicated that Nef down-modulates CTLA-4 through a mechanism that requires the interaction of Nef with the sorting adaptor protein AP-1, the v-ATPase and the β-COP subunit. This also suggested that the cellular compartments where the interaction between CTLA-4 and Nef takes place are likely to be endosomes and lysosomes, where the v-ATPase is active [41].
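The densitometric read-out used throughout these experiments (band intensity normalized to β-actin and expressed relative to the Nef-negative control) amounts to a simple ratio calculation. The sketch below illustrates that arithmetic with made-up intensity values; it is not the measured data and not the authors' analysis software.

```python
# Minimal sketch of the densitometric normalization described above:
# band intensities are divided by the beta-actin loading control and
# expressed as a percentage of the Nef-negative control.
# The numbers below are illustrative placeholders, not measured data.

ctla4 = {"Nef_neg": 1.00, "Nef_wt": 0.18, "LL165GG": 0.57, "DD175GA": 0.78, "EE155GG": 0.74}
actin = {"Nef_neg": 1.00, "Nef_wt": 0.95, "LL165GG": 1.02, "DD175GA": 0.98, "EE155GG": 1.01}

normalized = {k: ctla4[k] / actin[k] for k in ctla4}            # correct for loading
reference = normalized["Nef_neg"]                               # steady-state CTLA-4
percent_of_control = {k: 100 * v / reference for k, v in normalized.items()}

for condition, pct in percent_of_control.items():
    print(f"{condition:8s} {pct:5.1f}% of Nef-neg CTLA-4")
```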
Lysosomal function is required for Nef-induced down-regulation of CTLA-4

Results in Figure 3 suggested that Nef-induced CTLA-4 down-regulation was likely mediated by protein degradation in lysosomes, since mutation of the v-ATPase-binding motif of Nef (DD175GA) rescued CTLA-4 expression. To confirm the role of acidic compartments in Nef-induced CTLA-4 down-regulation, we investigated the impact of Nef on CTLA-4 expression in the presence of Concanamycin A, a specific inhibitor of the vacuolar ATPase [42]. Similar experiments were performed in the presence of the weak-base amine ammonium chloride (NH4Cl), known to inhibit the lysosomal machinery by increasing the pH of endocytic and lysosomal vesicles [43,44,45]. 293T cells co-transfected with CTLA-4 (or CD4) and Nef wt were treated with increasing concentrations of Concanamycin A or NH4Cl. Treatment of 293T cells expressing Nef wt and CTLA-4 with 1, 3, 5, 10 and 20 nM Concanamycin A increased, in a dose-dependent manner, the levels of CTLA-4 expression as measured by the frequency of CTLA-4+ cells, from 44% to 72% (Figure 4a, lower panels & 4b, left panel). In contrast, treatment of 293T cells co-transfected with CD4 and Nef wt with Concanamycin A did not rescue CD4 surface expression (Figure 4b, right panel). However, at the higher concentration of Concanamycin A (20 nM), increased expression of CD4 protein was observed only by biochemical analysis of total cell lysates (Figure 4c), suggesting the accumulation of CD4 in intracellular compartments, consistent with earlier observations by other groups [46]. Treatment of 293T cells expressing Nef wt and CTLA-4 with the lysosomal inhibitor ammonium chloride also increased the total pool of CTLA-4 by 2.7-fold (Figure 4d for Western blotting and 4e for surface expression by FACS analysis). Of note, we also observed a dose-dependent accumulation of the total molecular pool of CTLA-4 in cells co-transfected with the Nef neg vector and treated with either Concanamycin A (Figure 4a, upper panels) or NH4Cl (Figure 4d&e), most likely resulting from the inhibition of steady-state degradation of CTLA-4. As expected, the expression of both CTLA-4 and CD4 was down-regulated in this transfection system in the absence of both inhibitors (Figure 4a-e).
Altogether, our results showed that CTLA-4 is down-regulated by lysosomal degradation, as previously shown for CD4. However, we provide evidence herein that significant differences exist in the mechanisms leading to the down-regulation of the two molecules, as CTLA-4 re-circulates to the cell surface following treatment with lysosomal inhibitors whereas CD4 does not and likely accumulates in intracellular compartments.
Nef and CTLA-4 co-localize in early and recycling endosomes
The results described above suggested that Nef and CTLA-4 most likely co-localize in intracellular compartments where protein degradation typically occurs, including lysosomes. In order to confirm this hypothesis, we monitored the intracellular localization of both proteins by confocal microscopy in transiently transfected HeLa cells. The transferrin receptor (Trf) was used as a marker for early and recycling endosomes [47], the compartment most likely involved in the initial Nef-CTLA-4 interaction. CTLA-4 is known to reside in intracellular endocytic compartments prior to its translocation to the plasma membrane [48]. Confocal microscopy analysis confirmed the presence of CTLA-4 in specific intracellular granules (endosomes) where the Trf, a marker of early and recycling endosomes [47], is expressed (Figure 5a). As expected from our mutagenesis studies, these Trf+ early and recycling endosomes showed significant co-localization between Nef and CTLA-4. The scatter diagram shows the intensity of fluorescence for CTLA-4 (Y axis) and Nef (X axis) in one representative cell (Figure 5b). The white granules (Figure 5a, lower panels, last picture) depict the co-localization of Nef, CTLA-4 and Trf. Co-localization of CTLA-4 and Nef was quantified in multiple independent cells (Figure 5c) (n = 16, Pearson coefficient value of 0.6, p < 0.0001).
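The co-localization quantification reported in Figure 5c is a pixel-wise Pearson correlation between the two fluorescence channels. A minimal sketch of that calculation is shown below; the synthetic intensity arrays stand in for the real confocal images and are not the study's data.

```python
import numpy as np

# Minimal sketch of pixel-wise co-localization analysis: Pearson's
# correlation between the CTLA-4 (green) and Nef (red) channel
# intensities of one cell. Synthetic arrays stand in for real images.
rng = np.random.default_rng(0)
ctla4_channel = rng.gamma(shape=2.0, scale=50.0, size=(256, 256))
nef_channel = 0.6 * ctla4_channel + rng.normal(0, 20, size=(256, 256))

def pearson_colocalization(ch1, ch2):
    x, y = ch1.ravel(), ch2.ravel()
    x = x - x.mean()
    y = y - y.mean()
    return float((x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum()))

rr = pearson_colocalization(ctla4_channel, nef_channel)
print(f"Pearson co-localization coefficient (Rr): {rr:.2f}")
```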
Previous results (Figure 2), showing that deletion of the cytoplasmic tail of CTLA-4 did not affect the capacity of Nef to down-regulate CTLA-4, indicated that the interaction between the two molecules occurs mostly in intracellular compartments. This was confirmed by confocal microscopy visualization, since the co-localization between CTLA-4 and Nef occurred mostly in intracellular compartments and to a lesser extent at the plasma membrane (yellow granules that indicate Nef (red) and CTLA-4 (green) interaction are primarily present in intracellular compartments). This also differs from what was described for CD4, as Nef was previously shown to interfere with CD4 expression at the plasma membrane [49]. Together, these results showed that Nef co-localizes with CTLA-4 in intracellular acidic compartments, most likely lysosomes, where accelerated degradation of CTLA-4 occurs.
Discussion
In this study we provide a first line of evidence that HIV Nef down-regulates CTLA-4 expression at the cell surface by interacting with CTLA-4 in intracellular compartments, including endosomes and lysosomes. The down-regulation of CTLA-4 by Nef early in acute infection, when most activated T cells express CTLA-4 at the cell surface [50,51], could provide an important mechanism to circumvent the physiological effects of CTLA-4, i.e., inhibition of T cell activation and HIV replication, and to allow viral dissemination.
Figure 2 legend (continued): higher surface expression after transfection with the CTLA-4 Y201AY218G and CTLA-4ΔCT mutants and lower expression after transfection with CTLA-4 LL181AA. Black solid histograms, isotype controls; grey solid histograms, CTLA-4 wt; black empty histograms, CTLA-4 mutants; numbers in inset, MFI of CTLA-4 mutant/CTLA-4 wt. b) Surface expression of the CTLA-4 mutants with (empty histograms) or without (filled histograms) Nef expression, measured by flow cytometry; numbers in inset, MFI of CTLA-4 under Nef wt/Nef neg co-expression. c) MFI of surface CTLA-4 under Nef neg or Nef wt co-expression (mean of three independent experiments); p values compare CTLA-4 MFI under Nef wt versus Nef neg co-transfection. d) Western blot of total CTLA-4 wt and mutant expression after co-transfection with Nef neg (−) or Nef wt (+) in 293T cells. * refers to the corresponding MW on the SDS gel. doi:10.1371/journal.pone.0054295.g002

Figure 3 legend: CTLA-4 down-regulation by Nef requires motifs in Nef involved in the interaction with the vacuolar ATPase, β-COP and the AP-1 sorting complexes. a) Total CTLA-4 expression in 293T cells after co-transfection with Nef wt or Nef mutants by Western blot; numbers above the Nef blot are densitometric units normalized to β-actin, showing similar expression of the different Nef constructs. b) Densitometric results, normalized to β-actin, presented as mean percentages of CTLA-4 expression after co-transfection with Nef wt or Nef mutants and normalization to the negative control (Nef neg), from 3 independent experiments; p values compare each Nef mutant to Nef wt. * refers to the corresponding MW on the SDS gel. doi:10.1371/journal.pone.0054295.g003

Figure 4 legend (continued): black histograms, CTLA-4 with either Nef neg (upper panels) or Nef wt (lower panels) co-transfection. b) Percentage of CTLA-4+ cells (left panel) and CD4+ cells (right panel) at different concentrations of Concanamycin A (0-20 nM); solid lines, Nef neg; dotted lines, Nef wt. c) Rescue of total CD4 protein by Concanamycin A treatment (1-20 nM) in 293T cells co-transfected with CD4 and Nef neg or Nef wt; numbers on top of the CD4 blot are arbitrary densitometric units normalized to β-actin. d) Western blot of total lysates from 293T cells co-transfected with CTLA-4 and either Nef neg or Nef wt and treated with increasing concentrations of NH4Cl (0-20 mM) (n = 2); numbers are arbitrary units for CTLA-4 expression normalized to β-actin. e) FACS analysis of CTLA-4 surface levels under Nef neg (upper panels) and Nef wt (lower panels) co-expression in the presence or absence of NH4Cl. * refers to the corresponding MW on the SDS gel. doi:10.1371/journal.pone.0054295.g004

Figure 5 legend: Nef and CTLA-4 co-localize in early and recycling endosomes. a) HeLa cells transfected only with a FLAG-Nef expression vector (upper panels) or co-transfected (lower panels) with a CTLA-4 expressing vector were incubated with Alexa 633-labeled transferrin (blue), CTLA-4-specific antibody (green) and FLAG-Nef antibody (red). The extreme left panels show a transmission light field. The middle lower panel shows the co-localization between Nef and CTLA-4 (yellow results from merging green (CTLA-4) and red (Nef)). The extreme right lower panel shows the co-localization of the three proteins, CTLA-4, Nef and transferrin (white combines green, red and blue). Bar = 20 µm. b) Scatter diagram showing the intensity of CTLA-4-Alexa Fluor 488 (Y-axis) and Nef-Alexa Fluor 546 (X-axis). c) Co-localization between CTLA-4 and Nef represented by the mean Pearson's correlation coefficient (Rr) (n = 16, 9 fields). doi:10.1371/journal.pone.0054295.g005

Nef proteins from virtually all primate lentiviruses modulate the expression of a large number of cell surface proteins, including CD4 [2,52,53,54,55,56] and MHC I molecules [10,57], through multiple mechanisms. We show here that Nef mediates CTLA-4 and CD4 down-regulation via distinct mechanisms. The CTLA-4 cytoplasmic tail was dispensable for this new function of Nef. In contrast, the CD4 cytoplasmic tail was shown to be required for CD4 degradation by Nef, as CD4ΔCT or a chimera between the CD4 extracellular and the CD8 intracellular domains is resistant to Nef-induced down-regulation [53,58]. We also showed that Nef interacts with CTLA-4 in intracellular compartments. This was confirmed by our confocal microscopy visualization showing that most Nef/CTLA-4 interactions occurred in early and recycling endosomes, with very low levels of co-localization of these two molecules at the plasma membrane (Figure 5). The low levels of CTLA-4 at the cell surface could likely explain the lack of co-localization between CTLA-4 and Nef at the plasma membrane, although CTLA-4 mutants known to be highly expressed at the cell surface were still down-regulated by Nef to levels similar to those of the wt molecule. Our results suggest that CTLA-4 is directed for degradation in the endosomal/lysosomal compartment prior to reaching the plasma membrane, thus highlighting a major difference in the mechanisms that lead to Nef-mediated degradation of CTLA-4 versus CD4.
Despite the above-listed differences, similarities were also observed in the mechanisms of Nef-mediated down-regulation of CTLA-4 and CD4. Both mechanisms required Nef motifs known to be involved in vesicle trafficking. The double aspartic acid DD175 of Nef is involved in the interaction of Nef with the catalytic subunit of the vacuolar ATPase (v-ATPase), originally known as Nef binding protein 1 or NBP1, an essential component of the lysosomal degradation machinery [55]. Binding to the v-ATPase was shown to be required for the successful interaction of Nef with the endocytic machinery by connecting Nef to the µ2 chain of the AP-2 adaptor protein [59]. Similar to CD4 down-regulation, the DD175GA mutation completely abolished the capacity of Nef to modulate CTLA-4 expression, suggesting that the major mechanism leading to CTLA-4 down-regulation by Nef is protein degradation and not simply retention in intracellular compartments. This is supported by our results showing that Nef expression leads to a major reduction in the total intracellular pool of CTLA-4 molecules. Moreover, CTLA-4 expression was indeed rescued when weak-base amines were used to neutralize lysosomal acidity or to inhibit v-ATPase activity (Figure 4). Importantly, in contrast to CD4, treatment with pH-neutralizing agents rescued CTLA-4 surface expression, thus highlighting another difference between Nef-mediated down-regulation of CD4 and CTLA-4.
Together, our study reveals a novel potential mechanism for HIV pathogenesis by which Nef-mediated CTLA-4 down-regulation may decrease the threshold of T cell activation, a critical step for HIV-1 replication and dissemination. Nef-mediated down-regulation of CTLA-4 on the surface of infected cells may also contribute to the global hyperimmune activation that is a hallmark of HIV infection. In line with this, Nef was shown to be exported from infected cells through accelerated release of Nef-containing exosomes [60]. Extracellular Nef targets bystander CD4+ T cells for apoptosis [61] and also B cells for the suppression of immunoglobulin class switching [62]. The uptake of Nef by bystander CD4+ T cells may result in the down-regulation of CTLA-4 on the cell surface of non-infected cells, leading to globally sustained hyperimmune activation and increased viral replication. Interestingly, work by Cecchinato et al. [63] on SIV-infected non-human primates (NHPs) demonstrated that antagonizing the CTLA-4 effect by blocking CTLA-4 with specific antibodies prior to and following SIV challenge led to hyperimmune activation and increased viral replication. Most importantly, this treatment decreased the responsiveness to antiretroviral therapy and abrogated the ability of therapeutic T cell vaccines to decrease the viral load set point. This effect is likely a consequence of the lack of CTLA-4 binding to its ligands, leading to a substantial increase in activated CD4+ T cells and subsequent infection with SIV. Therefore, CTLA-4 evidently plays a physiological role in limiting viral replication and/or dissemination, and Nef-mediated down-regulation of CTLA-4 is likely to counteract this function.
In conclusion, our findings showing that the HIV-1 Nef protein down-regulates the negative modulator CTLA-4 represent a novel mechanism for HIV-1 pathogenesis that is likely involved in the enhancement of T cell activation and T cell turnover, two key cellular functions that are important for HIV-1 replication and dissemination. HIV-1 Nef is subject to high sequence variation during the course of infection and among infected individuals. The ability of HIV-1 Nef to modulate CTLA-4 expression may differ between viral strains and may be linked to the course of disease progression. Our study opens the path for this new type of investigation, with a particular focus on differences in HIV Nef immunoregulatory functions between transmitted/founder versus chronic viruses and between subjects with slow versus rapid disease progression.
Small Molecule Fluorescent Ligands for the Atypical Chemokine Receptor 3 (ACKR3)
The atypical chemokine receptor 3 (ACKR3) is a receptor that induces cancer progression and metastasis in multiple cell types. Therefore, new chemical tools are required to study the role of ACKR3 in cancer and other diseases. In this study, fluorescent probes, based on a series of small molecule ACKR3 agonists, were synthesized. Three fluorescent probes, which showed specific binding to ACKR3 through a luminescence-based NanoBRET binding assay (pKd ranging from 6.8 to 7.8) are disclosed. Due to their high affinity at the ACKR3, we have shown their application in both competition binding experiments and confocal microscopy studies showing the cellular distribution of this receptor.
The atypical chemokine receptor 3 (ACKR3), previously known as CXC-chemokine receptor 7 (CXCR7), is an atypical chemokine receptor belonging to the class A G protein-coupled receptor (GPCR) family. Although the biological role of ACKR3 is not entirely understood, it is reported to function as a scavenger of CXCL12 (C-X-C chemokine 12, also known as SDF-1, stromal cell-derived factor 1), establishing CXCL12 gradients and thereby modulating CXCR4 signaling. 1,2 It has been postulated to regulate a range of biological functions that occur after binding of the endogenous ligand CXCL12 and subsequent recruitment of the multifunctional intracellular protein β-arrestin, resulting in phosphorylation-dependent receptor internalization without detectable activation of G-proteins. 3 Expression of ACKR3 on the surface of platelets has been shown to be up-regulated in patients suffering from acute myocardial infarction, and subsequent elevation of ACKR3 expression leads to an improvement in recovery. 4,5 Additionally, increased infarct size and subsequent patient mortality have been observed where ACKR3 expression has been decreased, signifying the importance of ACKR3 in promoting proliferation and angiogenesis. 6 ACKR3 is known to be overexpressed in numerous cancer types, indicating its involvement in the modulation of tumor cell proliferation and migration and tumor angiogenesis, contributing to cancer progression and metastasis. 7 Due to the increasing literature on the role of ACKR3 in disease, several structurally diverse small molecule ACKR3 ligands have been reported (Figure 1). 8,9 Currently, the most widely used compound to study ACKR3 function is the endogenous ligand CXCL12. Although human CXCL12 and its radiolabeled and fluorescently labeled versions are available through commercial sources, their arduous synthesis makes them very expensive to employ in both in vitro and in vivo imaging. Antibodies and nanobodies have also emerged as highly selective tools to study ACKR3, 18,19 but, similar to CXCL12, the development of ACKR3-specific antibodies and nanobodies is difficult and time-consuming, making them also very expensive for the medicinal chemist to routinely employ. Small molecule ligands that selectively target ACKR3 can offer several advantages over chemokines and antibodies as tool compounds to probe receptor function.−23 We report the synthesis of the first fluorescent ACKR3 probes, based on the receptor agonist VUF11207 (1). 10 An evaluation of the reported structure−activity relationship (SAR) of the small molecule inhibitor, combined with in silico docking experiments utilizing the recently disclosed Cryo-EM structure of ACKR3 complexed with the partial agonist CCX662 (8), 17 informed the synthetic strategy for linker design and fluorophore attachment (Figure 2).

Figure 1. Examples of reported small molecule ACKR3 ligands: 1, 10 2, 11 3, 12 4, 13 5, 14 6, 15 7, 16 and 8. 17 Details of the affinity or potency of the compounds are shown in the Supporting Information (Table S1).
The resulting fluorescent compounds were characterized in a BRET-based assay, enabled by a NanoLuciferase (NLuc)-ACKR3 construct. The recently developed NanoBRET methodology has allowed characterization of various (fluorescent) probes targeting GPCRs, even when under endogenous promotion. 20,21,26 The synthesis of fluorescent derivatives of VUF11207 was based on procedures described by Wijtmans in the development of VUF11207. 10 Zarca et al. recently reported on the pharmacological evaluation of the synthesized single enantiomers of VUF11207 (1), showing that (R)-1 had a pEC50 of 8.3 ± 0.1 compared to (S)-1, which has a corresponding pEC50 of 7.7 ± 0.1 in a [125I]CXCL12 displacement assay. 27 Synthesis started with an aldol reaction between 2-fluorobenzaldehyde and propionaldehyde, which under basic conditions provided (E)-3-(2-fluorophenyl)-2-methylacrylaldehyde 10 in excellent yield. A reductive amination with a picoline borane complex and (R)-2-(1-methylpyrrolidin-2-yl)ethanamine gave the homochiral precursor 11 in good yield. With this key fragment in hand, we set out to synthesize the various linkers. Here, we chose to develop linkers of three different lengths, with PEG chains ranging from 0 to 2. Commercially available alcohol-carbamates 12a−c were first converted into the tosylates 13a−c using tosyl chloride. O-Alkylation using methyl 3-hydroxy-4,5-dimethoxybenzoate efficiently installed the linkers on the 3′-position. Hydrolysis of the methyl esters to the benzoic acids 15a−c using lithium hydroxide proceeded in quantitative yield, allowing subsequent peptide coupling with key intermediate 11 to give 16a−c; after N-Boc deprotection, the congeners 17a−c were ready for conjugation to commercially available fluorescent dyes (Scheme 1).
The congeners 17a−c were reacted with the commercial BODIPY FL-X succinimidyl ester to give the corresponding fluorescent ligands 18a−c, after purification by reverse phase HPLC. The fluorescent ligands were prepared in >95% purity as determined by analytical HPLC (Scheme 2).
Pharmacological Evaluation of Fluorescent ACKR3 Antagonists. The fluorescent conjugates (18a−c) were evaluated using a range of pharmacological assays. Initially, saturation binding experiments were used to determine the affinity of the fluorescent conjugates toward the ACKR3 receptor. The fluorescent properties of the compounds allowed detection of the proximity of the fluorescent ligands to an N-terminal NanoLuciferase-tagged receptor (NLuc-ACKR3) by means of bioluminescence resonance energy transfer (NanoBRET). 20 The three fluorescent conjugates produced clear, saturable specific binding to the NLuc-ACKR3 receptor that was associated with low levels of nonspecific binding (determined in the presence of unlabeled (R)-1), resulting in pKd values ranging from 6.8 to 7.9 (Figure 3 and Table 1).
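The pKd values in Table 1 derive from fitting specific binding (total minus nonspecific BRET ratio) to a one-site saturation model. The sketch below shows one way such a fit could be performed with SciPy; the concentrations and BRET ratios are illustrative placeholders, not the study's raw data.

```python
import numpy as np
from scipy.optimize import curve_fit

# One-site saturation binding fit, as typically applied to NanoBRET data:
# specific binding = Bmax * [L] / (Kd + [L]); pKd = -log10(Kd in molar).
# Concentrations and BRET ratios below are illustrative, not measured values.
conc_nM = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
total = np.array([0.003, 0.009, 0.028, 0.072, 0.160, 0.255, 0.373])
nonspecific = np.array([0.000, 0.000, 0.001, 0.003, 0.010, 0.030, 0.100])
specific = total - nonspecific

def one_site(L, bmax, kd):
    return bmax * L / (kd + L)

(bmax, kd_nM), _ = curve_fit(one_site, conc_nM, specific, p0=(0.3, 100.0))
pkd = -np.log10(kd_nM * 1e-9)
print(f"Bmax = {bmax:.3f}, Kd = {kd_nM:.0f} nM, pKd = {pkd:.2f}")
```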
To further evaluate the use of 18a in the NanoBRET-ligand binding assay, affinities of ACKR3 ligands 4, 9, 19, and 20 were determined in competition binding experiments (Table 2).
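Competition binding data of this kind are commonly analysed by fitting the displacement curve for an IC50 and, where the tracer Kd is known, converting to Ki with the Cheng–Prusoff equation. The exact analysis pipeline behind Table 2 is not detailed in this excerpt, so the sketch below is only an illustration with invented data, tracer concentration and tracer Kd.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative competition-binding analysis: fit a one-site displacement
# curve for log IC50, then apply the Cheng-Prusoff correction to estimate Ki.
# Data, tracer concentration and tracer Kd below are invented placeholders.
log_conc = np.array([-10, -9, -8, -7, -6, -5], dtype=float)   # log10 [competitor], M
bret = np.array([1.00, 0.97, 0.80, 0.45, 0.15, 0.05])         # normalized specific binding

def displacement(logc, top, bottom, logic50):
    return bottom + (top - bottom) / (1 + 10 ** (logc - logic50))

(top, bottom, logic50), _ = curve_fit(displacement, log_conc, bret, p0=(1.0, 0.0, -7.0))

tracer_conc = 50e-9    # 50 nM fluorescent tracer (assumed for illustration)
tracer_kd = 25e-9      # tracer Kd in molar (assumed for illustration)
ic50 = 10 ** logic50
ki = ic50 / (1 + tracer_conc / tracer_kd)                      # Cheng-Prusoff correction
print(f"pIC50 = {-logic50:.2f}, pKi = {-np.log10(ki):.2f}")
```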
The availability of high affinity green fluorescent ACKR3 receptor ligands suggested utility for live cell imaging. Confocal microscopy images were captured of fluorescent ligand 18a incubated for 30 min at 37 °C with HEK293 cells transiently expressing N-terminal SNAP-tag-ACKR3 (referred to as SNAP-ACKR3). Under these conditions, SNAP-ACKR3 labeled with the cell-impermeable SNAP-AF647 showed a predominantly vesicular intracellular location, with a small amount on the cell membrane (Figure 4, second column). This is consistent with its known high levels of constitutive ACKR3 cycling. Ligand 18a (100 nM) showed a very similar distribution of mainly intracellular fluorescence, which was co-localized with that of the SNAP-ACKR3 receptor (Figure 4) and may therefore also indicate some ligand-induced internalization. Images collected at various time points during incubation with 50 nM 18a (Supporting Information, Figure S1) indicated that 18a was initially bound to the cell surface and then internalized with SNAP-ACKR3. When cells were pretreated with (R)-1, the binding of 18a was significantly reduced, suggesting that the majority of the observed fluorescence was specific binding of 18a to the SNAP-ACKR3 receptor.
We have reported the characterization of the first small molecule-based fluorescent probes for ACKR3. Compounds 18a−c retained good affinity toward the ACKR3 receptor, as shown by NanoBRET saturation experiments. We further demonstrated that 18a is a useful screening tool for discovering new ACKR3 agonists. Compound 18a displayed good signal-to-noise in NanoBRET competition-binding experiments and was displaced by the established small molecule agonist (R)-1, close analogues, and a structurally diverse agonist 4. The fluorescent ACKR3 ligands (18a−c) can be used in live cell confocal microscopy experiments and, in combination with the NanoBRET approach, may shed further light on ACKR3 function and its participation in pathophysiological conditions.
Figure 2 .
Figure 2. (a) Reported SAR 10 suggests the highlighted 3-methoxy group present in VUF11207 (1) is not essential for ACKR3 binding and can be targeted for linker and fluorophore attachment. (b) Docking of VUF11207 (R)-1 into ACKR3 (PDB 7SK9) suggests substitution on the 3-position of the aryl ring would be an appropriate choice for linker and fluorophore attachment. Docking experiments were performed using OEDOCKING Hybrid docking. 24,25
Figure 3 .
Figure 3. Saturation binding of 18a−c in HEK293G_NLuc-ACKR3 cells. HEK293 cells stably expressing full-length N-terminal NanoLuciferase-ACKR3 were treated with 18a−c in the presence and absence of 10 μM unlabeled (R)-1. Compounds were added simultaneously and incubated for 60 min at 37 °C in HBSS containing 0.2% BSA. Furimazine (1:400 final dilution) was added and plates were incubated for 5 min. Fluorescence and luminescence emissions were measured using a BMG Pherastar FS. The raw BRET ratio was calculated by dividing the fluorescent signal by the bioluminescent signal, and specific binding was calculated by deducting nonspecific binding from the total binding values.
Table 1
a pKd values were calculated from the negative logarithm of the equilibrium dissociation constant (Kd) determined from saturation-binding experiments using increasing concentrations of labeled ligand in the presence or absence of (R)-1 (10 μM). Data are expressed as mean ± SEM, where each experiment was performed in triplicate.
Table 2 .
Binding Affinities of a Series of Known ACKR3 Ligands a Data are combined mean ± SEM, where each experiment was performed in triplicate.
N-terminal brain natriuretic propeptide levels correlate with procalcitonin and C-reactive protein levels in septic patients
The aim of this study was to find the relationship between N-terminal brain natriuretic propeptide (NT-proBNP), procalcitonin (PCT) and C-reactive protein (CRP) plasma concentrations in septic patients. This was a prospective study, performed at Medical University Hospital No. 5 in Łódź. Twenty patients with sepsis and severe sepsis were included in the study. N-terminal brain natriuretic propeptide, procalcitonin and C-reactive protein concentrations, and survival were evaluated. In the whole studied group (128 measurements), the mean NT-proBNP, procalcitonin and C-reactive protein concentrations were, respectively: 140.80±84.65 pg/ml, 22.32±97.41 ng/ml, 128.51±79.05 mg/l. The correlations for the NT-proBNP level and procalcitonin and C-reactive protein levels were 0.3273 (p<0.001) and 0.4134 (p<0.001), respectively. NT-proBNP levels correlate with PCT and CRP levels in septic patients. In the survivor subgroup, the mean NT-proBNP plasma concentrations were significantly lower than in the non-survivor subgroup.
INTRODUCTION
In 1992, the American College of Chest Physicians (ACCP) and the Society of Critical Care Medicine (SCCM) published their official definitions of sepsis, severe sepsis and septic shock. These septic states are associated with poor prognoses and increased mortality [1]. For many years, studies have been conducted to find and introduce into clinical practice more sensitive and specific markers of the severity of the inflammatory response and organ dysfunction [2][3][4][5]. Procalcitonin (PCT) and C-reactive protein (CRP) blood concentrations are accepted sepsis markers. Cardiac dysfunction frequently accompanies severe sepsis and septic shock. N-terminal brain natriuretic propeptide (NT-proBNP) is a useful laboratory marker of cardiac dysfunction [6][7][8]. Procalcitonin is a protein formed from 116 amino acids. In physiological conditions, in thyroid C cells, PCT is a precursor of calcitonin, among other peptides. Procalcitonin is generated as the result of the proteolysis of the pre-procalcitonin precursor protein, formed of 141 amino acids. In acute inflammatory reactions, an increase in PCT release to the blood is observed. Most probably, in those conditions, it does not originate from thyroid C cells. PCT is assumed to be synthesized in liver macrophages and monocytes, and also in pulmonary and intestinal neuroendocrine cells. The latest studies have also suggested that various types of blood leucocytes may be sites of PCT production [9][10][11]. PCT is a sensitive and specific marker of generalized bacterial, fungal or parasitic infection. The PCT level determined via the immunoluminometric method in subjects without generalized infection is <0.5 ng/ml. The level is not elevated or only insignificantly increased in the case of viral infection, while it demonstrates highly dynamic increases in septic patients, even reaching values exceeding 1000 ng/ml. The PCT level is generally not affected by injury, surgery, chronic inflammatory processes, autoimmune diseases or applied drugs, with few exceptions. Serum PCT concentration correlates with the severity of sepsis, and is a reliable factor in determining the prognosis and response to the treatment. Initial studies estimating the correlation between PCT concentration and the severity of sepsis were carried out by Zeni et al. and published in 1994 [12]. They demonstrated that in cases of severe clinical course, the serum PCT concentration is higher. These observations were confirmed by later studies. PCT was also found to be the only serum marker other than neopterin enabling differentiation between sepsis and severe sepsis [13,14]. The serum levels of CRP, an acute-phase protein synthesized by the liver following stimulation by various cytokines, markedly increase within hours of infection or inflammation. Numerous studies have demonstrated increased CRP levels in patients with sepsis [15][16][17]. N-terminal brain natriuretic propeptide is a newly described cardiac hormone. NT-proBNP consists of 76 amino acids and is formed from the propeptide proBNP, which is produced first of all in ventricular myocytes and then splits in the blood serum into physiologically active brain natriuretic peptide and physiologically inactive NT-proBNP. A few studies have demonstrated increased NT-proBNP levels in septic patients, but their relationship to PCT and CRP levels has not been evaluated [18,19].
Considering that cardiac dysfunction is often present in patients with septic shock, and that PCT and CRP are markers of sepsis, we assumed that NT-proBNP levels correlate with PCT and CRP levels in septic patients. The aim of this study was to investigate the relationships between NT-proBNP and PCT and CRP plasma concentrations in septic patients.
MATERIALS AND METHODS
The approval of the Bioethics Committee of the Medical University in Łódź was obtained (No. RNN/26/03/KB), and 20 consecutive patients were qualified for the study: 15 men and 5 women. The basic data about the investigated group is given in Tab. 1. The criteria of sepsis according to the definition accepted at the ACCP/SCCM conference (modified by the Polish Working Group on Sepsis [20]) were the basis for enrollment in the study [1,2]. The investigations were carried out on each patient until the patient stopped meeting those criteria or died. All the patients were given verbal and written information about the potential risks and benefits of participation in the study. They gave written consent prior to the study. Subjects were recruited consecutively from patients received by the ICU from 1 st July 2003 to 31 st July 2004. All the patients were treated by the same team of physicians, and care of the patients was conducted according to the same existing protocols. The standard treatment included administration of adequate antibiotics, control of the source of infection and supportive therapy (intravenous fluids, medication aiding the circulatory system, vasopressors, aiding failing organs). Two patients were given Recombinant Human Activated Protein C. Blood serum NT-proBNP, PCT and CRP concentrations were determined for each patient at given time intervals. The first measurement was performed within 12 h of the patient's inclusion into the study, the second 12 h after the first, the third 24 h after the first, the fourth 48 h after the first, and the fifth and each subsequent measurement 48 h after the previous one. The quantitative determination of the NT-proBNP level (in pg/ml) (half life: 60-120 minutes) was based on the immunoenzymatic method: a test based on the competitive EIA failure, and 3 (21.4%) with diabetes. In the non-survivor subgroup, there were 3 patients (50.0%) with hypertension, 3 (50.0%) with ischemic heart disease, and one (16.7%) with chronic obstructive pulmonary disease. In the whole studied group (128 measurements), the mean NT-proBNP, procalcitonin and C-reactive protein concentrations were respectively: 140.80±84.65 pg/ml, 22.32±97.41 ng/ml, 128.51±79.05 mg/l. Detailed data (mean, median, minimum, maximum and standard deviation) is presented in Tab. 3. The correlation of the NT-proBNP level and procalcitonin and C-reactive protein levels was 0.3273 (p<0.001) and 0.4134 (p<0.001), respectively.
Tab. 3. NT-proBNP, PCT and CRP levels in the studied group. Mean, median, minimum, maximum and standard deviation (SD).
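The correlation coefficients quoted above are straightforward to reproduce with standard statistical libraries. Since this excerpt does not state whether a Pearson or a rank (Spearman) correlation was used, the sketch below computes both on synthetic paired measurements that merely stand in for the 128 real samples.

```python
import numpy as np
from scipy import stats

# Illustrative correlation analysis between NT-proBNP and the sepsis
# markers PCT and CRP across repeated measurements. The arrays below are
# synthetic stand-ins for the paired measurements of the study.
rng = np.random.default_rng(1)
nt_probnp = rng.lognormal(mean=4.5, sigma=0.7, size=128)          # pg/ml
pct = 0.05 * nt_probnp + rng.lognormal(1.0, 1.0, size=128)        # ng/ml
crp = 0.5 * nt_probnp + rng.normal(60, 40, size=128)              # mg/l

for name, marker in (("PCT", pct), ("CRP", crp)):
    r_pearson, p_pearson = stats.pearsonr(nt_probnp, marker)
    r_spearman, p_spearman = stats.spearmanr(nt_probnp, marker)
    print(f"NT-proBNP vs {name}: Pearson r={r_pearson:.3f} (p={p_pearson:.1e}), "
          f"Spearman rho={r_spearman:.3f} (p={p_spearman:.1e})")
```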
DISCUSSION
For a patient to be diagnosed with severe sepsis or septic shock, they must have dysfunction of at least one organ. It may but need not be cardiac dysfunction. In the case of patients in septic shock, cardiac dysfunction is frequent. NT-proBNP is a recognized marker of cardiac dysfunction. Taking the above into account, an assumption was put forward that in septic patients, together with the intensification of the severity of sepsis, there comes an intensification of cardiac dysfunction (increase in NT-proBNP levels). PCT and CRP were used as markers of the intensification of sepsis severity. It is true, however, that CRP is a less specific marker of sepsis than PCT, but some studies have suggested that CRP may be an indicator of organ dysfunction [21][22][23]. NT-proBNP was used as the marker of cardiac dysfunction. In the studies carried out by Castelli et al., the mean serum concentration of PCT was 0.38 ng/ml in patients with SIRS (Systemic Inflammatory Response Syndrome), while in those with sepsis (patients with sepsis, severe sepsis or septic shock), it was 1.58 ng/ml [24]. In the studies of Tugrul et al., in patients with severe sepsis and septic shock, the mean PCT concentrations were 19.25 ng/ml and 37.15 ng/ml, respectively, while in SIRS patients, it was 0.73 ng/ml [25]. According to the majority of researchers, a PCT concentration exceeding 10 ng/ml is associated with the development of severe infection and poor prognosis [26,27]. The mean PCT concentration obtained in our study (22.32 ng/ml) is in agreement with the results reported by other authors. Pereira-Barretto et al. suggested that serum NT-proBNP concentrations exceeding 100 pmol/l allow for the identification of patients with heart failure, while concentrations over 270 pmol/l are observed in patients with severe heart failure [28]. Chua et al. described a significantly elevated level of NT-proBNP in patients in septic shock [21]. Comparing our results to those obtained by other authors, attention should be paid to the method used for NT-proBNP determination. A minor difference in the method may make this comparison impossible [29,30]. Furthermore, according to Prontera et al., healthy women (64.3±41.6 pg/ml, 7.59±4.91 pmol/l) showed significantly higher values of NT-proBNP than men (46.9±30.9 pg/ml, 5.53±3.64 pmol/l) [31]. In our study, the mean NT-proBNP concentration was below 200 pg/ml. It suggests that cardiac failure was not very significant in our septic patients. NT-proBNP levels correlated with the PCT and CRP levels in septic patients. To our knowledge, our study is the first to describe the correlation between NT-proBNP and PCT and CRP levels in septic patients. In this study, we estimated the correlation not only in the investigated group of patients (128 measurements), but also in two subgroups: survivors (93 measurements) and non-survivors (35 measurements). In each subgroup, a significant positive correlation was observed between NT-proBNP concentrations and the concentrations of PCT and CRP. Attention should be paid to the markedly stronger correlation of NT-proBNP with PCT in the survivor subgroup (0.485, p<0.001) and of NT-proBNP with CRP in the non-survivor subgroup (0.6378 p<0.001). The explanation for this requires further study on a larger group of patients. In our investigations, in the survivor subgroup, the mean NT-proBNP plasma concentrations were significantly lower than in the non-survivor subgroup. This is in agreement with the studies of Hoffman et al. [4]. 
The obtained results suggest that cardiac dysfunction played a more significant role in patients in the non-survivor subgroup. In the studies of Brun-Buisson et al., concerning patients with severe sepsis in intensive care units in France, mortality analyzed within a period of 30 days was 35%. In our study, mortality analyzed within a period of 28 days was 30%. Here, the relatively low levels of NT-proBNP that were reported for each patient make the presence of severe underlying cardiac dysfunction very unlikely.
However, two hypotheses should therefore be considered to partly explain this phenomenon and the relationship between NT-proBNP and inflammatory markers. Many patients with heart failure have stiff hearts with an increased wall thickness and small volumes, leading to diastolic dysfunction. Natriuretic peptides (BNP or NT-proBNP) might be used to detect patients with diastolic dysfunction, especially those patients with a restrictive filling pattern or pseudonormalized mitral flow pattern and those who are symptomatic. However, patients with relaxation abnormalities and mild symptoms, or who are asymptomatic, may have normal levels of the natriuretic peptides, indicating no or only a slight elevation of the left ventricular filling pressures. Thus, low levels cannot be used to rule out a diagnosis of diastolic dysfunction [32,33]. However, it should be understood that BNP and NT-proBNP levels might be raised to different degrees, not only in heart failure but also in critical illness and various pulmonary diseases; in these situations, BNP and NT-proBNP may also serve as markers of severity and prognosis [34]. On the other hand, low levels of NT-proBNP in septic patients may result from minor myocardial cell damage and wall stress, due to supportive therapy and fluid resuscitation, which are both related to the severity of sepsis and critical illness (as expressed by inflammatory biomarker levels). The second hypothesis is connected with the potential up-regulation and release of natriuretic peptides by pro-inflammatory cytokines. The available data show that an increase in circulating BNP is observed coincident with cardiac allograft rejection, and this is reversed upon treatment with anti-lymphocyte therapy, suggesting that pro-inflammatory cytokines may uniquely modulate BNP gene expression and secretion. Ma et al. investigated pro-inflammatory cytokines and conditioned medium (CM) derived from mixed-lymphocyte reaction (MLR) cultures for their ability to modulate ANF or BNP mRNA expression and secretion, as well as BNP promoter activity, in cultured neonatal rat cardiomyocytes. On the basis of their results, they showed that the exposure of cultured rat cardiomyocytes to specific pro-inflammatory cytokines and MLR-CM results in the only known instance of up-regulation of cardiac BNP at the transcriptional and translational levels without a corresponding increase in ANF gene expression. Furthermore, these effects were dependent on signaling by p38 MAP kinase. These findings reveal a unique discoordinated expression of BNP and ANF in response to inflammatory cytokines, and offer an opportunity to better understand the differential regulation of these two cardiac-derived endocrine hormones, which share receptors as well as biological properties. These relationships may also help to partly understand the cause of low levels of NT-proBNP in patients with severe sepsis [35][36][37]. Our study should be considered as a preliminary examination due to the small number of samples. There is considerable evidence that NT-proBNP levels primarily reflect underlying chronic myocardial dysfunction and active myocardial cell damage, and, in the available literature, several variables are well known to potentially influence natriuretic peptide concentrations, including age, gender, body mass index and renal function. These factors were not evaluated in our study.
In conclusion, NT-proBNP levels correlate with PCT and CRP levels in septic patients. In the whole investigated group, the correlation between NT-proBNP and CRP concentrations was more significant than the correlation between NT-proBNP and procalcitonin. In the survivor subgroup, the mean NT-proBNP plasma concentrations were significantly lower than in the non-survivor subgroup.
Using Microbial Responses Viewer and a Regression Approach to Assess the Effect of pH, Activity of Water and Temperature on the Survival of Campylobacter spp.
This study aimed at developing a model for evaluating the survival of various Campylobacter jejuni strains under different conditions, using culture media and poultry data from ComBase. Campylobacter data for culture media (116 datasets) and poultry (19 datasets) were collected from Microbial Responses Viewer, an additional tool of ComBase. The Weibull equation was selected as a suitable model for the analysis of survival data because of the nonlinearity of the survival curves. Then, the fitting parameters (first reduction time and shape parameter) were analysed through a Kruskal–Wallis test and box-whisker plots, thus pointing out the existence of two classes of temperature (0–12 °C and 15–25 °C) and pH (4–6.5 and 7–7.5) acting on the viability of C. jejuni. Finally, a general regression model was used to build a comprehensive function; all factors were significant, but temperature was the most significant variable, followed by pH and water activity. In addition, desirability and prediction profiles highlighted a negative correlation of the first reduction time with temperature and a positive correlation with pH and water activity.
Introduction
Campylobacter spp. are commensal organisms in cattle, sheep, pigs and poultry, alongside various other birds, and usually do not cause any symptoms in animals. They are Gram-negative, slender, spirally curved, rod-shaped, non-spore-forming and microaerophilic bacteria [1,2]. It is known that there are 51 species and 16 subspecies belonging to the Campylobacter genus of the Campylobacteraceae family. Two subspecies belong to the species Campylobacter jejuni: Campylobacter jejuni subsp. doylei and Campylobacter jejuni subsp. jejuni [3].
Campylobacteriosis is an infectious disease generally caused by C. jejuni, but C. coli, C. concisus, C. upsaliensis, C. ureolyticus, C. hyointestinalis and C. sputorum also give rise to this infection [4]. With regard to foodborne diseases, C. jejuni and C. coli are the most important species and the most resistant to physical conditions. The most common clinical manifestations in humans consist of diarrhoea, fever, abdominal pain, headache, nausea and vomiting. However, complications such as Guillain-Barré syndrome can also occur after infection [5].
Campylobacter spp. can contaminate foods in different ways. It is known that the major contamination sources of C. jejuni infection in humans are poultry products [6,7]. The transportation and slaughter of animals that are intestinal carriers is an important step leading to an increase in the load of thermophilic Campylobacter, one of the clinically most important causes of gastroenteritis in humans worldwide [2,8].
Predictive microbiology in foods is an area of applied research in food microbiology using mathematical models to predict microbial growth and responses under different environmental conditions [9,10]. Predictive models can provide accurate predictions of microbial growth and inactivation. Using these predictions instead of microbiological experiments offers a cheaper and more efficient alternative for researchers [11,12]. There are different levels of modelling in predictive microbiology; the classic definition is as follows:
1. Primary models, which show cell number as a function of time to model growth, inactivation or survival;
2. Secondary models (for example the gamma approach, square root and polynomial equations), which focus on some parameters of primary models (e.g., growth or inactivation rate, lag phase) as a function of intrinsic or extrinsic factors (pH, activity of water, temperature, salt, the concentration of antimicrobial compounds, etc.);
3. Tertiary models, which are databases and software able to simulate a priori growth or inactivation as a function of some input conditions.
The best-known database or tertiary model is ComBase, which includes more than 60,000 records on microorganisms' behaviours in different environments and some models to predict growth and inactivation [13,14]; moreover, it also contains some additional tools that improve its performance in some fields. One of these additional tools is Microbial Responses Viewer (MRV), a database consisting of microbial growth/no growth data, derived from ComBase, under specified environmental conditions of temperature, pH and activity of water (aw) [15].
Although Campylobacter spp. pose challenges for food safety, there are still few studies that predict their survival; moreover, the studies available in the literature are based on single strains or on mixes composed of a few isolates. Some evidence is available for chicken meat during a model gastric digestion [16], to predict infections through a dose-response approach [17], for inactivation during heat treatment through a Bayesian approach [18], for compliance with performance criteria in poultry meat [19], or for the estimation of the incidence of campylobacteriosis through a Monte Carlo simulation; however, to the best of our knowledge there are no models able to predict the effect of some intrinsic or extrinsic factors of foods on a wide range of isolates of Campylobacter spp. Therefore, the main aim of this research was to develop a polynomial model able to predict C. jejuni survival in lab media and foods (poultry), taking into account strain variability, as well as evaluating the goodness and the usefulness of this model. This aim was addressed through some intermediate milestones: (a) using MRV to generate the data for C. jejuni (cell counts vs. time); (b) primary modelling; (c) building a polynomial equation through a multiple regression approach.
Research Planning
The research focused on the building of a comprehensive model to assess the effect of pH, temperature and a w on Campylobacter spp. The source of data was MRV, while the steps for model building are in Figure 1.
Data Collecting from MRV
The source of data was Microbial Responses Viewer (MRV), an additional tool of ComBase. By selecting Campylobacter in MRV and then a culture medium or poultry, from the general plots it was possible to gain access to the different datasets available in the model (Supplementary Figure S1/File S1). For some of them, there was only a linear death kinetic without experimental values (Supplementary Figure S2A/File S1). These combinations were excluded, while only the combinations with a scatter plot were retained (Supplementary Figure S2B). The values were pasted and copied into an Excel sheet for the second step (Supplementary File S2).
Data from both wild and collection isolates (see Supplementary Tables S1 and S2/File S1) were collected; Table 1 shows the conditions for which data were gained.
Primary Modelling with Weibull Function
The Weibull equation is considered to be a suitable model for the analysis of survival data, as it explains the nonlinearity often observed in survival curves [20]. This function has two parameters: the shape parameter (p) and the first reduction time (δ). The shape parameter informs about the shape of the survival curves of microorganisms and takes into account at least three shapes of the death curve: downward (p > 1), upward (p < 1) and linear (p = 1). The first reduction time is the time to attain a reduction of 1 log CFU/mL in the cell counts [21,22]. The δ value is similar to the D (decimal reduction time) value, but it differs from D in that the δ parameter gives information about the mean of the distribution describing the time of death of the microbial population [20].

The Weibull equation was used in the form cast by Mafart et al. [23]:

log10 N(t) = log10 N0 − (t/δ)^p

where N(t) is the cell count at time t, N0 is the initial cell count, δ is the first reduction time and p is the shape parameter. The total data in culture media (116 datasets) and poultry (19 datasets) were fitted through Statistica software version 7.0 (Statsoft, Tulsa, OK, USA). The goodness of fit was evaluated according to the regression coefficients and the sum of squared errors (residual sum of squares/final loss).
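A fit of the Mafart form of the Weibull model to a single survival curve can be reproduced with any nonlinear least-squares routine; the study itself used Statistica, so the SciPy sketch below is only an equivalent illustration, with made-up sampling times and counts in place of an actual MRV dataset.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit of the Mafart form of the Weibull survival model,
# log10 N(t) = log10 N0 - (t/delta)**p, to one survival curve.
# The sampling times and counts below are illustrative placeholders.
time_h = np.array([0, 24, 48, 96, 168, 240, 336], dtype=float)
log_counts = np.array([7.0, 6.9, 6.5, 5.8, 4.6, 3.5, 2.1])

def weibull_decay(t, log_n0, delta, p):
    return log_n0 - (t / delta) ** p

popt, _ = curve_fit(weibull_decay, time_h, log_counts, p0=(7.0, 100.0, 1.0))
log_n0, delta, p = popt
residuals = log_counts - weibull_decay(time_h, *popt)
r_squared = 1 - np.sum(residuals ** 2) / np.sum((log_counts - log_counts.mean()) ** 2)
print(f"log10 N0 = {log_n0:.2f}, delta = {delta:.1f} h, p = {p:.2f}, R^2 = {r_squared:.3f}")
```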
Box-Plots
The first reduction time and the shape parameter were also analyzed through the Kruskal-Wallis test (p < 0.05) and box-whisker plots as a function of pH, temperature and a w to gain a comprehensive overview of the effects of these variables on strain survival.
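A minimal sketch of this step, grouping first reduction times by temperature class, running the Kruskal-Wallis test and drawing a box-whisker plot, is shown below; the δ values are synthetic placeholders and the grouping into two classes mirrors the one reported later in the Results.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Kruskal-Wallis test and box-whisker plot comparing the first reduction
# time (delta) between two temperature classes. Values are synthetic.
rng = np.random.default_rng(2)
delta_by_temp = {
    "0-12 C": rng.lognormal(mean=5.5, sigma=0.8, size=30),   # slow decline
    "15-25 C": rng.lognormal(mean=3.5, sigma=0.8, size=30),  # faster decline
}

h_stat, p_value = stats.kruskal(*delta_by_temp.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.2e}")

plt.boxplot(list(delta_by_temp.values()), labels=list(delta_by_temp.keys()))
plt.ylabel("First reduction time, delta (h)")
plt.savefig("delta_boxplot.png", dpi=150)
```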
General Regression Model
A general regression model was used to build a secondary model able to predict the effects of temperature, pH and a w on the fitting parameters of Weibull function. The significance of the models and parameters was evaluated by the Sum of Squares, the Mean Sum of Squares, the R-value for multiple regression and using Fisher's test.
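The secondary model is, in essence, a multiple regression of the fitting parameters on temperature, pH and aw. The sketch below shows how such a regression could be run with statsmodels on a synthetic design matrix; the study used Statistica, and the coefficients produced here are not those of the paper.

```python
import numpy as np
import statsmodels.api as sm

# Multiple regression of the first reduction time (delta) on temperature,
# pH and water activity, in the spirit of the secondary model described
# above. The design matrix and response below are synthetic.
rng = np.random.default_rng(3)
n = 120
temperature = rng.uniform(0, 25, n)
ph = rng.uniform(4.0, 7.5, n)
aw = rng.uniform(0.95, 0.997, n)
delta = 400 - 14 * temperature + 60 * (ph - 4) + 800 * (aw - 0.95) + rng.normal(0, 40, n)

X = sm.add_constant(np.column_stack([temperature, ph, aw]))
model = sm.OLS(delta, X).fit()
print(model.summary(xname=["const", "temperature", "pH", "aw"]))
```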
The effect of each independent variable (temperature, pH, aw) on the fitting parameters of the Weibull death kinetic (p and δ) was evaluated through the individual desirability functions, estimated as follows:

d = (y − y_min)/(y_max − y_min)

where y_min and y_max are the minimum and maximum values of the dependent variable, respectively.
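The individual desirability defined above reduces to a linear rescaling of the response between its minimum and maximum, clipped to [0, 1]. A minimal sketch is given below; it uses δ values of the same order as those quoted later in the Results purely as example inputs, not as an exact reproduction of the analysis.

```python
import numpy as np

# Linear (Derringer-type) individual desirability for a response that is to
# be maximized: d = (y - y_min) / (y_max - y_min), clipped to [0, 1].
# Example values are illustrative.
def desirability_maximize(y, y_min, y_max):
    return float(np.clip((y - y_min) / (y_max - y_min), 0.0, 1.0))

delta_values = [0.81, 80.0, 255.0, 780.0]   # example first reduction times (h)
y_min, y_max = min(delta_values), max(delta_values)
for d in delta_values:
    print(f"delta = {d:7.2f} h -> desirability = {desirability_maximize(d, y_min, y_max):.2f}")
```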
Primary Model
The main assumption of the models described in this section and in the following ones is that C. jejuni experiences only a death kinetic, as also reported by MRV. Growth was not considered.
Campylobacter survival in MRV is described by a linear model; however, for several situations, the time-dependent survival kinetics of the strains cannot be explained by the linear model because there were some deviations from linearity (data not shown).
After a preliminary selection, the Weibull function was chosen because it is suitable to describe concave or convex decay curves of microorganisms. Most datasets from MRV, in fact, showed a concave shape, suggesting a possible shoulder effect. Biologically, the shoulder step refers to the period during which microorganisms have not yet started to die, for various reasons [13]. As an example, Figure 2 shows two death kinetics of Campylobacter spp. in lab media, while Supplementary Table S1 reports the Weibull parameters and R-values for all datasets.
In culture media, the δ parameter varied as a function of the temperature level, but for each temperature a strong variability was found; mainly at 0, 4, 8 and 12 °C, the differences between the minimum and maximum values of δ were the largest. For example, the δ parameters at 0 °C ranged from 37.4 to 955.4 h (min-max), and the min-max values of the p parameter were 0.64 and 8.30. At 25 °C, the δ parameters of the strains were between 3.22 and 140.79 h, and the p parameters were between 0.71 and 2.20.
The lowest correlation coefficient values of the Weibull equation in the culture media were 0.12416 and 0.61198, while the rest were above 0.85 (Supplementary Table S1), thus suggesting that the Weibull model could satisfactorily describe the death kinetics of this microorganism.
In poultry, the minimum δ was 4.09 h and the maximum was 401.65 h, while the p parameters were in the range of 0.68-2.13. Therefore, the survival kinetics exhibited upward and downward curves similar to those in the culture media. For the poultry data, the correlation coefficients of the Weibull equation were 0.92 and above (Supplementary Table S2).
Effects of pH, Temperature and a w
As a first step, box-whisker plots for the effects of pH, temperature and aw on the first reduction time and the shape parameter were built. Temperature profiles for the first reduction time highlighted two groups (Figure 3A): the first one comprised the death kinetics at 0, 4, 8 and 12 °C, with higher values of δ, although at 10 °C a statistical artifact was found, probably due to the lower number of cases and datasets available in MRV. This first group was characterized by δ-values up to 1000 h. The second group for the temperature profile (p < 0.05) was composed of the datasets at 15, 20 and 25 °C, with a lower δ-value (<200 h). The box plot also suggests a strong variability for each temperature, due to at least two different reasons: the experiments were conducted at different conditions of pH and aw and with different strains.
The pH profile of δ points out a possible effect of pH, with the same limitations (variability) reported for the temperature (Figure 3B). The Kruskal-Wallis test pointed out two groups: the first one was composed of the death kinetics up to pH 6, which showed a first reduction time < 200 h, and the second one was at pH 7.0-7.5 (median value of δ of 780 h). Moreover, an intermediate group, with a trend in between those at pH 3.5-6.0 and 7.0-7.5, was found at pH 6.0-6.5: this transition group had a median δ of 80 h, similar to the group at pH 3.5-6.0, but the third quartile (500 h) and the maximum of the distribution (1000 h) suggest the existence of some strains with a trend similar to the second group (pH 7.0-7.5).
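As an illustrative sketch only (the δ values below are hypothetical placeholders, not the MRV data), a non-parametric comparison of the first reduction times between the two temperature groups suggested by the box-whisker plots could be run as follows:

```python
# Illustrative sketch: Kruskal-Wallis comparison of first reduction times (delta, h)
# between the two temperature groups (hypothetical values, not the MRV datasets).
import numpy as np
from scipy.stats import kruskal

delta_low_T = np.array([955.4, 600.0, 320.0, 780.0, 410.0])   # 0-12 degC datasets
delta_high_T = np.array([140.8, 35.0, 60.0, 12.5, 90.0])      # 15-25 degC datasets

stat, p_value = kruskal(delta_low_T, delta_high_T)
print(f"Kruskal-Wallis H = {stat:.2f}, p = {p_value:.4f}")
# p < 0.05 would support separating a "refrigeration" group (high delta)
# from an "abuse-temperature" group (low delta).
```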
The effect of a w was less pronounced and less significant ( Figure 3C), similar to the effect of the factors on the shape parameter ( Figure 4). For this second parameter, a significant effect was recorded only for pH because in the range 3.5-4.0, the shape parameter was always <1, thus suggesting an upward death kinetic and the lack of a shoulder phase (or resistance period) ( Figure 4B).
Secondary Models
In the second step of this research, a regression approach was used to assess the statistical weight of each factor (temperature, pH and a w ) on the first reduction time and on the shape parameter; the methodology used was the general regression model.
For the first reduction time, the model highlighted the significance of all factors (pH, temperature and a w), although the existence of several outliers and the strong variability in some combinations pointed out only a partial correlation and a qualitative trend, rather than a quantitative function. Figure 5 shows the Pareto chart of standardized effects (bars); a longer bar denotes a more significant effect. Thus, the most significant term was temperature, followed by pH and a w. In addition, the mathematical term of the temperature was negative, while pH and a w had positive terms; that is, the model predicts a decrease of the first reduction time when temperature increases, while an increase of pH and a w determines an increase of this parameter. The quantitative correlation of the first reduction time with the factors could be better highlighted by the desirability profiles. Desirability is a dimensionless parameter, ranging from 0 to 1, and is the answer to the following question: how desirable is an output? The reply is 0 for the worst result (or the minimum value) and 1 for the best one (or the maximum value). Moreover, a desirability profile is often completed by a prediction profile, which shows the predicted values of the dependent variable as a function of the coded values of the factors of the design. Figure 6 shows the desirability (A, C and E) and the prediction profiles (B, D and F) for the effects of the factors on the first reduction time. The model predicted a negative correlation with temperature, with a decrease of δ from 255 h at 0 °C (desirability 0.64) to 0.81 h at 25 °C (desirability 0.35) (Figure 6A,B), thus stressing the strong survival of C. jejuni under refrigeration.
As reported for the Pareto chart, the correlation of δ with pH and a w was positive, as it increased from 0 h at pH 4 to 290 h at pH 8 (Figure 6C,D) and from 47 h (a w 0.96) to 207 h (a w 0.99) (Figure 6E,F).
The general regression approach was also used to model the shape parameter; however, this parameter was less affected by the factors of the design (data not shown).
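Before moving to the discussion, the secondary-modelling step can be illustrated with a minimal, purely hypothetical sketch (not the authors' code or data): regress a transformed first reduction time on temperature, pH and a w, and rank the factors by their standardized effects (t-statistics), as in a Pareto chart.

```python
# Minimal sketch of the secondary-modelling step on a hypothetical design matrix.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "temperature": rng.choice([0, 4, 8, 12, 15, 20, 25], size=40),
    "pH": rng.choice([4.0, 5.0, 6.0, 6.5, 7.0, 7.5], size=40),
    "aw": rng.choice([0.96, 0.97, 0.98, 0.99], size=40),
})
# Hypothetical response following the qualitative trends reported in the text:
# delta decreases with temperature and increases with pH and a_w.
df["log_delta"] = (2.5 - 0.06 * df["temperature"] + 0.15 * df["pH"]
                   + 8.0 * (df["aw"] - 0.97) + rng.normal(0, 0.2, size=40))

X = sm.add_constant(df[["temperature", "pH", "aw"]])
model = sm.OLS(df["log_delta"], X).fit()

effects = model.tvalues.drop("const").abs().sort_values(ascending=False)
print(effects)        # Pareto-style ranking of the standardized effects
print(model.params)   # expected signs: negative for temperature, positive for pH and a_w
```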
Discussion
A model for predictive purposes could be a useful tool to increase safety and to prevent foodborne illnesses; however, to the best of our knowledge, few attempts have been made for Campylobacter spp. mainly to model thermal inactivation [24,25] or for a qualitative risk assessment for Campylobacter prevalence and diffusion in the food chain [19,[26][27][28].
The first steps for a robust risk assessment are hazard characterization and exposure assessment, which rely on the definition of the growth/inactivation rate of the pathogens and on the role of intrinsic and extrinsic factors. This research was aimed at contributing to this step, focusing on both the exact definition of the C. jejuni kinetics and the evaluation of the statistical weight of three main parameters for food preservation (pH, a w and temperature).
The first question was on the shape of the survival kinetic. MRV uses the log-linear model, but the pathogen experienced a different kinetic, and the Weibull model was generally able to describe it, as also reported by González et al. [29]. A non-linear kinetic is a challenge for food preservation because it could be associated to two different phenomena: a shoulder length and a tail phase.
The shoulder is the initial phase of the death kinetic and denotes a period when a pathogen does not decrease; in the Weibull model, it is associated with p > 1 and a high first reduction time, as found in most datasets. The tail (associated with p < 1) is also a challenge because the pathogen could experience a strong reduction in the first phases of the death kinetic and then a prolonged survival, with a residual sub-population [30]. The shape parameter and the first reduction values of C. jejuni found after primary modelling suggest the existence of both scenarios, depending on the strain and on the combination of pH/temperature/a w.
The second step to build a predictive function is secondary modelling, performed in this study through a multiple regression approach.
Some studies have shown that Campylobacter has a high survival capacity at low temperatures [31]. In a culture medium study, conducted with Müller Hinton agar including 2% horse blood (at +2 °C), Campylobacter strains were viable for at least one month under atmospheric conditions [32]. The data of this research confirmed the high viability of C. jejuni; the δ parameter was observed at nearly 41.9 days (1006.1 h) at +4 °C and 5.9 days (140.8 h) at 25 °C. In poultry meat, the maximum value was 11.6 days (278.9 h) at +4 °C.
Concerning the p-parameter (shape parameter), in a study examining the survival of different C. jejuni strains in high- and low-mineral drinking water at +4 °C using the Weibull model, the p parameters ranged between 1.80 ± 0.20 and 3.00 ± 0.39 [33]. In our study, the p parameters at +4 °C were in the range of 0.00-2.08. In addition, in poultry at +4 °C, the p parameters ranged between 0.94 and 2.13.
Besides the high survival capacity at low temperatures, the food matrix is another important parameter for survival. In one study, the influence of retail liver and meat juices on the survival of Campylobacter strains at +4 °C was investigated for five weeks; the strains showed higher survival in beef liver juice and chicken liver juice than in beef juice, chicken juice and Müller Hinton broth [31]. In particular, a cryoprotective effect of the liver composition is mentioned, which promotes survival at low temperatures. In terms of cold tolerance, different responses between Campylobacter strains were observed [34].
Campylobacter is known to be sensitive to acid stress, as well as to drying and low a w [35]. In our study, the δ value is affected by pH and, to a lesser extent, by a w; an increase of pH extends the time to death of the bacterial cells in poultry meat. In the study of Askoura et al. [36], it was observed that the acid resistance of the microorganism increased with a change of the cell membrane composition in the presence of iron. It is known that Campylobacter species transform from a motile spiral form to a coccoid form under adverse conditions and become viable but non-culturable [37].
Apart from temperature, pH and a w, there are other factors influencing Campylobacter survival, such as biofilm formation and oxygen. These variables were not considered in this study because they were not described in MRV; however, they should be added in the future to a comprehensive model for Campylobacter, along with other variables such as strain differences, nutrient or antimicrobial content and the structure of the food matrix.
The last issue raised by this research was the strong strain variability, which should be carefully considered when building a comprehensive model to avoid fail-dangerous scenarios.
Conclusions
This study aimed to develop a predictive model for the factors affecting the survival of C. jejuni, considering culture media, food type (poultry) and strain variability. The Weibull model was successfully used to model C. jejuni survival in culture medium and poultry because the pathogen generally had a non-linear death kinetic, varying from a downward to an upward shape.
In addition, the non-parametric test and the box-whisker plots pointed out the existence of two classes of the first reduction time for both temperature and pH, that is, 0-12 °C vs. 15-25 °C and pH 4-6.5 vs. 7-7.5, which could qualitatively describe a longer or a shorter survival of the pathogen.
Finally, the general regression model pointed out the quantitative correlation of the first reduction time with temperature, pH and a w as a prodromal step to build a comprehensive model.
Several variables affected modelling and function building: (a) The strong variability amongst the different datasets due to the different strains and experimental conditions. (b) The lack of a unifying design (for example a Design of Experiments) to describe the interactive factors. Model building was based on several randomized combinations available on MRV, but the lack of a geometrical or factorial scheme in the combinations did not allow an estimation of interactive effects (additive or synergistic variables).
It is reasonable to imagine that the factors assessed in this research acted synergistically; thus, for comprehensive model building, the preliminary details found in this research should be validated through a DoE able to focus also on interactive factors, because the mutual dependence of factors is probably more important than their linear effects. (c) The prediction of Campylobacter is a challenge because it is a pathogen that is very difficult to study (slow growth in the lab, fastidious growth requirements, etc.). In addition, MRV does not consider several variables (among others, food structure, food components, effects of the natural microbiota, oxygen and carbon dioxide in the headspace), which could strongly and significantly affect the growth/survival of this pathogen.
These issues should be taken into account to design and to develop a robust model for C. jejuni with practical implications; moreover, other variables should be added to the model. Nevertheless, this research could be the background for future studies because it highlighted some crucial factors to consider (kind of death kinetic, strain dependence and the role of the three main factors).
AN INVESTIGATION ON THE ADSORPTION OF METHYLENE BLUE FROM WATER BY MnFe 2 O 4 - MODIFIED DIATOMITE
MnFe2O4/diatomite was obtained by a wet chemical method. The structure of the material was characterized by modern physicochemical methods. The results showed that the surface of diatomite was coated by manganese/iron oxide nanoparticles. The prepared MnFe2O4/diatomite material is a good adsorbent for the removal of methylene blue (MB) from water. The adsorption kinetics of MB on the modified material are consistent with the pseudo-second-order kinetic model. The adsorption isotherms follow the Langmuir isotherm model well, and the maximal adsorption capacity of MB can reach 151.52 mg/g at 323 K. The adsorption of MB on MnFe2O4/diatomite is an endothermic and spontaneous process. The results show that MnFe2O4/diatomite is a promising adsorbent for the efficient removal of cationic dyes from wastewater.
Introduction
Given the rapid population growth and the development of many industries, environmental pollution has been increasing, especially in the water environment. Dyes from the textile industry, leather and shoe production, pharmaceuticals and cosmetics are among the most important pollutants of water resources (Dang, T.-D. et al., 2016; Sun, Z. et al., 2017). Among the many kinds of dyes, cationic dyes are more toxic than anionic ones; cationic dyes easily interact with the negatively charged cell membrane and can also penetrate and accumulate in cells (Bayramoglu, G. et al., 2009). Textile dyeing wastewater has been listed among the most important pollutants because of its high toxicity and low biodegradability. Therefore, many scientists have been interested in treating wastewater containing cationic dyes to protect the environment (Dai, D. et al., 2021; Pang, J. et al., 2019; Sun, Z. et al., 2017). Over many decades, different methods have been developed to remove dyes from water, such as adsorption (Azha, S. F. et al., 2019; Sun, Z. et al., 2017), photocatalytic decomposition (Dai, D. et al., 2021; Pang, J. et al., 2019), flocculation (Guibal, E. and Roussy, J., 2007), and biodegradation (Kornaros, M. and Lyberatos, G., 2006). Among these methods, adsorption is one of the most efficient and simplest ways to remove dyes from solution. Several different adsorbents have been used, including activated carbon (Ramírez-Aparicio, J. et al., 2021), zeolite (Supelano, G. et al., 2020), montmorillonite (Abdul Mubarak, N. S. et al., 2021) and diatomite (Pang, J. et al., 2019). However, these adsorbents are difficult to recover and treat after adsorption, which challenges their practical application.
Recently, magnetic nanoparticles have been used in the fields of adsorption and catalysis by many scientists (Dai, D. et al., 2021; Liang, H. et al., 2015; Pang, J. et al., 2019). Magnetic nanoparticles adsorb strongly and can be recovered by magnetic methods for reuse. Many magnetic nanoparticles, such as manganese oxide (He, Y. et al., 2018; Islam, M. A. et al., 2019), iron(III) oxide hydrate (Pan, B. et al., 2010), and ferric oxide (Gonawala, K. H. and Mehta, M. J., 2014), have been widely used for this purpose. However, magnetic nanoparticles tend to agglomerate, which decreases their surface area and adsorption efficiency. To solve this problem, nanosized metal oxides have been dispersed on different carrier substances such as bentonite (Belachew, N. and Bekele, G., 2020), zeolite (Li, C. et al., 2015), and diatomite (Bui, V. T. et al., 2021; Pang, J. et al., 2019). Compared with the metal oxides alone, the supported system is more efficient. Since natural diatomite has a high surface area, high porosity and chemical stability, it is widely used as a carrier for catalysts, a building material, and a material for filtering and treating wastewater. The reserves of diatomite in nature are abundant. Diatomite is a suitable carrier for nanosized metal oxides because it prevents the agglomeration of metal oxide particles, allowing these particles to contact water pollutants directly.
In this study, MnFe2O4 magnetic nanoparticles are distributed on the surface of purified diatomite by the hydrothermal method, and their ability to remove methylene blue from water is investigated. Adsorption kinetics, adsorption isotherms, adsorption thermodynamics, and the adsorption mechanism are studied based on the experimental results.
Experiment and methods

2.1. Materials
In this study, diatomite from Phu Yen (RD) is used as the purified carrier. The chemicals used were of analytical grade (China): manganese(II) sulfate monohydrate (MnSO4·H2O), iron(III) sulfate (Fe2(SO4)3), hydrochloric acid, sodium hydroxide, and methylene blue (MB). The structure and general properties of MB are presented in Table 1.
Synthesis of MnFe 2 O 4 /diatomite material
The MnFe2O4/diatomite material was prepared by the co-precipitation method based on the report by Pang et al. (Pang, J. et al., 2019) with some modifications: 2.00 g of Fe2(SO4)3 and 0.85 g of MnSO4·H2O (Fe/Mn ratio of 2/1) were added to an Erlenmeyer flask containing 100 ml of distilled water, then heated and stirred on a magnetic stirrer for 30 minutes at 60 °C. Next, 5.00 g of diatomite was added to the solution and stirred for about 30 minutes to obtain a uniform suspension. Then, 35 ml of 1.0 M NaOH was added dropwise into the flask and stirred for 1.5 hours at 80 °C. Finally, the brown solid was collected by vacuum filtration, washed with distilled water, and dried at 60 °C for 12 hours. To obtain a better material, the product was heated at 400 °C for 1 hour with a heating rate of 10 °C/min (this sample is denoted MnFe2O4-D).
In addition, the MnFe2O4 material was also prepared by the same method without diatomite (this sample is denoted MnFe2O4).
Material characterization methods
The crystal structures of diatomite and the modified diatomite were determined by X-ray diffraction (XRD) on a D8 Advance-Bruker instrument (Germany) operated at 40 kV and 300 mA over the angular range of 1-50°. The BET surface areas of the samples were calculated from the nitrogen adsorption-desorption isotherms at 77 K with a Micromeritics TriStar 3000 (USA). The SEM images of the diatomite and modified diatomite were collected on a Hitachi S-4800 (Japan). The zeta potential of the samples was determined at different pH values with a Horiba SZ-100 (Japan); the pH value was adjusted by adding NaOH or HCl solution.
Survey of the MB adsorption on MnFe2O4/diatomite
The (Pang, J. et al., 2019). In the XRD pattern of MnFe2O4-D in Figure 1c, the peaks at 2θ = 21.9° and 36.1° are the characteristic peaks of diatomite and MnFe2O4. In addition, the latter peaks are not clearly observed, showing that mainly amorphous oxides are formed on the surface (Pang, J. et al., 2019). The XRD patterns (Figure 1) show that the samples have poor crystallinity, many defects, and a high degree of amorphousness, which is likely suitable for adsorbents. Figures 2a and 2b indicate that diatomite has high porosity, with spherical pores and an average pore diameter of 300-400 nm; this is an ideal hard-template material for preparing widely applied 3D materials. Figures 2c and 2d show that many nanoparticles with sizes of several tens of nanometres are distributed on the surface of the diatomite, which prevents nanoparticle agglomeration. The MnFe2O4-D material has higher porosity and a coarser surface; the increased surface area can enhance its ability to remove dyes.
The results of the EDX analysis of diatomite and MnFe2O4-D are shown in Figure 3. The spectrum of the diatomite sample contains the characteristic peaks of the main elements, including C, O, Al, Si and Fe, and some trace elements. In the spectrum of the MnFe2O4-modified RD in Figure 3, besides the above elements, the peaks of Mn and Fe appear with higher intensity. This indicates that the diatomite has been successfully modified by MnFe2O4. This result is in line with the SEM analysis in Figure 2.
The elemental compositions of RD and MnFe2O4-D shown in Table 2 indicate that O, Si, Al, Fe and C are the main elements by weight, with trace elements including S, Cl, Mg and Na. In the MnFe2O4-modified diatomite, the weight percentage of Fe increases from 3.75% (RD) to 22.60% (MnFe2O4-D), and the percentage of Mn is 11.06% (MnFe2O4-D). In MnFe2O4-D, the molar ratio of Fe:Mn is 2:1, the same as the ratio of the precursors. The nitrogen adsorption-desorption isotherms and the pore size distributions of diatomite and MnFe2O4-D are shown in Figure 4 and Figure 5. According to the IUPAC classification, the adsorption-desorption isotherms of RD and MnFe2O4-D are of type IV. At low pressure (P/Po < 0.4), the amount of adsorbed N2 increases slowly with the relative pressure, and the adsorption and desorption curves are almost identical due to monolayer adsorption (Yu, T. T. et al., 2015). In addition, the adsorbed amount increases rapidly at relative pressures P/Po > 0.4. The hysteresis loop is clearly visible in the relative pressure range (P/Po) from 0.45 to 0.95, which is characteristic of a capillary-condensing material. The type of nitrogen adsorption-desorption isotherm and the hysteresis loop indicate that the material has a mesoporous structure with small pores. The pore size distributions of diatomite and MnFe2O4-modified diatomite in Figure 5 show that the pore size of the modified diatomite is from 3 nm to 80 nm, with the main size from 3 nm to 8 nm.
The structural parameters, such as the BET surface area, pore size and pore volume, are shown in Table 3. The BET surface areas of RD and MnFe2O4-D are 57.63 and 107.98 m2/g, respectively. The BJH pore size distribution shows that the average pore diameter of RD is 5.65 nm and that of MnFe2O4-D is 7.33 nm. The surface area of MnFe2O4-D is larger than that of diatomite, so its ability to remove dyes increases. The surface area of MnFe2O4-D in this study is higher than the results reported in other studies, e.g., FM-diatomite: 58.1 m2/g (Son, B. H. D. et al., 2016); FMBO-diatomite: 15.04 m2/g (Chang, F. et al., 2009); MnFe2O4/DE: 85.03 m2/g (Sun, Z. et al., 2017).
3.2. Results of treating MB by MnFe2O4/diatomite

3.2.1. Adsorption kinetics
To better understand the adsorption of the cationic dye, the relationship between reaction time and adsorption capacity for the MB adsorption on MnFe2O4-D, RD, and MnFe2O4 was observed, and the results are presented in Figure 6. The rate of MB removal by the adsorbents increases quickly and reaches equilibrium at about 15 minutes. This fast rate indicates that the interaction between the adsorbent surface and MB is strong. These results also show that the MB adsorption capacity of MnFe2O4-D is higher than that of RD.
Two kinetic models are usually applied to analyse the experimental data and assess the adsorption mechanism: the pseudo-first-order model (2) and the pseudo-second-order model (3):

ln(qe − qt) = ln qe − k1·t (2)

t/qt = 1/(k2·qe²) + t/qe (3)

where qt and qe (mg/g) are the adsorption capacities at time t and at equilibrium, and k1 (min−1) and k2 (g·mg−1·min−1) are the adsorption rate constants of the pseudo-first-order and pseudo-second-order models, respectively. The values of qe and k1 are determined from the linear plot of ln(qe − qt) versus t, and the values of qe and k2 are determined from the linear plot of t/qt versus t.
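As a minimal illustration (with hypothetical kinetic data, not the values measured in this work), the two linearized models can be fitted as follows:

```python
# Illustrative sketch: fitting the linearized pseudo-first- and pseudo-second-order
# models to hypothetical kinetic data (t in min, q_t in mg/g).
import numpy as np

t = np.array([2, 5, 10, 15, 30, 60, 120], dtype=float)
qt = np.array([40.0, 62.0, 80.0, 90.0, 95.0, 97.5, 98.5])
qe_exp = 99.0  # hypothetical experimental equilibrium capacity (mg/g)

# Pseudo-first order: ln(qe - qt) = ln(qe) - k1 * t
k1_slope, ln_qe1 = np.polyfit(t, np.log(qe_exp - qt), 1)
k1, qe1 = -k1_slope, np.exp(ln_qe1)

# Pseudo-second order: t/qt = 1/(k2*qe^2) + t/qe
slope2, intercept2 = np.polyfit(t, t / qt, 1)
qe2 = 1.0 / slope2
k2 = 1.0 / (intercept2 * qe2**2)

print(f"pseudo-first order : k1 = {k1:.3f} 1/min, qe = {qe1:.1f} mg/g")
print(f"pseudo-second order: k2 = {k2:.4f} g/(mg*min), qe = {qe2:.1f} mg/g")
```

The model with the higher R2 on its linear plot (here, typically the pseudo-second-order one) is the one retained, as done in Table 4.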
The linear plots of the kinetic models for the MB adsorption on MnFe2O4-D are shown in Figure 7, and the parameters of the pseudo-first-order and pseudo-second-order kinetic models are shown in Table 4. The correlation coefficients (R2) of both models are high; however, the correlation coefficient of the pseudo-second-order model is higher than that of the pseudo-first-order model. Therefore, this adsorption occurs through the sharing or exchange of electrons between the adsorbent and MB (Sun, Z. et al., 2017). In addition, the maximum MB adsorption capacity of MnFe2O4-D from the kinetic experiments is 99.01 mg/g.
The adsorption isotherm
In this study, two isotherm models are used, the Langmuir isotherm (4) and the Freundlich isotherm (5), to assess and to propose the mechanism of MB removal by MnFe2O4-D:

Ce/qe = 1/(qm·KL) + Ce/qm (4)

ln qe = ln KF + (1/n)·ln Ce (5)

where qm (mg/g) is the maximum adsorption capacity; KL (L/mg) is the Langmuir constant, which relates to the affinity of the binding sites and the adsorption energy; KF (L/g) is the Freundlich constant, which relates to the adsorption capacity; and 1/n is an empirical parameter related to the adsorption intensity, where a value of 1/n < 1 indicates that adsorption occurs easily and that new adsorption sites are formed on the surface. The values of qm and KL are calculated from the linear plot of Ce/qe versus Ce (Figure 8), and the values of KF and n are deduced from the intercept and slope of the linear plot of ln qe versus ln Ce (Figure 8). The ability of the material to remove MB increases when the temperature increases, indicating that the adsorption process is endothermic, in agreement with a previous report (Sun, Z. et al., 2017). The Langmuir and Freundlich parameters of the MB adsorption on the material at different temperatures are shown in Table 5. The correlation coefficients (R2) of the Langmuir model are larger than those of the Freundlich model, so the Langmuir model describes the MB adsorption on MnFe2O4-D better than the Freundlich model. This result indicates that the MB adsorption on the adsorbent is monolayer adsorption. The adsorption capacity is higher at higher temperature, i.e., the process is endothermic. The value of 1/n is less than 1 at the three temperatures examined, which indicates that the MB adsorption on MnFe2O4-D is a surface adsorption process. In addition, the maximum capacity of MB adsorption on MnFe2O4-D is 151.52 mg/g (at 323 K). The MB adsorption capacities of some materials reported in the literature (e.g., Meshko, V. et al., 2001) are compared in Table 6: Fe3O4@APS@AA-co-CA MNPs, 142.9 mg/g (Ge, F. et al., 2012); MnFe2O4/DE-NaOH, 104.06 mg/g (Sun, Z. et al., 2017); and MnFe2O4-D, 151.52 mg/g (this work). The adsorption capacity of MB on MnFe2O4-D in this study is higher than those of other adsorbents (Sun, Z. et al., 2017). Therefore, the MnFe2O4-D material can efficiently remove cationic dyes from water.
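A minimal sketch of the linearized isotherm fits, using hypothetical equilibrium data rather than the measured values, is shown below:

```python
# Illustrative sketch: fitting the linearized Langmuir and Freundlich isotherms
# to hypothetical equilibrium data (Ce in mg/L, qe in mg/g).
import numpy as np

Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([35.0, 62.0, 88.0, 112.0, 130.0, 142.0])

# Langmuir: Ce/qe = 1/(qm*KL) + Ce/qm  ->  slope = 1/qm, intercept = 1/(qm*KL)
slope_L, intercept_L = np.polyfit(Ce, Ce / qe, 1)
qm = 1.0 / slope_L
KL = 1.0 / (intercept_L * qm)

# Freundlich: ln(qe) = ln(KF) + (1/n)*ln(Ce)
slope_F, intercept_F = np.polyfit(np.log(Ce), np.log(qe), 1)
KF, n = np.exp(intercept_F), 1.0 / slope_F

print(f"Langmuir  : qm = {qm:.1f} mg/g, KL = {KL:.3f} L/mg")
print(f"Freundlich: KF = {KF:.1f}, 1/n = {slope_F:.2f} (n = {n:.2f})")
```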
Thermodynamics of adsorption
The thermodynamic parameters of adsorption, namely the standard Gibbs free energy (ΔG°), standard enthalpy (ΔH°), and standard entropy (ΔS°), are calculated by using the following formulas (Sun, Z. et al., 2017):

Kd = qe/Ce

ΔG° = −R·T·ln Kd

ln Kd = ΔS°/R − ΔH°/(R·T)
where ΔG° is the standard free energy change; R is the gas constant, 8.314 J/(mol·K); T is the absolute temperature (K); ΔH° is the standard enthalpy change (kJ/mol); ΔS° is the standard entropy change (J/(mol·K)); Kd is the distribution coefficient; qe is the adsorption capacity at equilibrium (mg/g); and Ce is the concentration of the cationic dye in solution at equilibrium (mg/L). The values of ΔH° and ΔS° are estimated from the intercept and slope of the linear plot of ln Kd versus 1/T (Figure 9).
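For illustration only (with hypothetical Kd values, not the measured data), the van't Hoff analysis described above can be carried out as follows:

```python
# Illustrative sketch: estimating adsorption thermodynamic parameters from
# hypothetical distribution coefficients Kd = qe/Ce at three temperatures.
import numpy as np

R = 8.314  # J/(mol*K)
T = np.array([303.0, 313.0, 323.0])   # K
Kd = np.array([2.1, 3.0, 4.2])        # hypothetical qe/Ce values

# van't Hoff plot: ln(Kd) = dS/R - dH/(R*T)
slope, intercept = np.polyfit(1.0 / T, np.log(Kd), 1)
dH = -slope * R / 1000.0              # kJ/mol
dS = intercept * R                    # J/(mol*K)
dG = -R * T * np.log(Kd) / 1000.0     # kJ/mol at each temperature

print(f"dH = {dH:.1f} kJ/mol (positive -> endothermic)")
print(f"dS = {dS:.1f} J/(mol*K)")
print("dG (kJ/mol):", np.round(dG, 2))
```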
The thermodynamic parameters of MB adsorption are calculated and shown in Table 7. The ΔH° value is positive, so the process is endothermic. The ΔS° values are positive, indicating that the adsorption process increases the disorder at the solid-liquid interface. The ΔG° values are negative, so the process occurs spontaneously. On the other hand, the ΔG° values decrease as the temperature increases, which indicates that MB adsorption on MnFe2O4-D occurs more easily at higher temperature. Besides that, the ΔG° values increase when the initial concentrations increase, in agreement with other reports (Sun, Z. et al., 2017; Tseng, R.-L. and Tseng, S.-K., 2005).
Effect of the initial pH
Since the pH of the solution affects the surface charge of the adsorbent and the degree of ionization of the adsorption sites, the effect of pH was investigated over the range of pH 2.0-10.0. The results in Figure 10 show that the adsorption is maximal in the range of pH 4-10. To understand the adsorption mechanism, the zeta potential of MnFe2O4-D was determined at different pH values. As shown in Figure 11, the zeta potential of MnFe2O4-D is negative over the range of pH 2.0-10.0, and it decreases as the pH increases. This plays an important role in the adsorption of cationic dyes because of the electrostatic attraction between the negative surface of MnFe2O4-D and the cationic dye. When the pH decreases, the number of positively charged sites increases and the number of negatively charged sites decreases; it is the negatively charged sites on the adsorbent surface that interact with the cationic dye. The adsorption is lower in an acidic environment because residual H+ competes with the cationic dye for the adsorption sites (Elmoubarki, R. et al., 2015; Sun, Z. et al., 2017).
Conclusion
The MnFe2O4/diatomite material was prepared by a hydrothermal method. The obtained material can remove the dye MB. The adsorption isotherm of MB on MnFe2O4-D can be described by the Langmuir isotherm model, and the maximum adsorption capacity is 151.52 mg/g. The adsorption capacity of MnFe2O4-D increased because of its large surface area, porous structure, large pore volume, and more negative surface. In addition, the adsorption process is fast and follows the pseudo-second-order kinetic model. The thermodynamic data indicate that the adsorption process is endothermic and occurs spontaneously. Therefore, the obtained MnFe2O4-D material is an adsorbent that can effectively remove cationic dyes from water.
The Effect of TPSq and TPS Strategy on Students’ Speaking Skill Viewed from Students’ Motivation
This research was aimed at investigating the effect of the think pair square and think pair share strategies on students' speaking skill of describing things viewed from students' motivation. The research employed a 2 x 2 factorial design and involved the students of SMK YPM Zain Pauh Kambar, Padang Pariaman. The samples were chosen by using purposive sampling. The data were gained through a speaking test and analysed by using appropriate quantitative methods. The result of the research showed that the think pair square and think pair share strategies were able to improve students' speaking skill, and that there was an interaction between teaching strategy and students' motivation on speaking skill. Hence, it can be concluded that these two strategies can be used as alternative strategies to enhance students' speaking skill and that they can enrich students' experience of other cooperative learning activities.
I. INTRODUCTION
Any student who learns a language must be able to communicate in that language, whether in spoken or written form. Through spoken or written language, students are able to share their ideas, communicate with other people who speak that language, make many friends, and more. Therefore, English teachers should provide many opportunities for the students to communicate in English. For vocational school students, for example, fluency in speaking is required, since vocational school students are being prepared for the professional world. After graduating from vocational school, the students must have good speaking skill to compete with others in order to be employed. This becomes one of the challenges for teachers: helping the students improve their speaking skill. To achieve this goal, vocational schools have developed a communicative approach in their language teaching. Therefore, the teachers should provide class activities in the form of real-life communication.
The researcher focused on grade X of SMK YPM Zain Pauh Kambar. This was because, in that grade, the students were introduced to various kinds of materials in learning English, one of them being describing things. In addition, the materials provided in grade X accounted for a large share (40%) of the national examination. Another reason was that the students' marks on the semester test were very low; only a few of the students achieved the minimum standard score.
These situations led the researcher to investigate the students and also the teacher in that school. Based on the researcher's experience in teaching English at SMK YPM Zain Pauh Kambar, it was found that the students had difficulty speaking or were reluctant to speak. This was caused by several problems, which can stem from internal and external factors of the students. Due to these facts, the researcher would like to create varied experiences that enable students to develop their speaking skills. One of the ways to increase students' speaking skill in the classroom is by applying cooperative learning, which is closely related to the communicative approach.
Cooperative learning enables students to work collaboratively with their friends to exchange information and emphasizes the interaction of the students. This technique is divided into several strategies, such as make a match, think pair share, think pair square, numbered heads together, structured numbered heads, jigsaw, etc. Among these cooperative learning strategies, the researcher focused on the think pair share and think pair square strategies as ways to solve these problems.
In the think pair share strategy, the students have longer time to do their task, listen to their friends, and get involved in a group. The teacher gives the students opportunities to work alone and to work together with others. Simply put, it is designed to shape the students' interaction so that they help each other in groups of two. Based on Handayani (2012, p. 6), there are five steps in the think pair share strategy: orientation, thinking, pairing, sharing, and giving a reward. After all the students have been involved in the discussion process, the teacher gives rewards to the students individually and as a group.
In the think pair square strategy, students are given the opportunity to share their ideas twice. Each pair holds a discussion; if one pair cannot solve the problem or answer the question, the other pair will help. Based on Lie (2007, p. 57), there are six phases in the think pair square strategy, which is a little longer than the think pair share strategy: orientation, thinking, pairing, squaring, sharing, and giving rewards to the students individually and as a group to motivate them to do better in the next meeting.
II. METHOD OF THE RESEARCH

This research was conducted as a quasi-experimental study employing a factorial design. The purpose of this design was to determine whether the effects of the independent variable were generalizable across all levels or were specific to particular levels. It was categorized as quantitative research, which tests a theory of variables, measures by numbers, and analyses by using statistical techniques.

This research was aimed at finding out the significance of think pair share and think pair square in teaching speaking. Therefore, two experimental groups were used by the researcher: first, the group which was taught by using the think pair share strategy, and second, the group which was taught by using the think pair square strategy. Both groups were given a post-test after they received the different treatments to see whether the groups showed differences and whether the treatments were effective.

The population of this research was the grade X students of SMK YPM Zain Pauh Kambar. There were 134 students separated into six parallel classes. The researcher used cluster random sampling to choose the experimental classes; therefore, two groups were involved in this research, class B and class C.

A speaking skill test and a questionnaire were the instruments used in this research to collect the data about the students' speaking skill. The test took the form of an oral performance test (a presentation) for both experimental groups, in which the students were asked to describe a simple object for about three to five minutes each.

Before testing the hypotheses, the researcher needed to check the normality and the homogeneity of the data. The normality of the data was analysed by using the Lilliefors formula, and the homogeneity of the data was checked through the F test. The researcher then analysed the hypotheses to answer the research questions by using two-way ANOVA.

III. FINDINGS

There were four hypotheses in this research to answer the research questions.

A. First Hypothesis
The first hypothesis was tested by using a t-test at the significance level of 0.05; the calculated t value was compared with the t table value. The t table value at the significance level of 0.05 is 2.012, while the calculated t value is 1.28, so Ho is accepted and Ha is rejected. It means that the think pair square strategy produces the same gain as the think pair share strategy on students' speaking skill of describing things.

B. Second Hypothesis
The second hypothesis was tested by using a t-test at the significance level of 0.05. The t table value at the significance level of 0.05 is 2.17, while the calculated t value is 2.27; since the calculated t is larger than the t table value, Ha is accepted. It means that the think pair square strategy produces better gain than the think pair share strategy on students' speaking skill of describing things for high-motivation students.

C. Third Hypothesis
The third hypothesis was tested by using a t-test at the significance level of 0.05, considering the number of students, the mean, and the variance of the two groups. The t table value at the significance level of 0.05 is 2.17, while the calculated t value is 2.19, so Ha is accepted. It means that the think pair share strategy produced better gain than the think pair square strategy on students' speaking skill of describing things for low-motivation students.

D. Fourth Hypothesis
The fourth hypothesis was used to find out the interaction between the think pair share strategy and the think pair square strategy with respect to students' motivation (high and low) on speaking skill. The researcher used a 2x2 ANOVA to analyse this interaction. The ANOVA shows that the calculated F value is 0.36 and the F table value is 7.24; since the calculated F is less than or equal to the F table value, Ho is accepted. It means that there is no interaction between the think pair share strategy and the think pair square strategy with the students' initial achievement on students' speaking skill.
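As a rough illustration of the interaction test (using hypothetical speaking scores, not the study's data), a 2x2 ANOVA of strategy by motivation could be reproduced as follows:

```python
# Illustrative sketch (hypothetical scores): 2x2 ANOVA testing the interaction
# between teaching strategy and motivation on speaking scores.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
rows = []
for strategy in ["think_pair_square", "think_pair_share"]:
    for motivation in ["high", "low"]:
        scores = rng.normal(loc=78, scale=6, size=12)  # hypothetical speaking scores
        rows += [{"strategy": strategy, "motivation": motivation, "score": s}
                 for s in scores]
df = pd.DataFrame(rows)

model = ols("score ~ C(strategy) * C(motivation)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)
print(anova)  # the C(strategy):C(motivation) row gives the interaction F and p-value
```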
IV. DISCUSSION

1. Think pair square strategy produces the same gain as think pair share strategy on students' speaking skill of describing things
The treatment in both classes was carried out with the think pair square and think pair share strategies on students' speaking skill of describing things. The mean scores of the students' speaking skill in the square class and the share class were not significantly different: the mean score of the students in the square class was 79.19, and the mean score of the share class was 75.33. The data indicate that, statistically, both strategies produce the same gain on students' speaking skill. In other words, the square class and the share class did not have significantly different results in any of the computed categories. However, the mean scores suggest that the think pair square class produced a slightly better gain on students' speaking skill of describing things.
The result of hypothesis testing 1 showed that the think pair square strategy gives the same gain as the think pair share strategy on students' speaking skill of describing things. There are several possible explanations for this result. First, the limited time of the treatment: the period of implementation may not have been sufficient to produce the desired effects. It is possible that students in both experimental groups needed more time and training to master the skills required by the think pair square and think pair share strategies. The researcher only explained the rules of think pair share and think pair square to the students right after administering the speaking pre-test. This made the students somewhat confused by the rules, because it was the first time they had encountered these strategies in studying English. This is supported by Slavin (1995), who notes that cooperative learning strategies need a period of training before being implemented in the classroom. The researcher had never applied these two strategies before and only started to implement them in the research right after reading the theories about cooperative learning, especially the think pair square and think pair share strategies. Although the teacher had explained the rules of the think pair square and think pair share strategies in the first meeting, the students were apparently confused by the rules or procedures of the strategies for several meetings.
The other reason for this result was that the students' attendance influenced the outcome of the research. In almost every meeting, some of the students were absent or came late to class. These situations forced the researcher not to divide the groups proportionally according to the rules of the strategies, and they also affected the students' concentration during the discussion.
Another reason for this result is that these two strategies both belong to cooperative learning, especially the informal methods. Some experts mention think pair share and think pair square together, because think pair square is a continuation of the think pair share strategy; the difference appears in the size of the groups and in the square phase of the think pair square strategy. When the students cannot find the answer to the questions in the pair phase, they move to the square phase to gain a deeper understanding or reach the desired answer. Hence, these two strategies are close to each other, which would explain the statistically similar gain.
The researcher would not see this result as a failure; rather, these two strategies have been successful in helping students to learn academically and to support each other. This is supported by Johnson and Holubec (1993, p. 94), who conclude that "the elements of cooperative learning are positive interdependence, face to face interaction, individual and group accountability, and interpersonal and small group skill". These basic elements are structured into group learning situations to ensure cooperative efforts and enable the implementation of cooperative learning in the long term. The benefits of small groups are many; they include reducing learning anxiety, turning students into team players, and involving students in peer tutoring and in building cooperative teams. In classroom discussion, the students try to help their friends fulfil the task given by the teacher. This makes the students feel comfortable responding to the teacher, and they do not need to worry that other friends will look down on them, because they already have ideas in mind.
This finding is also closely related to Michiel (2008, p. 146), who stated that the use of small group work and pair work activities can turn learning into a positive experience because both small group work and pair work increase students' talking time, allow students to mimic real English conversation, and create a more secure and positive classroom atmosphere.
However, even though these two strategies produce the same gain on students' speaking skill of describing things, the result also shows that they are equally effective strategies for students' speaking skill, not equally ineffective ones. In conclusion, both think pair square and think pair share are equally able to make the students more motivated to speak up. This is supported by Zakaria (2012, p. 14), who states that cooperative learning is a great technique for allowing the students to work together, not only on speaking skill but also on the other skills such as listening, reading and writing.
2. Think pair square strategy produces the same gain as think pair share strategy on students' speaking skill of describing things for lower half students
It has been mentioned previously that think pair square produces the same gain as think pair share on students' speaking skill of describing things. Lower achievement students actually benefit academically from working in pairs or groups through practice and communication; the students work together and help one another, which can be achieved through the think pair share and think pair square strategies. For example, the procedures of think pair square themselves can raise the students' willingness to speak up to help their friends in solving the problem. Besides, in think pair share, the students also have the chance to contribute to a meaningful discussion based on the material given to them.
For lower achievement students, the advantages of the think pair square and think pair share strategies include a higher possibility of engaging individual learning styles, an increased chance to be active in language use, and exposure to a greater variety of language. Lower achievement students often benefit academically from working in pairs and groups through practice and communication when they help one another and work together. This can be achieved through the composition of the pairs and groups, since they consist of both higher and lower achievers. Sert (2005, p. 10) found a variety of advantages of student collaboration in preparing written work, since the outputs are far more grammatical, include fewer spelling mistakes, and indicate a higher level of grammatical awareness. Pellowe (1996, p. 3) claimed that students can work properly in pairs and groups: in order to complete a task assigned by the teacher, students in pairs or groups must negotiate to complete the task, find the correct words and determine the best way to finish it. When students have the freedom to negotiate the meaning and the form of what they are saying, it will lead them to the specific areas of their language that need development. Generally, Schul (2012, p. 2) claims that both high and low achievers in cooperative learning treatments usually perform better and have more positive attitudes toward grouping than toward working individually. He adds that cooperative learning formats such as think pair share and think pair square can succeed in ensuring that all group members' activities are focused on explaining concepts to one another, helping one another to practise, and encouraging one another to achieve. Thus, the effect of cooperative learning on students' speaking skill would be wider; there is also evidence that group processing activities can enhance the effect of cooperative learning.
3. Think pair share strategy produces the same gain as think pair square strategy on students' speaking skill of describing things for upper half students
According to the description of the data above, it could be concluded that the think pair share strategy produces the same gain as the think pair square strategy on students' speaking skill of describing things for upper half students. Referring to this finding, the Association for Educational Communication and Technology (2001) declared that high achieving students benefit from the cognitive restructuring that occurs when providing in-depth explanations to peers. Moreover, cooperative learning works best in groups of two to five students (Ulmer and Cramer, 2005). Here, the students must be accountable not only to the group but also for their own learning. Positive interdependence can be developed through individual and group accountability. For cooperative learning to be effective, it is important that students work together to learn the material but are tested individually; this is the art of creating positive interdependence. To complete the process, the teacher recorded the group's average test score as an individual student's grade. Accountability is created in two ways: first, all team members must do well for themselves, and second, they must do well for the group. It becomes the common goal of the group for everyone to do well on the assessment in order to obtain a higher overall grade.
Generally, Brophy (2000, p. 18) claimed that students often benefit from working in pairs or small groups to construct understandings or to help one another master skills. Moreover, he explained that there is often much to be gained by arranging for students to collaborate in pairs or small groups as they work on activities and assignments. Cooperative learning promotes affective and social benefits such as increased student interest in and valuing of the subject matter, and increases in positive attitudes and social interactions among students who differ in gender, race, ethnicity, achievement levels and other characteristics. Cooperative learning also creates the potential for cognitive and metacognitive benefits by engaging students in discourse that requires them to make their task-related information-processing and problem-solving strategies explicit (and thus available for discussion and reflection).
4. There is no interaction between think pair square strategy and think pair share strategy with students' initial achievement on students' speaking skill of describing things
Based on the hypothesis testing, the speaking skill of describing things of students who were taught by using the think pair square and think pair share strategies was not influenced by their initial achievement. It means that the improvement of upper half and lower half students taught with the think pair square and think pair share strategies did not depend on their initial achievement. In short, the dependent variable was not affected purely by the independent variable, nor by the moderator variable. Solomon (2009, p. 73) stated that the level of students' initial achievement when using cooperative learning will contribute positively to learning outcomes for both lower and upper achievers. Similarly, Slavin (2009, p. 38) explained that the interaction between the students and their learning progress happens simultaneously in developing students' learning achievement. This is because in both pairing and squaring, responsibility is counted both individually and in groups. It means that high and low achievers will support each other to learn together and to obtain the best result for their group.
As a whole, the effect of cooperative learning on students' speaking ability is positive. In other words, students at any level of initial achievement who use cooperative learning strategies such as think pair share and think pair square will gain positively in their speaking skill, whether they are high or low initial achievers. Thus, the high and the low achievers will support each other to learn together and obtain the best result for their group.
V. CONCLUSIONS
The researcher made every effort to conduct the research as well as possible. However, the researcher realized that there are several limitations of the research, such as: 1. This research was only conducted on the speaking skill of describing things at grade X of SMK YPM Zain Pauh Kambar; therefore, the results cannot be generalized to other learning materials or other genres of text. 2. There was also limited time for teaching with think pair square and think pair share in the classroom. Both the teacher and the students were new to both strategies, so neither the students nor the teacher fully mastered applying these strategies in the classroom. In addition, the absences of the students influenced the discussion groups in every meeting and affected the result of the speaking test.
VI. SUGGESTION
In accordance with the conclusion and implication above, the researcher provides the following suggestions: 1. The students are suggested to apply the think pair square and think pair share strategies to other suitable materials to reduce their anxiety, become good team members, and build cooperative teams. The study showed that both the think pair square and think pair share strategies are equally powerful in improving students' speaking skill of describing things. 2. The teacher should plan the activities for the teaching and learning process carefully when using the think pair square and think pair share strategies. 3. Other researchers who are interested in applying the think pair square and think pair share strategies are suggested to conduct similar research with wider samples in order to obtain larger empirical data and knowledge. Besides, they are also suggested to consider this research finding, because these two strategies still need adjustment to other factors such as other skills and other materials.
Ridge Penalization-based weighting approach for Eco-Efficiency assessment: The case in the food industry in the United States
Eco-efficiency assessment is of great importance for monitoring and managing the environmental and economic aspects of sustainable development. Eco-efficiency indicators are required to assess and measure the impact of multiple environmental aspects per unit of economic value added. The aggregation of multiple environmental impacts in the presence of high correlation is a critical challenge for sustainability practitioners. This study presents a weighting approach using ridge penalization-based regression to overcome the consequences of high correlation among the environmental aspects and hence provide accurate weighting values. The performance of the proposed approach is assessed using the economic and environmental footprints of 20 food industries in the United States. The new weighting approach is expected to provide decision-makers with a quantitative management tool for monitoring and controlling core operational functions associated with sustainable development and management.
Introduction
The eco-efficiency assessment is widely recognized as a powerful management tool for managing environmental sustainability aspects and enhancing the opportunities of the well-being of future generations [1]- [6]. The alignment with the sustainable development goals of the United Nations has recently become the focus of governmental and business organizations at both national and international levels [7]. The eco-efficiency assessment with high dimensional space of environmental impacts imposes a critical challenge to sustainability practitioners in specifying the weight of each environmental indicator to the eco-efficiency value [8], [9].
Several weighting techniques have been proposed and examined in the literature, including, but not limited to, linear programming [10], [11], Principal Component Analysis (PCA), Data Envelopment Analysis (DEA), Factor Analysis (FA) [12]- [14], and Regression Analysis [15]. Equal weighting (EW) is the most common among the existing methods [16]- [18]. Despite its distinctive mathematical and operational advantages, this method has been extensively criticized for failing to account for double-counting when multiple indicators measure the same behavior [19].
The extension of statistical methods to the sustainability assessment context has received increasing attention over recent years; see, for instance, [20], [21]. PCA, DEA, and FA are widely recognized for their ability to accommodate a high-dimensional space of sustainability indicators. Moreover, these methods are independent of subjective opinions [4]. PCA is mainly based on the development of Principal Components (PCs) as linear combinations of the corresponding sustainability indicators; their associated weights are then used to complete the aggregation step and obtain a single value representing the overall environmental impact of these indicators [22]. Collinearity among two or more of the sustainability indicators describes the extent of linear correlation between the variables [23] and is therefore critical to the outcome of several existing weighting methods. PCA is used extensively in the literature due to its capability of effectively handling collinearity among the sustainability indicators [3], [24], [25]. Despite these merits, PCA is limited in the interpretability of its dimension-reduction results.
The PCs are linear combinations of all original indicators. A large number of independent variables can result in numerous significant coefficients in the first few PCs, which makes these PCs difficult to interpret [23], [26], [27]. Moreover, although PCA is preferred for not relying on subjective and arbitrary opinions, it can be criticized for ignoring the relationship between the independent and dependent variables, especially when used as a weighting method. Including this relationship would provide a second criterion, in addition to the variation of the data matrix, with which to precisely quantify the individual weight of each sustainability indicator.
PCA assigns a large amount of variance to the PC associated with the largest-scale indicator, which results in undesirable skewness in the outcome. Normalization is a well-known step for overcoming this issue. A difficulty that may result from normalizing the data matrix is that the number of PCs increases, leading to further difficulties in interpreting the results.
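For reference, the PCA-based weighting criticized above can be sketched in a few lines of Python. The specific weighting rule shown here (absolute loadings aggregated over the PCs, weighted by explained variance) is one common variant and an illustrative assumption, as is the synthetic data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical indicator matrix: rows = industries, columns = environmental indicators.
rng = np.random.default_rng(0)
X = rng.lognormal(mean=2.0, sigma=1.0, size=(20, 6))

# Standardize so the indicator with the largest raw scale does not dominate the PCs.
X_std = StandardScaler().fit_transform(X)

# Extract principal components and their explained-variance ratios.
pca = PCA().fit(X_std)

# One common PCA weighting rule: absolute loadings aggregated over the PCs,
# each PC weighted by its share of explained variance.
loadings = np.abs(pca.components_)                    # shape (n_pcs, n_indicators)
var_share = pca.explained_variance_ratio_[:, None]    # shape (n_pcs, 1)
raw_weights = (loadings * var_share).sum(axis=0)

# Normalize so the indicator weights sum to one.
weights = raw_weights / raw_weights.sum()
print(weights)
```

Note that no response variable appears anywhere in this procedure, which is precisely the limitation raised above.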
In accordance with the above, this paper presents a systematic methodology for eco-efficiency assessment using ridge penalized regression to overcome the multicollinearity among the sustainability indicators. Ridge penalized regression is widely recognized in statistics for its effectiveness in limiting the effect of multicollinearity on the accuracy and stability of the regression model. This study uses a dataset that represents the environmental impact of 20 food industries in the United States.
Input-Output (I-O) Model
The single-region industry-by-industry I-O model is used here based on the Eora database, which is connected with the UN's System of National Accounts and COMTRADE databases [21], [28]. In this study, the domestic supply and use tables (SUTs) of the U.S. economy were combined with several sustainability indicators. Then, the I-O model is used to quantify the economic (value-added) and environmental impacts of 15 food consumption industries in the US; see Table 1 (e.g., S15-WCM, Wet corn milling).
Ridge Penalization-based Regression
Multiple regression analysis has been widely recognized in the literature as an effective tool to overcome collinearity; see, for instance, [29]- [32]. Collinearity refers to the extent to which the indicators are linearly correlated with each other [33]. Multiple regression estimates the weights or relative importance based on the extent to which each of the sustainability indicators significantly contributes to explaining the variability around the response variable. Penalization-based regression, in particular, has received notable attention as a weighting method; see [34], [35]. This paper uses a ridge penalization-based regression as a weighting method to overcome the multicollinearity phenomenon among the sustainability indicators. The error term $\varepsilon_i$ in the generalized linear relationship between the response variable ($y$) and the predictor variables ($x$), shown in Eq. (1), is assumed to follow a normal random distribution:

$$y_i = \beta_0 + \sum_{j=1}^{p} \beta_j x_{ij} + \varepsilon_i, \qquad \varepsilon_i \sim N(0, \sigma^2) \qquad (1)$$
where $\beta_j$ is the coefficient estimate associated with the $j$-th indicator and $p$ is the number of indicators. The ridge-penalized regression is commonly formulated as a minimization problem of the squared errors, where the problem is solved using the Ordinary Least Squares (OLS), Weighted Least Squares (WLS), or Maximum Likelihood Estimation (MLE) methods. The OLS and WLS are easier in practice than the MLE, and the choice between them depends on the practitioner. The ridge-based OLS formulation is as follows:

$$\hat{\boldsymbol{\beta}} = \arg\min_{\boldsymbol{\beta}} \sum_{i=1}^{n} \Big( y_i - \beta_0 - \sum_{j=1}^{p} \beta_j x_{ij} \Big)^2 + \lambda \sum_{j=1}^{p} \beta_j^2 \qquad (2)$$

where $\hat{\boldsymbol{\beta}}$ is a vector that contains the estimated values of the regression coefficients, $n$ is the number of observations, and $\lambda$ is the tuning or shrinkage parameter, whose value is usually specified by K-fold cross-validation. Several packages available in the CRAN library, as well as software such as SPSS and Solver-Excel, can be used to solve the ridge-based OLS formulation shown in (2). Figure 1 outlines the proposed methodology. This study uses six sustainability indicators related to the food and beverage industry in the U.S. These are: (1) CO2 (Kt), (2) CO (Kt), (3) HFC-143a (Kt), (4) PM10 (Kt), (5) N2O (Kt), and (6) SO2 (Kt). The sustainability impacts of these indicators were estimated by using the Eora database-based economic input-output framework developed by [27] with the latest high-resolution I-O tables of the U.S. economy; see also [36]. The household consumption (HC) under each food industrial category was calculated and used as the response variable. Figure 2 illustrates the distribution of the highest three impacts under each of the sustainability indicators.
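A minimal Python sketch of the ridge step in Eq. (2), with the shrinkage parameter chosen by K-fold cross-validation as described above, is shown below. The indicator matrix and response are synthetic placeholders, not the Eora-derived footprints and household consumption used in this study.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.preprocessing import StandardScaler

# Hypothetical data: 20 industries x 6 environmental indicators,
# with household consumption as the response variable.
rng = np.random.default_rng(1)
X = rng.lognormal(mean=2.0, sigma=1.0, size=(20, 6))
y = X @ np.array([0.5, 0.1, 0.3, 0.05, 0.2, 0.6]) + rng.normal(scale=0.5, size=20)

# Standardize predictors so the ridge penalty treats all indicators comparably.
X_std = StandardScaler().fit_transform(X)

# Ridge regression with the shrinkage parameter (lambda, called alpha in
# scikit-learn) selected by 5-fold cross-validation over a candidate grid.
alphas = np.logspace(-3, 1, 50)
model = RidgeCV(alphas=alphas, cv=5).fit(X_std, y)

print("selected lambda:", model.alpha_)
print("ridge coefficients:", model.coef_)
```

The cross-validated grid search plays the same role as the trace-plot/MSE procedure described later: it trades a small amount of bias for stable coefficient estimates under multicollinearity.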
Measuring Collinearity
This section is dedicated to measuring the level of collinearity among the sustainability indicators. The coefficient of determination ($R^2$) is the most widely used measure of collinearity. The normalization step is omitted here because the selected indicators share the same units. In this study, $R^2$ explains the percentage of variation in one of the sustainability indicators that is predictable from the other indicators. The magnitude of the measure is bounded between 0 and 1, where a value of 0 indicates a very poor linear relationship and a value of 1 a very strong linear relationship.
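The pairwise collinearity check described above can be reproduced in a few lines of Python; squaring the pairwise Pearson correlations gives the $R^2$ of one indicator regressed on another. The indicator matrix here is a synthetic placeholder rather than the study's data.

```python
import numpy as np

# Hypothetical indicator matrix: rows = industries, columns = indicators
# (same units, so no normalization step is applied).
rng = np.random.default_rng(2)
X = rng.lognormal(mean=2.0, sigma=1.0, size=(20, 6))

# Pairwise Pearson correlations between indicators; squaring gives pairwise R^2.
corr = np.corrcoef(X, rowvar=False)   # shape (6, 6)
r_squared = corr ** 2

# Values close to 1 flag strongly collinear indicator pairs.
print(np.round(r_squared, 3))
```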
Figure 2. Distribution of the highest three-environmental sustainability impacts
The pairwise correlations among all the potential pairs of the sustainability indicators can be seen in Table 2.
Weighting sustainability indicators
To initiate the ridge-penalized regression, we used the trace-plotting method, proposed by [25], to determine the optimal value of λ. This method has been widely used in different research contexts; see, for instance, [22], [26]. Initially, the ridge regression coefficients are plotted over a wide range of λ. Secondly, we define the range of λ that exhibits better stability of the fitted regression coefficients. Finally, we select a single value of λ using a suitable selection criterion; here, the Mean Squared Error (MSE) is used as the criterion for selecting the optimal λ. Figure 3 shows the distribution of the ridge regression coefficients over a wide range of the tuning parameter λ. From Figure 3, one can easily notice the stability of the regression coefficients, namely CO2, SO2, HFC-143a, and N2O, around the optimal λ. The best stability is achieved when λ lies between 0.014 and 1.4. The MSE at several values of λ has been estimated, and the optimal value of λ is found to be 0.090 (MSE = 0.00092). Table 3 shows the results of the analysis of variance (ANOVA) of the collected sample. In this study, the p-value (1.53E-08) is below the 0.05 significance level (95% confidence), confirming the significance of the regression ($R^2$ = 0.995). Table 4 summarizes the output of the ridge regression analysis. The results show that SO2 has the largest positive impact on the regression model, while PM10 has the smallest impact. The decision regarding the inclusion of negative values in the aggregation step has been debated by several researchers [37]. In this study, we replace the individual weight of each sustainability indicator with its associated relative weight (RW). The RW represents the importance of a specific indicator relative to the other indicators and is found as the absolute value of the individual weight divided by the sum of the absolute values of all weights. Table 5 (Weight Calculation) reports the calculations of the weighting step.
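The relative-weight step reduces to a simple normalization of the absolute ridge coefficients; a sketch is given below with placeholder coefficient values rather than the figures reported in Table 4.

```python
import numpy as np

# Placeholder ridge coefficients for the six indicators (not the study's estimates).
names = ["CO2", "CO", "HFC-143a", "PM10", "N2O", "SO2"]
coefs = np.array([0.42, -0.11, 0.23, 0.04, -0.18, 0.55])

# Relative weight: |coefficient| divided by the sum of absolute coefficients,
# so the weights are non-negative and sum to one.
rw = np.abs(coefs) / np.abs(coefs).sum()

for name, w in zip(names, rw):
    print(f"{name}: {w:.3f}")
```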
Eco-Efficiency score calculation
The eco-efficiency is often calculated as the ratio between the economic value-added and the aggregation of the weighted impacts of the environmental indicators. Using the RW values reported in Table 4, we calculated the eco-efficiency scores for all the industrial categories and reported these scores in Figure 4. In Figure 4, S2-BCM has the highest eco-efficiency score, while S1-BSM has the lowest. Several of the food and beverage industries have scored highly, such as S14-TM, S11-SFM, and S5-DCFM. The eco-efficiency score calculated here is a "higher the better" performance measure, which makes direct comparison between the eco-efficiency performances of the industrial sectors difficult. Therefore, in this paper, we use the average eco-efficiency score as a threshold between the "Below-Average," "On-Average," and "Above-Average" performance categories and assign each industry to a category based on its location with respect to the threshold value (5.26); see Table 6 (Eco-Efficiency categories). The results in Table 6 show that 53.34% of the food industries are classified as "Above-Average" performers, while the rest are "Below-Average." None of the industrial categories is classified as "On-Average."
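The scoring and categorization step is a weighted aggregation of the environmental impacts followed by a threshold comparison; a minimal sketch with placeholder values is shown below.

```python
import numpy as np

# Placeholder data: economic value-added and environmental impacts per industry.
value_added = np.array([120.0, 300.0, 95.0])          # one entry per industry
impacts = np.array([[10.0, 2.0, 1.5],                 # rows = industries,
                    [ 4.0, 0.5, 0.8],                 # columns = indicators
                    [ 9.0, 3.0, 2.5]])
rw = np.array([0.5, 0.2, 0.3])                        # relative weights (sum to 1)

# Eco-efficiency = value-added / weighted aggregate environmental impact.
eco_eff = value_added / (impacts @ rw)

# Categorize each industry against the average score used as the threshold.
threshold = eco_eff.mean()
categories = np.where(eco_eff > threshold, "Above-Average",
              np.where(eco_eff < threshold, "Below-Average", "On-Average"))
print(list(zip(np.round(eco_eff, 2), categories)))
```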
Conclusion and Remarks
This research work introduced a penalization-based approach for estimating the weights of sustainability indicators. Here, the importance of penalization in reducing the impact of multicollinearity among the sustainability indicators during the aggregation step is emphasized. The results have shown that more than 50% of the food industries in the US are performing well in terms of eco-efficiency. The BCM has the best eco-efficiency performance, while the BSM has the worst compared with the other industries. In terms of individual eco-efficiency performance, however, all the food and beverage industries have scored a value greater than 1.
For future research, variable selection methods, such as stepwise regression, can be used to identify the most significant indicators to be included in the weighting process [38]- [40]. The authors also suggest extending adaptive LASSO-based thresholding to enhance the estimation of the variance-covariance matrix of the PCA method; this new approach would later be used for developing a composite indicator for eco-efficiency assessment (further details of the adaptive LASSO can be found in [41], [42]). The authors also suggest the use of hybrid life cycle sustainability assessment methods [40]- [55], ecological footprint analysis [56]- [58], and economic input-output analysis [59], combined with other decision-making models such as fuzzy multi-criteria decision making [60], forecasting [61], agent-based modelling [62], [63], and system dynamics modelling [64]- [68], considering the Triple Bottom Line (TBL) approach. Finally, multivariate regression is another suggested approach [69]- [70] to complete the aggregation step and develop a single composite indicator for sustainability assessment, ruling out the difficulty of finding an appropriate response variable.

Table 6. Eco-Efficiency categories
Above-Average: S2-BCM, S4-CTM, S5-DCFM, S10-SP, S11-SFM, S12-SDIM, S14-TM
Below-Average: S1-BSM, S3-CM, S6-FAO, S7-FMM, S8-FFM, S9-PP, S13-SMR, S15-WCM
|
2020-10-28T19:12:57.726Z
|
2020-10-08T00:00:00.000
|
{
"year": 2020,
"sha1": "ea78805fe706d6e0e06e0168b4f80fc4ae4563dc",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/947/1/012003",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "2b4f2c8ecb556a2abe50794d230d146ec70e268d",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Business"
]
}
|
207462024
|
pes2o/s2orc
|
v3-fos-license
|
Balloon dilatation of a benign biliary stricture through a T‑tube tract
Percutaneous cholangioplasty is a commonly performed procedure for both benign and malignant diseases. The most common route for accessing the biliary tree is transhepatic, following ultrasound or fluoroscopic‑guided percutaneous puncture. There are situations when alternative routes can be utilized to access the common bile duct (CBD). We accessed the CBD via a T‑tube placed surgically in a 57‑year‑old man who had obstructive jaundice of obscure etiology, which was likely inflammatory.
Introduction
A T-tube is commonly placed following biliary surgery. Cholangiograms through this route are frequently performed to assess the biliary tree prior to removing the T-tube and discharging the patients from hospital. However, utilization of T-tube for percutaneous cholangioplasty has been described in a few case reports only. [1,2]
Case Report
A 57-year-old male presented with acute epigastric pain and decreased urine output. The pain was mild-to-moderate in severity, intermittent, with occasional vomiting. There was no radiation of pain, and the patient had little relief with analgesics. The patient had decreased urine output over the past 24 h. He was a chronic smoker. There was no past history of diabetes mellitus, hypertension, or tuberculosis. There was no past surgical history. On examination, there was mild pallor. Mild tenderness was elicited in the right upper quadrant. Hemogram revealed a low platelet count (14,000/µl). There was mild elevation of liver enzymes [aspartate transaminase (AST) 87 IU/l, alanine transaminase (ALT) 95 IU/l, alkaline phosphatase 198 IU/l] and serum bilirubin (1.5 mg/dl). Renal function tests were within normal range.
There was worsening of the abdominal pain over the ensuing few days associated with progressive elevation of conjugated bilirubin (7.5 mg/dl). Endoscopic retrograde cholangiography (ERCP) was performed. There was blood ooze seen at the papilla. However, cholangiogram did not reveal any filling defects. A naso-biliary drainage (NBD) tube was placed, which resulted in decline of conjugated bilirubin to 4.4 mg/dl. Cholangiogram was performed through the NBD after 1 week, which revealed filling defects within the GB and CBD suggestive of blood clots.
A vascular cause [pseudoaneurysm or arteriovenous (AV) malformation] was suspected, and computed tomography (CT) angiography (SOMATOM® Definition Flash, Siemens, Erlangen, Germany) and digital subtraction angiography (Allura Xper FD 10, Best, Netherlands) were performed. The vascular causes were ruled out. Finally, an exploratory laparotomy was performed. Cholecystectomy and CBD exploration were performed. No calculus or mass was detected within the CBD. Intraoperative cholangioscopy, however, revealed unhealthy mucosa at the confluence, which was biopsied. A T-tube was left in situ to achieve continuous drainage. Subsequently, there was no hemobilia. Histopathologic examination of the tissue revealed only inflammation. There was improvement in the patient's condition with falling levels of serum bilirubin. However, 4 weeks later, there was a mild rise in the serum bilirubin as well as alkaline phosphatase.
A T-tube cholangiogram was performed. There were filling defects within the proximal CBD with no opacification of IHBR, suggestive of an inflammatory stricture at or just beyond the confluence [Figure 2]. Cholangioplasty was planned through the T-tube tract under conscious sedation. A 0.035″ guidewire was passed into the left ductal system across the stricture at the confluence. After removal of the T-tube, a 7F 35 cm sheath was inserted over the wire and advanced into the biliary tree. Balloon dilatation was performed using a 10 mm diameter by 40 mm long angioplasty balloon (Advance®, ATB PTA dilatation catheter; Cook® Medical, Bloomington, IN, USA), inflated for 10 min. We also did cholangioplasty of the lower end of the CBD, as filling defects were seen in the proximal CBD on the T-tube cholangiogram suggesting distal stasis. Repeat balloon dilatation of the hilar stricture was done after 3 weeks. A temporary external drainage tube [12F percutaneous transhepatic biliary drainage catheter (PTBD catheter)] was placed with its tip just across the stricture, after each session of cholangioplasty. Cholangiography via the biliary catheter following the procedure revealed good opacification of the entire biliary tree [Figure 3]. The external biliary drainage catheter was finally removed 3 weeks after the second session of cholangioplasty. The procedure was successful, with no complications. The patient was followed with serial evaluation of serum bilirubin, alkaline phosphatase, and ultrasound examinations. At the last follow-up, i.e. 7 months after the procedure, the patient was asymptomatic.
Discussion
Various treatment options for benign biliary strictures include endoscopic sphincterotomy, percutaneous cholangioplasty/stenting, and surgical biliary-enteric diversion. [2] Percutaneous sphincterotomy is performed for peri-ampullary stricture. [3] Surgical diversion is best reserved for patients with anastomotic, extrahepatic, extra-pancreatic biliary stricture. [4] Thus, in the majority of patients, interventional radiology techniques in the form of percutaneous cholangioplasty/stenting are employed. However, a successful and long-term favorable outcome involves consideration of several important issues. Restenosis rates are high, as these patients have a long life expectancy. The cause of stricture (viz. traumatic, post-transplant, post-infective, etc.), site of stricture (viz. hilar, proximal or distal CBD, peri-ampullary), presence of any pre-existing access (viz. T-tube, PTBD, Hutson Russell loop), and length of stricture are all expected to have a bearing on the results of cholangioplasty. [5] Cholangioplasty involves dilatation of the stricture using a transluminal angioplasty balloon (balloon cholangioplasty/biliary balloon dilatation). The traditional route for this procedure is transhepatic. [6] It involves access to the intrahepatic biliary radicles through percutaneous puncture, either under ultrasound guidance (when the ducts are dilated) or fluoroscopic guidance after obtaining a cholangiographic picture following a central puncture (when there is minimal ductal dilatation). Subsequently, the stricture is crossed using a guidewire. Two options follow: (a) placing an internal-external drain catheter and upgrading the catheter every 3 weeks to a larger caliber to dilate the tract, or (b) balloon dilatation of the stenotic segment every 3 weeks, upsizing the balloon every session. The former approach leads to reduced quality of life, and the external catheter is prone to infections.
Alternative access routes have been rarely described. [7] When repeated dilatations of biliary stenoses are indicated and endoscopic access is not available, a transjejunal approach has been described. In these cases, the dilatation is performed through the fixed limb (either afferent or efferent) of a Roux-en-Y hepaticojejunostomy. In this context, it is helpful to remember that the efferent limb of the Roux loop is attached to the peritoneum in the right anterior location and the afferent limb is fixed anteriorly in the central upper abdomen. Under fluoroscopic or CT guidance, the selected loop is punctured using a thin needle. Subsequently, the biliary tree is catheterized in a retrograde fashion and bilioplasty is performed. Fontein, et al. [7] described their experience in 494 patients who were planned for cholangioplasty through the Roux loop. In 86% of the interventions, the Roux loop was successfully accessed.
Another potentially useful, yet rarely described, route for balloon dilatation is the T-tube. A T-tube is left in place following surgery for benign or malignant disease. In cholelithiasis with suspected choledocholithiasis, following CBD exploration, the T-tube is left in place to facilitate endoscopic treatment through the mature tract. T-tube placement is also performed in a difficult biliary-enteric anastomosis and in extensive invasion of the distal bile duct. The basic technique of biliary balloon dilatation is similar to the conventional percutaneous transhepatic route. A cholangiogram is obtained through the T-tube. Following detailed evaluation of the biliary anatomy, cholangioplasty is performed. Kim, et al. [1] reported three cases of biliary stent placement through the T-tube. The cause of biliary obstruction in these cases was a malignant disease process in the CBD or peri-ampullary region. The procedure was successful in all three cases and follow-up did not show recurrence of jaundice.
Conclusion
In conclusion, unconventional access routes for cholangioplasty constitute a viable treatment option for benign biliary strictures.
Figure 1: Thick slab magnetic resonance cholangiopancreatography (MRCP) image reveals mild dilatation of the entire common bile duct (CBD, arrow) with smooth distal tapering (short arrow). No filling defects are seen within the CBD. The main pancreatic duct is of normal calibre (arrowhead)
Figure 3: Cholangiogram performed through percutaneous transhepatic biliary drainage catheter (PTBD catheter, arrow) following balloon dilatation of the stricture reveals good opacification of the entire biliary tree
|
2018-04-03T04:18:50.151Z
|
2015-01-01T00:00:00.000
|
{
"year": 2015,
"sha1": "b18f92de9535cc689a54c6420ba2b75233ee753d",
"oa_license": "CCBYNCSA",
"oa_url": "https://doi.org/10.4103/0971-3026.150133",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3a6705715804d2b0df5b43d318dc8e9cc9c014ae",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
10145054
|
pes2o/s2orc
|
v3-fos-license
|
Human endometrial regenerative cells alleviate carbon tetrachloride-induced acute liver injury in mice
Background The endometrial regenerative cell (ERC) is a novel type of adult mesenchymal stem cell isolated from menstrual blood. Previous studies demonstrated that ERCs possess unique immunoregulatory properties in vitro and in vivo, as well as the ability to differentiate into functional hepatocyte-like cells. For these reasons, the present study was undertaken to explore the effects of ERCs on carbon tetrachloride (CCl4)–induced acute liver injury (ALI). Methods An ALI model in C57BL/6 mice was induced by administration of intraperitoneal injection of CCl4. Transplanted ERCs were intravenously injected (1 million/mouse) into mice 30 min after ALI induction. Liver function, pathological and immunohistological changes, cell tracking, immune cell populations and cytokine profiles were assessed 24 h after the CCl4 induction. Results ERC treatment effectively decreased the CCl4-induced elevation of serum alanine aminotransferase (ALT) and aspartate aminotransferase (AST) activities and improved hepatic histopathological abnormalities compared to the untreated ALI group. Immunohistochemical staining showed that over-expression of lymphocyte antigen 6 complex, locus G (Ly6G) was markedly inhibited, whereas expression of proliferating cell nuclear antigen (PCNA) was increased after ERC treatment. Furthermore, the frequency of CD4+ and CD8+ T cell populations in the spleen was significantly down-regulated, while the percentage of splenic CD4+CD25+FOXP3+ regulatory T cells (Tregs) was obviously up-regulated after ERC treatment. Moreover, splenic dendritic cells in ERC-treated mice exhibited dramatically decreased MHC-II expression. Cell tracking studies showed that transplanted PKH26-labeled ERCs engrafted to lung, spleen and injured liver. Compared to untreated controls, mice treated with ERCs had lower levels of IL-1β, IL-6, and TNF-α but higher level of IL-10 in both serum and liver. Conclusions Human ERCs protect the liver from acute injury in mice through hepatocyte proliferation promotion, as well as through anti-inflammatory and immunoregulatory effects.
alternative strategies for the treatment of decompensated liver diseases are required.
Recent development in stem cell-based therapeutic strategies have already garnered extensive attention and been introduced to regenerative medicine for hepatic diseases [3][4][5]. It has been demonstrated that infused mesenchymal stem cells (MSCs) engrafting in the liver facilitate the recovery from chemical-induced acute liver damage [6]. Moreover, MSCs possess the characteristics of immunomodulatory, anti-inflammatory and hypoimmunogenicity, and the potential of differentiating into hepatocyte-like cells. Also, MSCs can promote tissue repair by means of suppressing the local immune reaction, attenuating fibrosis and apoptosis, enhancing angiogenesis and stimulating mitosis and differentiation of tissue-intrinsic reparative cells and stem cells [7,8]. Currently, bone marrow mesenchymal stem cells (BM-MSCs) have become the focal point for cell therapy in liver regeneration [9,10]. However, BM-MSCs have low yield, invasive operation and decreased cell numbers that are dependent on donor age [11]. Consequently, it is imperative to identify alternative sources of stem cells with better safety and efficacy profiles.
In 2007, Meng et al. discovered a novel type of adult stem cells derived from human menstrual blood, named endometrial regenerative cells (ERCs). These cells possess a self-renewing, highly proliferative potential as well as a differentiation capacity towards diverse cell lineages in appropriate induction media, thereby overcoming the shortcomings of other conventional stem cell sources and the fear of karyotypic abnormalities during culture [12]. Furthermore, ERCs have proven to be an excellent cell source in the treatment of several experimental disease models, such as critical limb ischemia [13], ulcerative colitis [14], burn injury [15], renal ischemia reperfusion injury [16] and other dysfunctional diseases [17][18][19]. Moreover, it has been verified that these human cells were not rejected in a xenogeneic animal model [13]. ERCs are more readily available and non-invasive than other adult stem cells, making them a promising donor source for stem cell therapy. Recently, ERCs were found to be capable of differentiating into functional hepatocyte-like cells in vitro [20]. However, whether ERCs could simultaneously suppress inflammatory and immune responses and repair tissue damage following ALI remain obscure. Thus, the aim of this study was to explore the potential role of ERCs in alleviation of carbon tetrachloride (CCl 4 )-induced ALI.
Isolation and Culture of ERCs
ERCs were collected from the menstrual blood of healthy female volunteer donors (20-40 years old) using a urine cup after menstrual blood flow initiated. As previously described [12], mononuclear cells were obtained by standard Ficoll method. ERCs were then expanded from the purified mononuclear cells, which were allowed to attach in the endometrial stem cell culture medium (S-Evans Biosciences, China) overnight at 37 °C in 5 % CO 2 . Non-adherent cells were removed by washing with phosphate-buffered saline (PBS), while adherent cells were cultured until they reached 80-90 % confluence. Cells were trypsinized, sub-cultured and used for experiments during passages 4-7.
Animals
Healthy male C57BL/6 mice (Aoyide Co., Tianjin,China) weighing 18-20 g and aged 6-8 weeks were housed under conventional experimental environment with 12-h light-dark cycle in the Animal Care Facility, Tianjin General Surgery Institute. The mice had free access to commercial standard mouse diet and water until the time of the study. All experiments were conducted in accordance with the protocols following the Animal Care and Use Committee of Tianjin Medical University (China) according to the Chinese Council on Animal Care guidelines.
Experimental groups
The preparation of animal model was done as previously described [21,22]. In brief, 18 mice were randomly assigned to the following three groups (n = 6). (1) Normal control group: mice first receiving intraperitoneal (i.p.) injection of corn oil were then injected with 200 μl PBS intravenously 30 min later. (2) Untreated group: mice first receiving i.p. injection of a single dose of CCl 4 (Sigma-aldrich, St Louis, United States) for induction of acute liver injury were injected 200 μl PBS intravenously 30 min later. (3) ERC-treated group: mice first receiving i.p. injection of CCl 4 were injected intravenously with 1 × 10 6 ERCs at passage 4 resuspended in 200 μl of PBS 30 min later [23]. Mice were sacrificed 24 h after injection of CCl 4 , and blood was collected. Livers and spleens were then promptly removed for analysis or stored frozen at −80 °C.
Measurement of ALT and AST
Serum alanine aminotransferase (ALT) and aspartate aminotransferase (AST) activities were measured by standard spectrophotometric procedures using a Chemi-Lab ALT and AST assay kit (IVDLab Co., Ltd., Korea), respectively. Enzyme activities were shown in international unit per liter (IU/L).
Histological examination
Liver slices were made from part of the left lobes and fixed in 10 % neutral buffered formalin, embedded in paraffin and cut into 5 μm sections. Specimens were dewaxed, hydrated and stained with standard hematoxylin and eosin (H&E) to examine morphology.
Immunohistochemistry staining
Immunohistochemistry was performed with PCNA and Ly6G antibody as described previously [24]. Briefly, the paraffin specimens were cut into 5 μm, followed by deparaffinization and rehydration. Endogenous peroxides were eliminated with 3 % H 2 O 2 , and antigen retrieval was processed by heating in microwave. Then, the sections were blocked with 5 % bovine serum albumin (BSA) and incubated with anti-mouse PCNA and Ly6G (Abcam, Cambridge, MA) antibodies overnight at 4 °C, respectively. Secondary labeling was achieved by goat antirabbit IgG and rabbit anti-rat IgG polyclonal antibody, separately. Horseradish peroxidase-conjugated avidin and brown-colored diaminobenzidine were used to visualize the labeling. Finally, the slides were counterstained with hematoxylin. All of stained sections were photographed using an Olympus inverted microscope (Olympus Imaging America, Center Valley, PA).
Enzyme-linked immunosorbent assay
The levels of IL-1β, IL-6, TNF-α,and IL-10 in the serum and liver samples taken from mice 24 h after CCl 4 challenge were measured by ELISA kit (eBiosciences, San Diego, CA, USA) according to the manufacturer's instructions. ELISA was performed in duplicate for each sample. The preparation of liver homogenate was done as previously described [25]. In short, frozen liver tissues were homogenized in a protein extraction solution (PRO-PREP; Intron biotechnology, Sungnam, Korea), incubated for 30 min on ice and then centrifuged at 13,000 rpm (4 °C) for 10 min.
Labeling of ERCs with PKH26 and in vivo tracking
For in vivo tracking of administered ERCs, cells were isolated and labeled with PKH26 Red Fluorescent Cell Linker Kits (Sigma-Aldrich, St Louis, USA), according to the manufacturer's instructions. Prepared PKH26-labeled ERCs at a final cell concentration of 1 × 10^7 cells/ml were then injected via the tail vein 30 min after ALI induction. Mice were sacrificed 24 h later, and the liver, lung, kidney and spleen were removed and frozen at −20 °C. Fluorescence microscopy was performed to analyze the 4 µm cryosections and identify the ERCs.
Statistical analysis
All the experimental data were presented as mean ± standard error of the mean (SEM). The results were statistically analyzed by ANOVA test utilizing SPSS version 17.0 software (SPSS Inc., Chicago, USA). p < 0.05 was considered statistically significant.
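For illustration only, the group comparison described above (a one-way ANOVA across the normal control, untreated and ERC-treated groups) can be reproduced in Python; the ALT values below are hypothetical, and the actual analysis in this study was performed in SPSS.

```python
from scipy import stats

# Hypothetical serum ALT values (IU/L) for the three groups (n = 6 each).
normal_control = [35, 40, 38, 42, 37, 39]
untreated_ali  = [820, 910, 780, 860, 940, 805]
erc_treated    = [310, 290, 350, 270, 330, 300]

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(normal_control, untreated_ali, erc_treated)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")  # p < 0.05 indicates a significant group effect
```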
ERC treatment improved liver function after ALI
The serum levels of ALT and AST in the untreated group were markedly increased after CCl 4 injection compared with the normal control group (p < 0.01; Fig. 1a, b). In contrast, both serum levels of ALT and AST were significantly decreased by ERC treatment (p < 0.01), even though they were still higher than those of normal control group (Fig. 1a, b, p < 0.05). In addition, we also found that serum ALT and AST rapidly elevated to peak level 24 h after CCl 4 treatment, then decreased thereafter, while ERC treatment significantly inhibited the elevation of serum ALT and AST from 24 to 120 h (Data not shown). Taken together, these data suggested that ERCs improve liver function in mice with ALI.
ERCs ameliorated the histopathological damage of liver tissue after acute injury
As shown in Fig. 2, in the untreated ALI group, the liver became inflamed, turned yellowish-white, and increased in volume at 24 h after CCl 4 injection (Fig. 2a), suggesting that CCl 4 had induced severe liver cell injury. Notably, the changes observed in the ERC-treated livers were indistinguishable from those in the normal control group (Fig. 2a). In addition, degeneration of liver structure and pathological changes of hepatic parenchymal cells were observed in CCl 4 -induced mice, characterized by hepatocyte necrosis, shrinkage of nuclei, and infiltration of inflammatory cells in the portal area (Fig. 2b). In contrast, all these abnormal changes were alleviated at the same time point after ERC treatment (Fig. 2b). The gross findings and the pathological changes of the liver tissue in ERC-treated group were similar to those of normal controls (Fig. 2a, b). Meanwhile, in consistent with the results of liver function study, we found that the pathological changes of the liver tissue in untreated group at 48, 72, and 120 h time points could be reversed by ERC infusion (Data not shown). Overall, these results indicated that ERC treatment could effectively protect the liver from CCl 4 -induced acute liver damage.
ERC infusion promoted hepatocyte proliferation in mice with ALI
To determine whether ERCs have a role in accelerating hepatocyte proliferation after ALI, PCNA expression was detected in the liver tissue by immunohistochemistry. As shown in Fig. 3, compared to the untreated group, ERC infusion dramatically increased the number of PCNA positive-staining cells at 24 h after CCl 4 induction, with a great number of PCNA positive hepatocytes surrounding the portal area. The changes are indistinguishable from those in the normal control group. This finding demonstrated that treatment with ERCs may markedly promote liver cell proliferation after CCl 4 -induced ALI.
ERC treatment inhibited neutrophil infiltration in the liver after ALI
To characterize whether ERC treatment could prevent inflammatory cell infiltration following ALI, we performed immunohistochemical staining to evaluate the recruitment of neutrophils in liver tissue. As shown in Fig. 3, ERC treatment notably reduced the number of Ly6G positive cells in the liver compared to those without ERC administration. These results suggested that ERCs significantly reduced neutrophil infiltration in the liver caused by acute liver damage.
ERC treatment attenuated ALI by regulating cytokine expression
To determine whether ERC treatment could affect cytokine profiles, the levels of local and systemic inflammatory cytokines were analyzed and compared among different groups. As shown in Fig. 4, compared with the normal control group, the untreated group exhibited significantly higher levels of pro-inflammatory cytokines (IL-1β, IL-6 and TNF-α) in both the liver (p < 0.01) and the serum (p < 0.01). In contrast, these cytokine levels were markedly reduced in ERC-treated group (p < 0.01, versus untreated group). On the other hand, the level of anti-inflammatory cytokine IL-10 was notably elevated after ERC treatment (p < 0.01, versus untreated group). Taken together, these data suggested that treatment with ERCs not only suppress the level of pro-inflammatory cytokines, but also enhance the level of anti-inflammatory cytokine in CCl 4 -induced ALI mice.
ERC treatment decreased the percentage of CD11c + MHCII + cells in the spleen after ALI
Recently Zhang et al. and Nauta et al. demonstrated that MSCs derived from different human tissues reduced the expression of presentation molecules (MHC class II) and co-stimulatory molecules (CD80 and CD86) on mature dendritic cells (DCs) [26,27]. In this regard, we investigated whether ERC treatment could affect the population of antigen-presenting cells in ALI mice. As shown in Fig. 5, the frequency of MHCII positive DCs in the spleen was significantly lower in ERC-treated mice compared to untreated mice (p < 0.01), which suggested that ERC treatment could reduce the population of mature DCs.
ERCs influenced the populations of CD3 + CD4 + and CD3 + CD8 + T cells in ALI mice
To study the relationship between the changes of T cell population and ERC-mediated liver protection, we employed flow cytometric analysis to detect the levels of CD3 + CD4 + and CD3 + CD8 + T cells in both spleen and liver. As indicated in Fig. 6, the percentages of CD3 + CD4 + and CD3 + CD8 + T cells in the spleen were dramatically decreased as compared with those of untreated mice (Fig. 6, p < 0.01). However, no difference was observed in the percentages of CD3 + CD4 + and CD3 + CD8 + in the liver in all groups (data not shown).
ERC treatment upregulated splenic Treg population in ALI mice
To further investigate the immunomodulatory function of ERCs in the attenuation of ALI, we measured and compared splenic Treg population among different groups. The percentage of Tregs in the untreated group was much lower than that of the normal control group (Fig. 7, p < 0.01). In contrast, the percentage of CD4 + CD25 + Foxp3 + Treg population was significantly increased by ERC treatment in ALI mice (Fig. 7, p < 0.01, versus untreated group and normal control group), demonstrating that ERCs have hepato-protective effects in CCl 4 -induced acute liver injury through upregulation of Treg population in mice.
Tracking in vivo engraftment of ERCs
To investigate whether PKH26-labeled ERCs are capable of engrafting the CCl4-injured liver, animals were sacrificed 24 h after CCl4 induction. As shown in Fig. 8, PKH26-positive ERCs were detected by fluorescence microscopy in the liver (injured tissue) and the spleen (lymphoid organ) of ERC-treated mice. Moreover, the labeled ERCs were also mainly found in the lung, but not in other normal organs, such as the kidney.
Discussion
Liver failure can be caused by acute severe or chronic persistent liver injury, while effective treatments are still scarce. Considering the current clinical state, developing an alternative therapeutic strategy to reduce damage, prevent progression, and restore liver function is warranted. Several reports have described the safety and promising beneficial effects of MSCs in the treatment of acute liver injury [6,28]. However, the value of ERCs, a novel type of MSCs obtained from menstrual blood, in ALI has not been studied. Compared with MSCs from other sources, ERCs have several additional outstanding merits, such as (1) abundant availability, (2) an easy and non-invasive acquisition and separation method, (3) a higher proliferative rate, (4) relatively unlimited expandability without karyotypic or functional abnormality, and (5) broader multi-lineage differentiation capacities [29]. In this study, we observed that ERC therapy is an effective strategy for the alleviation of ALI. We mainly focused on investigating the therapeutic potential of ERCs related to anti-inflammation, immunomodulation, and promotion of hepatocyte proliferation, as well as their engraftment after ERC infusion.

(See figure on previous page.) Fig. 3 ERCs promoted hepatic cell proliferation and suppressed inflammatory cell infiltration after CCl4 injury. Immunohistochemical staining for PCNA and Ly6G was carried out as previously described. PCNA and Ly6G staining of liver sections in mice with or without ERC administration was performed 24 h after CCl4 treatment. The untreated group showed relatively few PCNA+ hepatocytes and abundant Ly6G+ cells in centrilobular areas. Sections from the ERC-treated group exhibited numerous PCNA+ hepatocytes surrounding the edge of hepatocellular necrosis and fewer inflammatory cells accumulating in liver tissues. The numbers of PCNA+ and Ly6G+ cells in the liver sections were measured. At least six 12 mm² tissue sections were counted for each mouse. Values represent mean ± SEM (##p < 0.01 versus the normal control group; *p < 0.05, **p < 0.01 versus the untreated group; n = 6). (Magnification 100×)
In the present study, we took the advantage of the mouse ALI model to mimic clinical liver dysfunction for evaluating the efficacy of ERC treatment. The mice exposed to CCl 4 showed significant increase of ALT and AST, which were reduced by ERCs from an early phase of liver injury. Furthermore, livers of the untreated group became inflamed, turned yellowish-white, and increased in volume at 24 h after CCl 4 injection, suggesting that CCl 4 had induced severe liver cell injury. Notably, the changes of gross findings observed in ERC-treated livers were indistinguishable from those in the normal control group. In accordance with this finding, the histopathological results demonstrated that ERC administration prominently alleviated cytoplasmic vacuolization, necrosis and infiltration of inflammatory cells. Furthermore, to clarify if the similar beneficial effects of ERC injection be seen longer term, the effects of ERC infusion at different time points have also been studied. The results of biochemical assays and histological examination showed that similar beneficial effects could still be observed 2, 3 and 5 days after ALI induction. Meanwhile, ERCs could still provide a similar benefit when infused 2 h after ALI as 30 min after induction. Taken together, ERCs exhibited liver protective effects on this model of liver damage.
Accumulating evidences indicate that hepatocyte proliferation in stem cell therapy is closely related to increased expression of endogenous and exogenous trophic molecules, including growth factors, transforming growth factor, vascular endothelial growth factor and so on [30]. Similar mechanisms have been reported in acute kidney failure and stroke models [31,32]. In addition, some in vitro studies also proved that ERCs could differentiate into functional hepatocyte-like cells [13,33]. To determine the effect of ERCs on liver cell proliferation, we performed PCNA immunohistochemistry. It was found that the population of PCNA positive cells was significantly higher in the ERC-treated group than that of the untreated group, demonstrating that ERCs could promote hepatocyte proliferation.
Neutrophils, a type of phagocytic cell, are potent immune regulators which play an important role in the inflammatory response [34]. Neutrophils have been implicated in several liver injury models such as alcoholic hepatitis [35], ischemia/reperfusion injury of the liver [36], and concanavalin A-induced liver injury [37]. In vivo studies have also exhibited that pathological changes in ALI are significantly improved in neutrophil-depleted mice [36,38]. Our study demonstrated that ERCs could significantly reduce the numbers of Ly6G-positive cells in the liver compared to that of the untreated group. Therefore, we speculated that ERC treatment contributed to alleviating hepatocellular damage against CCl 4 -induced ALI by suppressing inflammatory cell infiltration. Local down-regulation of pro-inflammatory cytokines and up-regulation of anti-inflammatory cytokines after MSC transplantation have been described in kidney, lung and liver injury models [31,39,40]. To address whether ERCs share the similar attributes in amelioration of liver damage partially through regulating cytokine profiles in the ALI model, we measured the local and serum levels of cytokines. Our data showed that treatment with ERCs dramatically reduced the levels of pro-inflammatory cytokines (IL-6, IL-1β, TNF-α) and increased IL-10, and anti-inflammatory cytokine, compared to those of untreated mice. It has been known that the three acute-phase proteins, IL-1β, IL-6, and TNF-α, are tightly associated with inflammation and cell proliferation and viewed as biomarkers that reflect inflammatory conditions [41]. IL-1β has been previously shown to hamper hepatocyte proliferation [42,43]. Both IL-1β and IL-1R-deficient mice were not sensitive to inflammatory conditions at the acute phase [44]. IL-6 and TNF-α have also been identified as attractive targets for initiation and progression of liver regeneration. Increasing evidence has shown that IL-6 hyperstimulation is more likely to cause liver injury [45,46]. In another study, it was found that ischemia-induced renal damage was ameliorated in IL-6 knockout mice [47]. TNF-α, produced by Kupffer cells (macrophages in liver), acts as a pro-inflammatory mediator in liver apoptosis closely related with cytotoxicity induced by CCl 4 [48,49]. Okajima et al. discovered that pretreatment with anti-rat TNF-α antibody could significantly inhibit hepatic I/R [50]. In the current study, the levels of all these three cytokines were reduced by ERC treatment, indicating that ERCs may directly inhibit the pro-inflammatory cytokine secretion to exert liver protective effects.
Meanwhile, previous studies reported that MSCs could secrete IL-10 directly and promote the production of IL-10 by other antigen-presenting cells to exert anti-inflammatory and immunomodulatory effects [51,52]. It was claimed that IL-10 has a protective function in the liver injury animal model [53]. IL-10 negatively regulates liver regeneration by suppressing production of pro-inflammatory cytokines and inhibiting macrophage and neutrophil recruitment in hepatocytes [54]. The liver protective effect was abolished in IL-10-deficient mice and administration of recombinant IL-10 rescued these mice from chemical-induced hepatitis [25,51]. In the present study, the levels of IL-10 in the liver and serum were elevated by ERC treatment, suggesting that ERCs may protect the mice from ALI by up-regulating IL-10 both locally and systematically.
Previous studies demonstrated MSCs preferentially integrated into injured liver and enhanced hepatocyte regeneration when infused into CCl 4 injured mice [28,55,56]. Similarly, we transplanted xenogeneic PHK26-ERCs via intravenous injection and found that the transplanted human ERCs quickly migrated into the liver lobules in mice and could be visualized as scattered individual cells 24 h after CCl 4 administration. Additionally, the level of PCNA positive cells was significantly enhanced after ERC infusion, implying that human ERCs can migrate into the liver and promote liver regeneration in this ALI model. This notion is supported by previous studies that therapeutic effects of ERCs were observed despite utilization of human cells in an immuno-competent xenogeneic animal [13]. Thus, we speculated that ERCs may contribute to hepatocyte proliferation within this damaged environment. In the current study we have also confirmed that ERCs mainly accumulate in the lungs within 24 h after intravenous infusion. This is in accordance with earlier findings that the exogenous fluorescently labelled MSCs remained viable in the lungs up to 24 h after injection [57]. Notably, more fluorescently marked cells were also found in the spleen. Accordingly, the populations of immune cells in spleen were studied to explore the relationship between ERCs and systemic immune reaction.
Dendritic cells (DCs) are the principal antigen-presenting cells in lymphoid organs and periphery including the liver, and are key mediators for the initiation and regulation of both innate and adaptive immune responses [58,59]. It has been reported that DCs exhibit fibrolytic properties, and the depletion of CD11c + cells in the CCl 4 -induced liver fibrosis model led to slower fibrosis regression and reduced clearance of activated hepatic stellate cells. Conversely, DC expansion induced either by Flt3L (fms-like tyrosine kinase-3 ligand) or adoptive transfer of purified DCs accelerates liver fibrosis regression [60]. In the current study, we evaluated the number of splenic DCs distant from the liver, and observed that the elevation of CD11c + MHC-II + DC population after CCl 4 challenge was significantly reduced by ERC treatment. This is consistent with the finding that MSCs are capable of inhibiting the differentiation of monocytes into DCs [26,27], suggesting that ERCs probably exert immunomodulatory effects on DCs to control the development of ALI.
T lymphocyte subsets, including CD4+ and CD8+ T cells, play an important role in the pathogenesis of liver disease [61,62]. However, evidence on the effect of CD4+ and CD8+ T cells on CCl4-induced acute hepatotoxicity in mice remains scarce and even controversial. According to previous studies, antigen-specific CD8+ T cells migrate to the contact site upon re-exposure to the chemicals and cause tissue damage through the release of cytokines and cytolytic molecules [63][64][65][66]. Researchers used an anti-CD8 monoclonal antibody to neutralize CD8 T cells and demonstrated that depletion of CD8 T cells protected mice from Amodiaquine-induced liver injury [67]. Results from other experiments confirmed that CD4+ T cell depletion was capable of ameliorating the extent of injury, with less neutrophil infiltration, after I/R liver damage; however, liver damage was reproduced upon adoptive transfer of CD4+ lymphocytes to CD4 knockout mice [68,69]. In our study, as compared with the untreated ALI group, the ERC treatment group experienced a significant reduction in CD4+ and CD8+ T cells, indicating that ERCs may inhibit T cell accumulation. The findings suggested that ERCs may have regulatory functions on the cell populations of splenic CD4+ and CD8+ T cells. Similar results were also found in animal models with renal I/R injury and ulcerative colitis [15,20]. Meanwhile, this study also proved that, like MSCs, ERCs possess immunomodulatory properties which could suppress the activation and proliferation of T cells [70].
Tregs are believed to play a critical role in the suppression of both innate and adaptive immune responses [71], and are also an important factor in the attenuation of liver injury [72][73][74]. CD4+CD25+ Tregs account for 5-10 % of the CD4+ T cell panel in healthy humans and mice, which is sufficient to maintain immune homeostasis and limit autoimmune disease [75]. The role of Tregs in ALI has been confirmed in several studies using PC61, an anti-CD25 monoclonal antibody that depletes Tregs before liver damage, to verify the protective effect of Tregs. It was found that mice suffering from Treg depletion experienced an aggravation of ALI compared to ALI mice that did not have Treg depletion [25]. In another study, the protective effects of Tregs on ALI were confirmed via the adoptive transfer method [76]. Similarly, our results demonstrated that CD4+CD25+Foxp3+ Tregs were significantly decreased in the untreated ALI mice compared to the normal control mice, and significantly elevated in ERC-treated mice, indicating that ERC treatment mitigated CCl4-induced acute hepatotoxicity in mice by increasing the population of Tregs. Overall, we speculate that transplanted PKH26-labeled ERCs engraft to the spleen in mice with ALI and interact with immune cells, leading to the downregulation of the splenic CD11c+MHC-II+ DC, CD4+ and CD8+ T cell populations, as well as the upregulation of the Treg population. In the meantime, since ERC supernatant could still exert similar beneficial effects on ALI as compared to the effects achieved by cell infusion (data not shown), ERC treatment may also attenuate ALI by releasing immunomodulatory cytokines. Experiments to better understand the mechanisms of ERC-mediated immunomodulation in this ALI model are underway.
Conclusions
In conclusion, our study demonstrated that human ERCs are effective in treating CCl 4 -induced ALI. ERCs improved liver function and attenuated pathological changes by promoting liver cell regeneration, modulating cytokine profiles, and regulating immune cells. However, further studies are still needed to elucidate the complex pathways underlying ERC-mediated liver protective effects at molecular levels. Taken together, these findings may provide a rationale for the use of ERCs in clinical settings.
|
2017-08-03T02:04:19.487Z
|
2016-10-22T00:00:00.000
|
{
"year": 2016,
"sha1": "9b8bd62125f34e0bfa8d4d1132b82abc36f0f41a",
"oa_license": "CCBY",
"oa_url": "https://translational-medicine.biomedcentral.com/track/pdf/10.1186/s12967-016-1051-1",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9b8bd62125f34e0bfa8d4d1132b82abc36f0f41a",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
59562365
|
pes2o/s2orc
|
v3-fos-license
|
Translation Technique Analysis of Mandarin Compound Sentence in Novel Huózhe 《活着》
The purpose of this research was to describe the translation techniques used between the novel Huózhe 《活着》, written by Yúhuá (余华), and the novel to Live (Hidup) by Agustinus Wibowo. The collected data were Mandarin compound sentences taken from the novel Huózhe 《活着》 and their translations from the novel to Live (Hidup). This research used a qualitative descriptive method supported by field work. The data-collection techniques included content analysis, focus group discussion, and a questionnaire. The researchers also applied triangulation, used Spradley's theory to analyze the data, and used Molina & Albir's theory to classify the translation techniques. The study identified the translation techniques used by the translator in the translated novel; only Mandarin compound sentences and their translations in the target text were analyzed. Based on the results, sixteen translation techniques were found, i.e. established equivalence 831 (71.39%) data, variation 93 (7.99%) data, transposition 61 (5.24%) data, modulation 52 (4.47%) data, natural borrowing 46 (3.95%) data, explicitation 26 (2.23%) data, implicitation 20 (1.72%) data, discursive creation 9 (0.77%) data, addition 7 (0.60%) data, paraphrase 5 (0.43%) data, reduction 4 (0.34%) data, annotation 3 (0.26%) data, literal 3 (0.26%) data, generalization 2 (0.17%) data, adaptation 1 (0.09%) data, and particularization 1 (0.09%) data. Established equivalence is the dominant technique used to translate Mandarin compound sentences in the novel Huózhe 《活着》; it is used because the translator can keep the messages of the various terms in the Mandarin compound sentences. The techniques applied to translate Mandarin compound sentences in the novel Huózhe 《活着》 can serve as a reference for translating Mandarin compound sentences into Indonesian.
Introduction
Translation techniques can help to solve problems in the translation sector. Molina and Albir (2002:509) concluded that a translation technique is a procedure for classifying how an equivalence of translation is applied to any lingual term. This research is also intended to serve translation needs, using a novel as its research object. The Chinese novel which served as the Source Text or ST (Mandarin) is Huózhe (活着) by Yúhuá (余华), and the novel to Live (Hidup) by Agustinus Wibowo served as the Target Text or TT (Indonesian). The researchers used Molina and Albir's theory to determine the translation techniques. Goh and Boh (2014) analyzed the translation strategies of Chéngyǔ (成语) into Malay. They concluded that there are three factors in applying a translation strategy for Chéngyǔ (成语): firstly, the kind of Chéngyǔ (成语) that appears in the source text; secondly, the context where the Chéngyǔ (成语) appears; and thirdly, the similarities and differences of thought between the source language and the target language. Zainudin and Awal (2011) examined the teaching of translation techniques in a translation classroom from the cooperative learning perspective. They used a methodology called Cooperative Work Procedure and found that cooperative learning is suitable for use in a translation class. They concluded that the students did not enjoy doing translation in big groups (3-4) but preferred smaller groups (2-3), because the students' learning style was very individualistic. The students found that working in groups encouraged discussion and the exchange of ideas. Akhiroh (2013) found that some translation techniques in media translation have a positive effect on translation quality, while others do not. She found that the deletion technique is the most widely used and affects the quality of the translation, particularly its accuracy. The number of deletions committed by the translator led to a less accurate translation result. She stated that the translation techniques used by journalist translators are widely influenced by the characteristics of media translation. Thahara (2015) examined the translation techniques of similes in a novel using Molina & Albir's theory. He found that the most dominant translation technique was the literal technique, in which the simile expression in the target text does not change. He also stated that this technique is not appropriate if the image of the simile has an equivalent simile expression in the target language; in that case, he suggested that the adaptation technique is the best choice. Thahara also described each translation technique.
a. Adaptation
Adaptation is a translation technique that replaces an ST cultural element with one from the target culture. It can be applied if the cultural element has an equivalent in the target language.
ST : As white as snow
TT : Seputih kapas

b. Amplification
Amplification is a translation technique that introduces details not formulated in the ST: information or explicative paraphrasing. The addition merely gives information to the reader; it does not change the message and meaning of the source language.
ST : There are many Indonesian at the ship.
The word "Indonesian" is translated into "warga negara Indonesia".It means that the translator wants to give detail information to the readers.c.Borrowing Take a word, expression, or term straight from another language.It can be pure borrowing (without any change), e.g., to use the English word "zig-zag" in a Indonesian text or naturalized borrowing (to fit the spelling rules in the TT), e.g.deflasi, inflasi, musik.
f. Description
To replace a term or expression with a description of its form and/or function. This technique gives an explanation of a term or of its function in the ST.
ST : I like panettone.
g. Discursive Creation
To establish a temporary equivalence that is totally unpredictable out of context. This technique is commonly used for the titles of films or novels. For example, "nod the head" is translated into "ya!" in Indonesian.
ST : A farewell to arms.
TT : Pertempuran penghabisan.

h. Established Equivalence
To use a term or expression recognized (by dictionaries or by language in use) as an equivalent in the TT.
ST : Sincerely yours
TT : Hormat kami

i. Generalization
Generalization is a translation technique that replaces complicated terms of the ST with more general or neutral terms in the TT. It is used when there is no specific term in the target language.
ST : He drank a half tumbler of cognac.
TT : Dia minum setengah gelas bir.

j. Linguistic Amplification
To add linguistic elements in the TT. This is often used in consecutive interpreting and dubbing.
ST : Everything is up to you!
TT : Semuanya terserah anda sendiri!

k. Linguistic Compression
To synthesize linguistic elements in the TT. This is often used in simultaneous interpreting and in subtitling. This technique shortens the TT without changing the meaning of the ST.
ST : Are you sleepy?
TT : Ngantuk?

l. Literal Translation
To translate a word or an expression word for word. In this technique, a single word of the ST does not always translate into a single word of the TT.
ST : To kill two birds with one stone.
TT : Membunuh dua burung dengan satu batu.

m. Modulation
To change the point of view, focus, or cognitive category in relation to the ST; it can be lexical or structural.
ST : Get moving or I'll be doing the firing.
TT : Keluar atau kau akan kupecat.

n. Particularization
To use a more precise or concrete term. It is in opposition to generalization.
ST : He likes playing water sport.
TT : Dia suka bermain olahraga dayung.

o. Reduction
To suppress an ST information item in the TT. It is in opposition to amplification.
ST : She got a car accident.
TT : Dia mengalami kecelakaan.

p. Substitution
To change linguistic elements for paralinguistic elements (intonation, gestures) or vice versa.
q. Transposition
Transposition is a translation technique that changes a grammatical category.
ST : I have no control over this condition.
r. Variation
To change linguistic or paralinguistic elements (intonation, gestures) that affect aspects of linguistic variation: changes of textual tone, style, social dialect, geographical dialect, etc.
ST : Give it to me now!
TT : Berikan barang itu ke gue sekarang!
The novel Huózhe 《活着》 itself has been intensively studied by Gabriella & Ovianti (2010), who intended to describe the way of thinking and the meaning of life of the main character (Fugui). They described the character Fugui in the novel Huózhe 《活着》 in direct and indirect ways: the direct way involves self-portrait, psychology, words, and actions, while the indirect way involves the words and actions of other characters.

The character Fugui in the novel Huózhe 《活着》 is responsible, struggles for his life, and is broad-minded.
It is clear that previous research on the translation of Mandarin compound sentences still needs to be extended. Today, the Mandarin-Indonesian translation field needs much more research in order to solve translation problems, especially concerning Mandarin compound sentences and their translation techniques. This research is expected to serve as a reference for translators dealing with the translation of Mandarin compound sentences, and to support the Mandarin-Indonesian translation field, which is still developing, especially with regard to translation techniques.
Method
This research belongs to the translation field and uses a descriptive qualitative method. It also employs field work, in which the authors were directly involved in the field to collect data from the raters and the informants.
The setting of this research is the novel Huózhe 《活着》 by Yúhuá (余华) and the novel To Live (Hidup) by Agustinus Wibowo. The participants are the author, the rater, and the target readers. The event is the set of sentences collected from the novel To Live (Hidup) as the translation data of the novel Huózhe 《活着》.
The data of this research are the Mandarin compound sentences of the novel Huózhe 《活着》 and the Indonesian sentences of the novel To Live (Hidup), which serve as the translation data. The researchers also obtained additional data through a questionnaire given to the informants.
The data collection techniques included content analysis, focus group discussion, and a questionnaire. The authors also applied triangulation and used Spradley's theory to study and analyze the data. The research objective is to identify which translation techniques are used when dealing with Mandarin compound sentences.
Results
A Mandarin compound sentence is a sentence formed by two or more clauses (Sun, 2003). Sun described twelve kinds of Mandarin compound sentence. Here, the Mandarin compound sentence served as the domain, and only ten kinds were found in the data. One example found in this research is the Bìngliè Fùjù (coordinate compound sentence), rendered in the target text as: "Saat aku sepuluh tahun lebih muda dari sekarang, aku mendapatkan satu pekerjaan yang teramat santai, yaitu mengumpulkan lagu rakyat di pedalaman desa." (When I was ten years younger than I am now, I got a leisurely job and went to the villages to collect folk songs.) The results show that established equivalence is the most frequently used translation technique in the target text.
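For readers who want to verify the distribution, the short Python sketch below (not part of the original study) recomputes the percentage share of each technique from the raw frequency counts reported in the abstract above.

```python
# Minimal sketch: recompute the percentage share of each translation
# technique from the raw frequency counts reported in this study.
counts = {
    "established equivalence": 831, "variation": 93, "transposition": 61,
    "modulation": 52, "natural borrowing": 46, "explicitation": 26,
    "implicitation": 20, "discursive creation": 9, "addition": 7,
    "paraphrase": 5, "reduction": 4, "annotation": 3, "literal": 3,
    "generalization": 2, "adaptation": 1, "particularization": 1,
}

total = sum(counts.values())  # 1164 data points in total
for technique, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    share = 100 * n / total
    print(f"{technique:25s} {n:4d} ({share:5.2f}%)")
# The dominant technique, established equivalence, covers 831/1164 = 71.39%.
```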
Conclusion
The data are the results of the analysis carried out together with the rater and the target readers. Throughout this research, both the rater and the target readers were able to understand and comprehend the translation from the source text to the target text; this depends on the translation techniques chosen by the translator. From the data and the analysis, the researchers draw some conclusions about the translation techniques used for translating Mandarin compound sentences in the novel Huózhe (活着). First, established equivalence is the technique most frequently used by the translator in the target text (831 occurrences in total, or 71.39%), which allows the target readers to understand the novel To Live (Hidup) easily; furthermore, the transposition technique is more suitable for dealing with involved Mandarin compound sentences. Second, the translation techniques found in this research can serve as a reference for readers when they deal with Mandarin compound sentences.
Mechanism of the activation step of the aminoacylation reaction: a significant difference between class I and class II synthetases
In the present work we report, for the first time, a novel difference in the molecular mechanism of the activation step of aminoacylation reaction between the class I and class II aminoacyl tRNA synthetases (aaRSs). The observed difference is in the mode of nucleophilic attack by the oxygen atom of the carboxylic group of the substrate amino acid (AA) to the αP atom of adenosine triphosphate (ATP). The syn oxygen atom of the carboxylic group attacks the α-phosphorous atom (αP) of ATP in all class I aaRSs (except TrpRS) investigated, while the anti oxygen atom attacks in the case of class II aaRSs. The class I aaRSs investigated are GluRS, GlnRS, TyrRS, TrpRS, LeuRS, ValRS, IleRS, CysRS, and MetRS and class II aaRSs investigated are HisRS, LysRS, ProRS, AspRS, AsnRS, AlaRS, GlyRS, PheRS, and ThrRS. The variation of the electron density at bond critical points as a function of the conformation of the attacking oxygen atom measured by the dihedral angle ψ (Cα–C′) conclusively proves this. The result shows that the strength of the interaction of syn oxygen and αP is stronger than the interaction with the anti oxygen for class I aaRSs. This indicates that the syn oxygen is the most probable candidate for the nucleophilic attack in class I aaRSs. The result is further supported by the computation of the variation of the nonbonded interaction energies between αP atom and anti oxygen as well as syn oxygen in class I and II aaRSs, respectively. The difference in mechanism is explained based on the analysis of the electrostatic potential of the AA and ATP which shows that the relative arrangement of the ATP with respect to the AA is opposite in class I and class II aaRSs, which is correlated with the organization of the active site in respective aaRSs. A comparative study of the reaction mechanisms of the activation step in a class I aaRS (Glutaminyl tRNA synthetase) and in a class II aaRS (Histidyl tRNA synthetase) is carried out by the transition state analysis. The atoms in molecule analysis of the interaction between active site residues or ions and substrates are carried out in the reactant state and the transition state. The result shows that the observed novel difference in the mechanism is correlated with the organizations of the active sites of the respective aaRSs. The result has implication in understanding the experimentally observed different modes of tRNA binding in the two classes of aaRSs.
Introduction
Protein biosynthesis takes place in successive stages such as aminoacylation, initiation, elongation, termination, release, folding, and posttranslational processing (Berg, Tymoczko, & Stryer, 2002;Nelson & Cox, 2002). The relationship between an amino acid (AA) and its cognate tRNA is established in the aminoacylation reaction. This vital reaction links the realm of the protein and the RNA world. The reaction occurs in two steps (Cavarelli, Delagoutte, Eriani, Gangloff, & Moras, 1998;Delarue, 1995;Perona, Rould, & Steitz, 1993). The first step is the AA activation with the formation of aminoacyl adenylate and release of inorganic pyrophosphate ( Figure 1). The second step is the charging of tRNA in which the aminoacyl group is transferred to the tRNA. Both activation and charging of 20 natural AAs are catalyzed by 20 aminoacyl tRNA synthetases (aaRSs).
It is proposed, based on crystallographic studies, that the activation step follows an in-line displacement mechanism in which an oxygen atom of the carboxylic acid group of the AA attacks the α-phosphorous atom (αP) of ATP in both class I and class II aaRSs. The common reaction mechanism of the activation step of the aminoacylation reaction suggested for both class I and class II aaRSs (Cavarelli et al., 1998; Delarue, 1995; Perona et al., 1993) is shown schematically in Figure 1.

Figure 1. The reaction scheme of the aminoacylation reaction. Step 1, amino acid activation: amino acid (AA) + adenosine triphosphate (ATP) + aminoacyl-tRNA synthetase (aaRS) = aminoacyl adenylate (AA-AMP) + inorganic pyrophosphate (PPi). Step 2, tRNA charging: AA-AMP·aaRS + tRNA = aminoacyl-tRNA (AA-tRNA) + adenosine monophosphate (AMP) + aaRS.

Electronic structure based analysis also confirmed the mechanism of the activation step (Dutta Banik & Nandi, 2009, 2011; Nandi, 2011) and the tRNA charging step (Liu & Gauld, 2008) for HisRS. In principle, the two oxygen atoms of the carboxylic acid group of the substrate AA may adopt four probable conformations with respect to the α-amino group of the AA: syn-periplanar, sp, and syn-clinal, sc (collectively denoted as syn); and anti-clinal, ac, and anti-periplanar, ap (collectively denoted as anti) (Eliel & Wilen, 1993). The carboxylic oxygen making the nucleophilic attack may have any conformation in the above-mentioned ranges (sp, sc, ac, or ap) during the reaction. The preferred conformation of the attacking oxygen can be followed from the preferred interaction between the attacking nucleophile and the electrophilic center. The interaction between the two atoms can be understood from the analysis of the topology of the electron density (ρ) using a robust quantum mechanical method (the 'Atoms in Molecules' or AIM theory) (Bader, 1991; Sjoberg & Politzer, 1990; Wiberg, Bader, & Lau, 1987). The presence of a bond critical point (BCP) and a bond path connecting two nuclei indicates an interaction that can be computed using AIM theory. The electron density at the BCP (ρb) is used as a measure of the strength of the interaction, and the nature of the interaction (covalent, nonbonded, or electrostatic) can be followed from the Laplacian of the electron density at the BCP (Bader, 1991).
In the present work, we calculated the variation of ρ b between the oxygen atom of the carboxylic acid group and the αP of ATP as a function of the conformation of the attacking oxygen atom. This is measured by the dihedral angle ψ (C α -C′) or ψ (N-C α -C′-O) following standard convention (Adrian-Scotto & Vasilescu, 2008;Altona & Sundaralingam, 1972;IUPAC-IUB Nomenclature, 1970;Saenger, 1984). The ψ (C α -C′) represent the dihedral angle between the plane containing nitrogen atom of α-amino group, chiral carbon atom, carbonyl carbon atom of carboxylic group; and the plane containing chiral carbon atom, carbonyl carbon atom of carboxylic group, oxygen atom of carboxylic group of substrate AA ( Figure S1). The variation includes conformations such as sp, sc, ac, or ap. The conformation of the carboxylic oxygen atom having strongest interaction with the αP of ATP should have the higher value of ρ b compared to the other conformations. The computation is carried out for both class I and class II aaRSs. The result is supported by the calculation of the variation of interaction energy between syn oxygen atom of AA and αP atom of ATP and that between the anti oxygen of AA and αP atom of ATP as a function of mutual separation in the reactant state for class I and class II aaRSs. To understand the mode of nucleophilic attack as well as the influence of the active site organization on the mode of nucleophilic attack in class I and class II aaRSs, a comparative study of the reaction mechanisms of the activation step in a class I aaRS (GlnRS) and in a class II aaRS (HisRS) is carried out by transition state analysis. The AIM analysis of the interaction between active site residues or ions and substrates is carried out in the reactant state and the transition state.
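As an illustration of this conformational bookkeeping, the Python sketch below (not part of the original study) computes a ψ (N-Cα-C′-O) dihedral from four atomic positions and assigns it to the sp/sc/ac/ap ranges; the ±30°/±90°/±150° boundaries follow the standard convention, and the coordinates used are invented placeholders.

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Signed dihedral angle (degrees) defined by four points, e.g. N, CA, C', O."""
    b0, b1, b2 = p1 - p0, p2 - p1, p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    # Components of b0 and b2 perpendicular to the central bond b1
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    x = np.dot(v, w)
    y = np.dot(np.cross(b1, v), w)
    return np.degrees(np.arctan2(y, x))

def classify(psi):
    """Map psi(N-CA-C'-O) onto sp/sc/ac/ap using the +/-30, 90, 150 degree boundaries."""
    a = abs(psi)
    if a <= 30.0:
        return "sp (syn-periplanar)"
    if a <= 90.0:
        return "sc (syn-clinal)"
    if a <= 150.0:
        return "ac (anti-clinal)"
    return "ap (anti-periplanar)"

# Hypothetical coordinates (angstrom) of N, CA, C' and one carboxylate oxygen:
N  = np.array([0.00, 1.45, 0.00])
CA = np.array([0.00, 0.00, 0.00])
C  = np.array([1.42, -0.55, 0.00])
O  = np.array([1.60, -1.40, 0.90])
psi = dihedral(N, CA, C, O)
print(f"psi = {psi:6.1f} deg -> {classify(psi)}")
```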
Since the interaction between the AA and ATP is principally electrostatic in nature, the origin of the difference in the mode of attack (if any) can be followed from the analysis of the electrostatic potential near the reaction center. The difference in the mode of nucleophilic attack between various aaRSs is explained using electrostatic potential analysis (ESP) of the substrates (Popelier, 1998). The aminoacylation reaction being a nucleophilic reaction, the population difference (δq = q p À q o ) between the αP atom of ATP and oxygen atom of carboxylic acid group (the attacking atom of the nucleophile closest to the αP) can serve as a measure of the propensity of the nucleophilic reaction. The influence of the charge distributions of neighboring active site residues and ions favor the nucleophilic attack; and the computation of δq in presence and in absence of active site residues is expected to reveal the influence of the active site residues on the progress of the reaction.
The results are interesting and reveal a significant difference in the mode of nucleophilic attack between the class I and class II aaRSs for the first time. The syn oxygen atom of the carboxylic group attacks the αP of ATP in class I aaRSs, while the anti oxygen atom attacks in the case of class II aaRSs. The present study shows that the mutual arrangements of the AA and ATP in class I and class II are such that modes of the approaches of the oxygen atoms for nucleophilic attack in the two cases need to be opposite. The analysis of the transition state using quantum mechanical/semi-empirical (QM/SE) method and δq, respectively show that the subtle difference in mode of attack is correlated with the striking difference in the active site organization among the two classes of aaRSs which remained enigmatic till date.
The better efficiency of the enzymatic reaction over the same reaction occurring in bulk solvent is a subject of much interest (Warshel et al., 2006). In the bulk solvent, the syn or the anti oxygen attacks are equally probable on an average. Further, the adenylate geometries resulting from syn or anti attack are free for conformational rearrangements in bulk solvent. In contrast, interactions between syn oxygen and active site residues are not equivalent with those between the anti oxygen and active site residues. This is due to the spatial heterogeneity and dissymmetry of the organization of active site. Similarly the product structure of the adenylate in each aaRS is stabilized and retained by the active site and conformational rearrangement within the enzyme is less probable than in the bulk. The aforesaid interaction pattern observed between active site residues of aaRS and substrates (Nandi, 2011) are described in section A of the supplementary material. The difference in the reaction mechanism between class I and class II aaRSs is expected to be correlated with the organization of the active site structure of the respective aaRS. However, studies are unavailable at present. This unresolved problem provides further impetus in the present study. In the following section, we present the computational methods.
Methods
The available crystal structures of class I and class II aaRSs of the reactant state as well as adenylate state of activation step of aminoacylation reaction are used in the present work. Class I aaRSs considered in the reactant states are GluRS, GlnRS, TyrRS, and TrpRS; and class II aaRSs considered are HisRS, LysRS, and ProRS. Class I aaRS considered in the adenylate state are CysRS, GluRS, GlnRS, LeuRS, IleRS, MetRS, Tyr-RS, and ValRS; and class II aaRSs considered are AlaRS, AspRS, AsnRS, GlyRS, HisRS, PheRS, and ThrRS. Details of the model built are described in the section B in the supplementary material. Since there exist a pair of oxygen atoms of the carboxylic group and only one can participate in the nucleophilic reaction, each atom is a possible candidate to attack the αP atom of ATP. These pair of oxygen atoms are referred to as O(1) and O(2) for convenience and the numbering is arbitrary. The oxygen atoms can have conformations such as sp, sc, ac, or ap which are denoted by the ψ (C α -C′) dihedral angle as mentioned in the introduction section (shown in Figure S1). The oxygen atom closest to the αP atom in reactant state is the probable candidate to attack the ATP. The separation between the pair of oxygen atoms of the carboxylic acid group of substrate AA and αP atom of ATP in reactant state is measured and the conformation of corresponding oxygen atoms is noted (Table 1 of the supplementary material). The attacking oxygen atom of the carboxylic acid group of substrate AA develops bonding with the αP atom of ATP in the product state and not with the other oxygen. Hence, the geometry of the adenylate state is suggestive of the conformation of the attacking oxygen (shown in the Table 2 as well as Figure S2 and S3, respectively).
As mentioned in the introduction, ρb can be used as a measure of the strength of an interaction, and its nature (covalent, ionic, or electrostatic) can be followed from the Laplacian of the electron density at the critical point (Bader, 1991) within AIM theory. To compare the strength of the interaction between each of the pair of oxygen atoms and the αP atom, we calculated the variation of ρb between the oxygen atom of the carboxylic acid group and the αP of ATP (which indicates the strength of the interaction) as a function of the ψ (Cα-C′) dihedral angle using the AIM method. It is expected that, of the pair of carboxylic oxygens in the adenylate state, the ρb value would be greater for the bonded oxygen atom than for the nonbonded one. The variation of ρb with the conformation of the oxygen atom is computed for various class I and class II aaRSs as shown in Figures 2 and 3, respectively. The calculations are performed at the HF/6-31G** level of theory.
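A minimal post-processing sketch, not taken from the paper: given sampled (ψ, ρb) pairs from such a scan, it locates the conformation at which ρb is largest and reports whether that maximum falls in the syn or anti range. The sample values are invented placeholders.

```python
# Placeholder scan data: (psi in degrees, rho_b in atomic units) pairs.
scan = [(-150.0, 0.012), (-90.0, 0.018), (-30.0, 0.024),
        (30.0, 0.026), (90.0, 0.019), (150.0, 0.013)]

psi_max, rho_max = max(scan, key=lambda pt: pt[1])
region = "syn (sp/sc)" if abs(psi_max) <= 90.0 else "anti (ac/ap)"
print(f"rho_b is largest ({rho_max:.3f} a.u.) at psi = {psi_max:+.0f} deg -> {region} attack favored")
```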
To further explore the difference in the modes of nucleophilic attack in class I and class II aaRSs, we analyzed the variation of the interaction energy as a function of the mutual separation (ΔR) between the αP atom of ATP and the anti oxygen, and that between the αP atom and the syn oxygen. The difference between the two interaction energies (denoted as ΔE_anti−syn) as a function of ΔR is plotted in Figure 4 for all aaRSs. The variation represents the relative ease of approach of each oxygen towards the αP atom during the course of the reaction. The ΔE_anti−syn values in the reactant state are computed for various class I and class II aaRSs at the HF/6-31G** level of theory. We also included the effect of electron correlation in the computation using the MP2/6-31G** level of theory (Figure 4(b)). The detailed results are shown in Figures S6 and S7 of the supplementary material for class I and class II aaRSs, respectively.
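The sign convention can be made explicit with the small sketch below (placeholder energies, not values from the paper): ΔE_anti−syn = E(anti) − E(syn) is formed at each separation, and a positive difference indicates that the syn approach is the more favorable one.

```python
# Minimal sketch: interaction energies (kcal/mol) of the alphaP...anti-O and
# alphaP...syn-O pairs sampled at the same separations dR (angstrom).
# The numbers below are placeholders, not values from the paper.
dR     = [3.4, 3.2, 3.0, 2.8, 2.6]
E_anti = [-1.8, -2.0, -2.1, -2.0, -1.6]
E_syn  = [-2.3, -2.7, -3.0, -3.2, -3.1]

for r, ea, es in zip(dR, E_anti, E_syn):
    diff = ea - es                      # Delta_E(anti-syn)
    favored = "syn" if diff > 0 else "anti"
    print(f"dR = {r:3.1f} A  Delta_E(anti-syn) = {diff:+5.2f} kcal/mol -> {favored} attack favored")
```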
We carried out an electronic structure-based analysis of the transition state of the activation step for a class I aaRS (GlnRS) and a class II aaRS (HisRS). The QM/SE model for GlnRS includes the ATP bound with one Mg2+ ion, the substrate AA (Gln), the Pro32-Pro33-Gly34 tripeptide, the side chains of His40, His43, Asp66, Arg260, and Lys270, as well as the ribose ring of A76 of tRNA (136 atoms in total). The model of HisRS includes the side chains of Glu83, Arg113, Gln127, and Arg259, two Mg2+ ions, two water molecules, the substrate AA (His), and ATP (100 atoms in total). The model structures are optimized using the two-level ONIOM (HF/6-31G*:PM3) method. In the GlnRS model, the carboxylic acid group of Gln, the αP atom of ATP with its attached oxygen atoms, the imidazole group of His43, the amino group of Lys270, and the 2′OH and 3′OH groups of the ribose ring of A76 of tRNA are treated at the ab initio level of theory; the remaining atoms of the reactants and surrounding residues are treated at the semi-empirical (PM3) level. In the case of HisRS, the carboxylic acid group of His, the αP of ATP with its attached oxygens, and the guanidinium groups of Arg113 and Arg259 are treated at the ab initio level of theory; the remaining atoms of the reactants and surrounding residues are treated at the semi-empirical (PM3) level. The QST3 method is used for the transition state calculation, and the transition state is identified by the presence of a single imaginary frequency. The schematic representations of the transition state geometries of GlnRS and HisRS are shown in Figure 5(a) and (b), respectively. The Cartesian coordinates for GlnRS and HisRS are given in Table 3(A) and (B), respectively. AIM calculations are carried out for the reactant state and the transition state.
To understand the origin of the difference in the mode of nucleophilic attack, the electrostatic potential (ESP) at the molecular surface of ATP and AA (in the absence of all active site residues) is computed. The electrostatic potential V(r) that the electrons and nuclei of a molecule create at each point r in the surrounding space is given by Equation (1):

V(\mathbf{r}) = \sum_{A} \frac{Z_A}{|\mathbf{R}_A - \mathbf{r}|} - \int \frac{\rho(\mathbf{r}')}{|\mathbf{r}' - \mathbf{r}|}\, d\mathbf{r}' \qquad (1)
Here Z_A is the charge of the nucleus A located at R_A, and ρ(r) is the electronic density function of the molecule (Popelier, 1998). The ESP analysis is carried out for both class I and class II aaRSs based on the crystal structures of the reactant state using the HF/6-31G** level of theory. The ESP surfaces for class I and class II aaRSs are shown in Figure 6(a) and (b), respectively.
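Equation (1) requires the continuous electron density. As a rough illustration only, the sketch below evaluates the classical point-charge analogue V(r) = Σ_i q_i/|r_i − r|, in which the density integral is replaced by invented atomic partial charges; it conveys how the sign and magnitude of the potential around the reaction center arise, not how the ab initio ESP surfaces in Figure 6 were produced.

```python
import numpy as np

# Illustrative point-charge simplification of Eq. (1); all values are placeholders.
charges   = np.array([-0.85, -0.85, 1.30])           # e.g. two carboxylate O, one alphaP
positions = np.array([[0.0, 0.0, 0.0],
                      [1.8, 0.5, 0.0],
                      [4.0, 1.0, 0.5]])               # bohr

def esp(point, q, pos):
    """Classical electrostatic potential at `point` from point charges (atomic units)."""
    d = np.linalg.norm(pos - point, axis=1)
    return np.sum(q / d)

probe = np.array([2.5, 2.5, 2.5])
print(f"V(probe) = {esp(probe, charges, positions):+.4f} a.u.")
```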
The population difference (δq = q_p − q_o) between the αP atom of ATP (charge q_p) and the oxygen atom of the carboxylic acid group of the substrate AA that is closest to the αP and makes the nucleophilic attack (charge q_o) is computed, in the presence and in the absence of the active site residues, using the Mulliken population analysis (MPA) and natural population analysis (NPA) schemes (Reed, Curtiss, & Weinhold, 1988; Reed, Weinstock, & Weinhold, 1985; Sun, Li, Zhang, Ma, & Liu, 2008). The NPA scheme utilizes the natural atomic orbitals (NAO) and natural bond orbitals: the population of the i-th NAO is q_i = Γ_ii, where Γ is the density matrix, and the atomic population q(A) is given by the summation over all orbitals centered on the atom A, q(A) = Σ_{i∈A} q_i.
The population difference (δq) is measured for both class I and class II aaRSs in the absence and in the presence of the active site residues and Mg2+ ion(s). The active site residues and ions influence δq and concomitantly dictate the progress of the nucleophilic reaction. The available crystal structures of the reactant state are used for the computation. The set of active site residues belonging to the vicinity of the reaction center is included to understand the influence of the active site on the propensity of the nucleophilic reaction. The reaction center is the location of the nucleophilic attack between the carboxylic acid group of the AA and the α-phosphorous atom of ATP, denoted as αP. Furthermore, the active site residues and ions near the βP and γP regions of ATP are also included in the computation.
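As an illustration only (the charges below are invented, not the paper's values), δq can be read directly from any population analysis output as the difference between the αP charge and the charge of the attacking carboxylate oxygen, with and without the surrounding residues included in the model:

```python
# Hypothetical Mulliken-type charges (in e) for the alphaP atom and the
# attacking carboxylate oxygen, from a model without and with active site residues.
substrates_only  = {"alphaP": 1.10, "O_attack": -0.62}
with_active_site = {"alphaP": 1.28, "O_attack": -0.71}

def delta_q(charges):
    """delta q = q_p - q_o; a larger value indicates a higher propensity for nucleophilic attack."""
    return charges["alphaP"] - charges["O_attack"]

print(f"delta_q (substrates only)      = {delta_q(substrates_only):.2f} e")
print(f"delta_q (with active site)     = {delta_q(with_active_site):.2f} e")
print(f"enhancement due to active site = {delta_q(with_active_site) - delta_q(substrates_only):+.2f} e")
```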
The residues considered in each aaRS are as follows. The active site residues considered for GlnRS are Phe31, Pro32, Pro33, Glu34, His40, His43, Asp66, Ile81, Arg260, Lys270, A76, and a single Mg2+ ion. For GluRS, Ile6, Ala7, Pro8, Ser9, Pro10, Asp13, Pro14, His15, Thr18, Glu41, Glu208, Trp209, Lys243, Lys246, Arg247, a single Mg2+ ion, and a single H2O are considered. For TrpRS, Ile8, Gln9, Gln107, Lys111, Lys192, Val143, Asp146, Gln147, His150, Lys195, Tyr125, a single Mg2+ ion, and two H2O molecules are considered. The active site residues included for TyrRS are Ala44, Asp45, Thr47, Leu51, His52, His55, Gly81, Gly84, Asp85, …

Figure 4. Combined plots of the variations of the differences between the interaction energy of the αP atom with the anti oxygen and that of the αP atom with the syn oxygen (denoted as ΔE_anti−syn), in kcal mol−1, for various class I and class II aaRSs: (a) at the HF/6-31G** level of theory and (b) at the MP2/6-31G** level of theory. The variation is measured as a function of the separation between the individual oxygen atom of the carboxylic acid group and the αP atom of ATP (ΔR). When ΔE_anti−syn is positive, the approach of the syn oxygen atom is more favorable, and when ΔE_anti−syn is negative, the approach of the anti oxygen atom is more favorable.

The conclusions are valid for both oxygen atoms of the carboxylic acid group. The result indicates that when the conformation of either oxygen atom of the carboxylic group is syn, it has the highest probability of forming a bonding interaction with the αP atom of ATP leading to the adenylate state in class I aaRSs. The oxygen with the anti conformation has the lowest probability of forming the bond with the αP atom of ATP in class I aaRSs.
In contrast, the maximum ρb values are observed for the anti conformation for all class II aaRSs investigated, and the minimum in the range of syn conformations. The results are shown in Figures 3(a) and (b), respectively. The variation of the ρb value passes through a minimum in the range of syn conformations of the respective oxygen atoms (the ψ (Cα-C′) dihedral angle range of ±90° includes both the sc and sp conformations) for all class II aaRSs investigated. The maxima of the ρb values are observed in the range of anti conformations for all class II aaRSs. The conclusions are valid for both oxygen atoms of the carboxylic acid group. The oxygen with the syn conformation has the lowest probability of forming the bond with the αP atom of ATP. The result indicates a novel difference between class I and class II aaRSs in the mode of in-line attack by the oxygen atom of the carboxylic group of the substrate AA on the αP atom of ATP.
Although TrpRS belongs to class I, the mode of nucleophilic attack in TrpRS is exceptional: the anti oxygen atom acts as the attacking oxygen during the reaction, whereas the syn oxygen of the substrate AA is the most probable attacking conformation for all other class I aaRSs investigated. Trp has the largest side chain (Zamyatin, 1972). If the syn oxygen atom of Trp were to act as the attacking atom, such a mode of nucleophilic attack would lead to an unfavorable nonbonded interaction (short-range repulsion) between the side chain of Trp and the sugar ring of ATP, and it does not occur for this reason. In contrast, the anti conformation is suitable for the nucleophilic attack in this case. Consequently, the anti oxygen atom acts as the attacking atom during the activation step in the case of TrpRS.
The foregoing results, which show two different and distinct modes of attack in the two classes of aaRSs, are further confirmed by studying the variation of the interaction energy as a function of the mutual separation (ΔR) between the oxygen atoms of the AA and the αP atom of ATP. The variation of the interaction energy between the anti oxygen and the αP atom, and that between the syn oxygen and the αP atom, are computed as a function of the mutual separation (ΔR). The difference between the two energy quantities (denoted as ΔE_anti−syn) is the difference in the interaction energies of the carboxylic oxygen atoms (syn and anti) as they approach the αP atom of ATP for nucleophilic attack. In other words, ΔE_anti−syn represents the relative ease of the anti or syn attack. Plots of ΔE_anti−syn as a function of ΔR for both classes of aaRSs are shown in Figure 4(a) and (b). The positive value of ΔE_anti−syn for class I aaRSs (except TrpRS) confirms that the approach of the syn oxygen atom is more favorable for class I aaRSs. The exceptional behavior in the case of TrpRS is due to its largest side chain, which prevents the approach of the syn oxygen as mentioned earlier. The negative value of ΔE_anti−syn for class II aaRSs confirms that the approach of the anti oxygen atom is more favorable for class II aaRSs. The incorporation of electron correlation at the MP2 level of theory does not change the conclusions drawn at the HF/6-31G** level of theory (shown in Figure 4(b)). The effect of electron correlation is 0.21% of the total energy on average (over all separations considered for the various aaRSs) computed at the MP2/6-31G** level. The individual plots are shown in Figures S6 and S7 of the supplementary material. Thus, the variation in energy shown in Figure 4(a) and (b) strongly supports the conclusions made from the variation in electron density shown in Figures 2(a), (b) and Figures 3(a), (b), respectively.
The forgoing conclusion about the two different modes of attack is further corroborated by the observed conformations of the oxygen atoms in various class I and class II aaRSs based on the crystal structures. The separations between each of the oxygen atoms of carboxylic acid group of substrate AA and αP atom of ATP in the reactant states are shown in the Table 1 in the supplementary material based on the corresponding crystal structures. Out of the four available crystal structures of class I aaRSs, the conformation of the closest oxygen is syn for three and anti for one (TrpRS). In contrast, the conformation of the closest oxygen is anti for all out of three available crystal structures of class II aaRSs. As mentioned before, the oxygen atom closer to the αP atom of ATP is the most probable candidate for the attack. The result is confirmed from the conformation of the oxygen atom in the adenylate state of various aaRSs as shown in Table 2 in the supplementary material. Out of the eight available crystal structures of adenylate state of class I aaRSs, the conformation of the oxygen atom covalently linked with αP (attacking oxygen) is sp for six and sc for two aaRSs. This is shown in Figure S2 in the supplementary material. In contrast, the conformation of the oxygen covalently linked with αP (attacking oxygen) is ap for six and ac for three out of the nine crystal structures of adenylate state of class II aaRSs. This is shown in Figure S3 in the supplementary material. Consequently, the attacking oxygen atom of substrate AA in both reactant and adenylate state has syn (either sc or sp) conformation with respect to the α-amino group for class I aaRSs. In contrast, the attacking oxygen atom of the substrate AA has anti (either ap or ac) conformation with respect to the α-amino group in all class II aaRSs investigated.
Our quantum mechanical calculation of the transition states in GlnRS and HisRS within the respective active sites confirms that, although the reactions in both cases progress through a common in-line displacement mechanism via a penta-coordinated transition state, the syn oxygen atom of the carboxylic group attacks the αP of ATP in GlnRS (class I), whereas the anti oxygen atom attacks in the case of HisRS (class II). The schematic representations of the transition state geometries of GlnRS and HisRS computed at the ONIOM(HF/6-31G*:PM3) level of theory are shown in Figure 5(a) and (b), respectively. The transition states are characterized by a single imaginary frequency. The transition state geometries and their comparison with the reactant state geometries show that different sets of residues are involved in the two aaRSs. The involvement of different sets of residues in the two cases is related to the syn attack in the case of GlnRS and the anti attack in HisRS. The analysis of the interactions between the active site residues and the substrates in the reactant state and the transition state is presented later.
To understand the unprecedented difference in the mode of nucleophilic attack by the substrate AA, we analyzed the electrostatic potential at molecular surfaces (ESP) of the substrates in absence of active site residues as present in the active site of the respective aaRSs. Computations of electrostatic potential of the substrates in two enzymes show that the difference in the spatial arrangement of ATP relative to the AAs between the two classes of aaRSs is responsible for this difference in mode of attack. Explicitly, the spatial arrangement of the triphosphate group of ATP with respect to the AA in class I and class II aaRSs is strikingly different. An imaginary plane is considered which is almost parallel to the sugar ring of ATP and nearly perpendicular to the adenosine base as well as passing through the O 0 5 atom of sugar ring. The carboxylic acid group of the substrate AA lies perpendicular to the imaginary plane and the amino group remains above the plane for both aaRSs. Note that the computation of ESP has no bearing on the choice of the imaginary plane, the later plane being only a guide for the eye. The carboxylic acid group of the AA is perpendicular to the plane shown in Figure 6(a) and (b), respectively and the amino group remains above the plane for all aaRSs. Although the orientations of AAs relative to the plane are same in all aaRSs in Figure 6(a) and (b), the orientations of the ATP are opposite in class I and class II aaRSs. The triphosphate group of ATP in class I aaRSs is above the plane (as shown in Figure 6(a)) while the same is below the plane in the case of class II aaRSs (as shown in Figure 6(b)). Since both the triphosphate group of ATP and carboxylic acid group of AA contains negative charges, it is favorable for the syn oxygen atom to be proximal with αP and the anti oxygen atom is directed away from the charge distribution of ATP in class I aaRSs to minimize the repulsive interaction. In contrast, the orientation of ATP being exactly opposite in class II aaRSs, it is favorable for the anti oxygen atom to be proximal with αP and the syn oxygen atom remains away from the charge distribution of ATP. Thus, our ESP analysis shows that the different modes of attack in two classes of aaRSs are due to the difference in the mutual arrangements of ATP with respect to AAs in the active site of respective aaRSs.
We now proceed to show that the difference in the arrangements of the substrates in two aaRSs is coupled with the dissimilar active site organizations of two classes of aaRSs as examplified by the studies in GlnRS and HisRS, respectively. Our calculation of the reaction mechanism by transition state analysis conclusively showed that residues such as A76 of tRNA, Lys270, and His43 are located in the close vicinity of the region of nucleophilic attack in GlnRS. In contrast, Arg259, Arg113, and Gln127 are located around the substrates in HisRS. The differences in the structural composition of residues are related with the different mode of attack in two cases. The organization of residues or ions and resulting interaction pattern hold the substrate in their respective mutual arrangements during the course of reaction. It is necessary to quantify the interactions between the substrates and active site residues which might facilitate the reaction. The influence of the active site residues on the reaction mechanism is quantitatively investigated in two ways. First, the interactions between the residues and substrates in the two aaRSs are measured by the computations of the electron densities at bond critical points between active site residues or ions and substrates. Secondly, we analyzed the influence of the active site residues and ions on the propensity of nucleophilic attack in the aaRSs using δq.
In order to explore the role of active site residues in the nucleophilic attack in two representative aaRSs (GlnRS for class I and HisRS for class II), we computed the electron density at BCP between active site residues or ions and substrates in GlnRS and HisRS for reactant state as well as transition state. The changes in BCP values ( Table 4 in the supplementary material) while going from the reactant state to the transition state quantitatively show the pattern of interaction driving the nucleophilic reaction. In the reactant state of GlnRS, A76 forms BCP with the substrate Gln only. The same residue forms BCP with both Gln and ATP in the transition state. Thus, A76 shifts from the proximity of Gln towards the reaction center to anchor Gln and ATP. Analogous role is performed by Arg113 and Arg259 in the case of HisRS. Both residues have BCP with ATP in reactant state and develop BCP with both substrates (His and ATP) in the transition state. The geometry of the transiton state shows that in addition to A76, Lys270 and His43 develop BCPs in GlnRS. Similarly, Arg113, Gln127, and Arg259 form BCP in HisRS in the transition state. Most importantly, the arrangement of A76, Lys270, and His43 in GlnRS perfectly complements the spatial arrangement of atoms of the substrates during the attack by the syn oxygen. The same arrangement of A76, Lys270, and His43 is not at all complementary to the spatial arrangement of atoms of the substrates during the attack by anti oxygen in HisRS. The anti oxygen attack in HisRS is supported by pattern of interaction created by Arg113, Gln127, and Arg259. It remained a paradox that the different catalytic residues are present near the location of the nucleophilic attack in GlnRS and HisRS, respectively. Our result shows that the different set of residues in GlnRS and HisRS complement the spatial arrangements of the substrates in the respective transition states which are related with the nucleophilic attack by syn and anti oxygen in the respective aaRSs.
The composition of the active site residues near the reaction center varies from one aaRS to the other. This is puzzling since the chemical structures of the respective substrate like AA differ only in the side chain parts. Despite the fact that both groups are away from the reaction center and the chemical structures of the substrates at the reaction center being identical in both aaRS, unexpectedly the residues present in the first shell of active site of two aaRSs are different. It is a paradox that the molecular organization of active sites of both classes largely differ especially near the reaction center (location of the nucleophilic attack between the carboxylic acid group of AA and the α-phosphorous atom of ATP, denoted as αP). The importance of studying the molecular details of the reaction within active site is of major interest to understand the aminoacylation reaction within its biological perspective. Our analysis of the transition state in GlnRS and HisRS indicates that the molecular organization of the active sites in respective aaRSs complement the difference in spatial arrangement of atoms of substrates in each case and thereby facilitates the reaction. However, such analysis is yet to be carried out in other aaRSs and expected to reveal the origin of the variation of the active site organization from one aaRS to the other. This unresolved problem provides impetus for future studies about the correlation between active site organization and reaction therein.
The influence of the active site residues on the nucleophilic attack is also analyzed in terms of the δq. First, the δq is computed when substrates are present only. This is shown in column A of the Table 1. The δq values enhance significantly for both class I and class II aaRSs when the active site residues close to the region of nucleophilic attack are incorporated. The resulting δq values are shown in column B in Table 1. The enhancement of δq implies that the propensity of the nucleophilic attack is facilitated by the presence of the added residues or ions. The comparison of the enhanced value of δq obtained by adding all residues and ions near the reaction center and the δq obtained in absence of all residues or ions are shown in Figure 7. The result indicates that the active site residues and ions reduce the unfavorable potential to facilitate the nucleophilic attack. It may be noted that the mode of nucleophilic attack and the active site geometries are greatly complementary in the sense that their relative position is vital to the enhancement of δq. The observed influences of the active site residues or ions on δq have the same trend using MPA and NPA schemes. A recent quantum chemical modeling study of Glycyl adenylate (Adrian-Scotto & Vasilescu, 2008) showed that the conformation of glycyl adenylate is stabilized by the electrostatic interactions with ionic residues and the Mg 2+ inside the active site and reduction of electrostatic potential of the phosphate group by the Mg 2+ ion and cationic Arg is suggested. This proposal corroborates our result from the charge population calculation that the active site residues and ions stabilize the substrates and facilitates the nucleophilic reaction in various aaRSs such as GlnRS, GluRS, TrpRS, TyrRS, LysRS, ProRS, and HisRS.
In principle, a limitation of the present study is that the complete macromolecular structure of the enzyme is not considered in the transition state calculation or calculation of the bond critical points using AIM theory and only first-shell neighbors are considered. Since the result of the present paper is primarily dependent on the residues or ions in the immediate vicinity of the substrate (due to the 1=r n dependence of the interactions therein), the effect of second-shell neighbors and beyond that is less significant. This can be confirmed from the computational study where the complete enzyme structure is considered comparing the results from the smaller quantum mechanical models with larger models (Dutta Banik & Nandi, 2011). However, the incorporation of complete macromolecular structure is desirable considering the hydrophobic environment of the enzymatic active site structure and its influence on the reactions therein (Kahn & Bruice, 2003). Hypothetically, the reaction between AA and ATP leading to the formation of adenylate in bulk water is not only dependent on the electrostatic interaction between the two species, but also on the hydrophobic interaction (the process required to bring two solutes from infinity to a separation to a final configuration within a solvent at constant temperature and pressure) (Ben Naim, 1992;Pratt, 1985;Tanford, 1980). Similarly, the same reaction in enzyme active site is dependent on the electrostatic interaction between the two species in the environment of active site but also on the solvophobic interaction (Yaacobi & Ben Naim, 1974) therein when the active site is approximated by a nonaqueous solvent medium. The efficiency of the reaction in the enzyme over the aqueous medium is dependent on the how favorable in later compared to former case.
Because of the syn and anti nucleophilic attacks in class I and class II, respectively, the conformations of the product adenylate are different for class I and class II aaRSs (also noted in the present work from the crystal structures, as shown in the supplementary material). This might have an implication in facilitating the second step (tRNA charging). The A76 (adenosine end) of tRNA occupies different sides of the adenylate in class I (proximal to the syn oxygen) and class II (proximal to the anti oxygen) aaRSs. A76 approaches the carboxyl group of the adenylate via the 2′OH group in the case of class I aaRSs and via the 3′OH group in the case of class II aaRSs. One of the free oxygen atoms attached to the αP of the aminoacyl adenylate acts as the proton attractor from the hydroxyl group. In order to accept the proton, the αP needs to be proximal to the hydroxyl group of tRNA. The αP attached to the syn oxygen in class I aaRSs (an outcome of the syn attack in the activation step) can easily accept the proton attached to the proximal OH group of tRNA. In contrast, the αP attached to the anti oxygen (a result of the anti attack) can accept the proton attached to the proximal OH group of tRNA in class II aaRSs. Thus, the syn/anti attack in the two classes of aaRSs discovered in this work is expected to have a greater significance in the aminoacylation process. The hitherto unknown difference in the reaction mechanism between the two classes of aaRSs at the molecular level has implications for understanding the organization of the active site structure of the enzyme, as the active site is the closest vicinity of the reactants, as mentioned in the introduction. The difference in the mode of nucleophilic attack at the molecular level observed in the present paper might have implications for the observed variation in the structure of the active site, particularly near the reaction center, and is worth investigating; this is the subject of a future study. Such understanding may be utilized to develop novel enzyme-mimetic systems and to understand the diverse roles of aaRSs beyond aminoacylation (Kim, You, & Hwang, 2011).
Concluding remarks
The present work reveals a novel difference in the mode of inline attack at the molecular level by the oxygen atom of the carboxylic group of substrate AA to the αP atom of ATP in the common scheme of nucleophilic reaction of the activation step of aminoacylation reaction between class I (GluRS, GlnRS, TyrRS, TrpRS, LeuRS, ValRS, IleRS, CysRS, and MetRS) and class II (HisRS, LysRS, ProRS, AspRS, AsnRS, AlaRS, GlyRS, PheRS, and ThrRS) aaRSs for the first time. The syn oxygen atom of the carboxylic group attacks the αP of ATP in class I aaRSs, while the anti oxygen atom attacks in the case of class II aaRSs. The relative arrangement of the negative charge distribution of the oxygen atoms of the ATP and that of the carboxylic acid group of AA is directed in opposite directions in class I and class II aaRSs. The dissimilar relative orientations of the AA and ATP in the two classes of aaRSs and the concomitant variations in ESP result in the observed different modes of attack in the two cases. It is favorable for the syn oxygen atom to be proximal with αP and to act as the attacking atom. Consequently, the corresponding anti oxygen atom can be directed away from the charge distribution of ATP in class I aaRSs to minimize the repulsive interaction. In contrast, the orientation of ATP being exactly opposite in class II aaRSs, it is favorable for anti oxygen atom to be proximal with αP acts as the attacking atom and the syn oxygen atom remain away from the charge distribution of ATP. This is confirmed from the variation of the strength of the interaction between oxygen atoms attached with the carboxylic acid and αP of ATP as a function of variation of the conformation of oxygen. The maximum ρ b at the syn conformation in the adenylate state for class I aaRSs indicates that it is more probable for the syn oxygen atom to attack the αP atom to form covalent bond. In contrast, the ρ b is maximum at the anti conformation for class II aaRSs indicating the attack by anti oxygen atom is more probable and can form covalent bond with the αP atom in the adenylate state. The study of the variation of interaction energy as a function of separation between a pair of oxygen atoms of the carboxylic acid group of substrate AA and αP atom of ATP in the reactant state for class I and class II aaRSs confirms that the approach of the oxygen with syn (either sc or sp) conformation is favorable for class I aaRSs (except TrpRS) and approach by the anti (either ap or ac) conformation oxygen atom is favorable in all class II aaRSs. A comparative study of the reaction mechanisms of the activation step in a class I aaRS (Glutaminyl tRNA synthetase) and in a class II aaRS (Histidyl tRNA synthetase) is carried out by the transition state analysis. The AIM analysis of the interaction between active site residues or ions and substrates is carried out in the reactant state and the transition state.The result shows that the observed novel difference in the mechanism is correlated with the organizations of the active sites of the respective aaRSs. The result has implication in understanding the experimentally observed different modes of tRNA binding in the two classes of aaRSs.
Effectiveness of using pozzolanic material for concrete canal blocks in tropical peatland
The canal blocking system is commonly applied to control the water table for peatland recovery during prolonged dry weather in tropical peatland. Canal blocking is an effective strategy for preventing peat fire in many areas of Riau Province. Various local materials such as earth, timber, sand, stone, rock, and concrete have been used in canal blocks. Concrete canal blocks are usually built downstream on the river and need to be stronger, more durable, and more stable than blocks made of other materials. Tropical peatland is a highly organic and highly acidic environment that could damage concrete structures and reduce their service life in the long term. In this research, strength properties and porosity were evaluated to indicate the effectiveness of using pozzolanic material to increase the resistance of concrete canal blocks to acid attack. The concrete was produced by incorporating 10% Palm Oil Fuel Ash (POFA), a pozzolanic material from biomass, to replace part of the Ordinary Portland Cement (OPC) content; this mix is denoted OPC POFA. The locally available commercial Portland Composite Cement (PCC), which contains pozzolanic material, was also studied. OPC concrete was used as the control mix, with a target strength of 35 MPa. Specimens were cast and cured for 28 days in a laboratory water pond before being placed in a peat water canal near Rimbo Panjang Regency in Riau Province. The reduction rates in compressive strength, tensile strength, Young's Modulus of Elasticity, and porosity of the concrete at 28, 91, 120, and 150 days were determined. The results show that the inclusion of pozzolanic material in the concrete increased the compressive strength, tensile strength, and Young's Modulus of Elasticity, and reduced the porosity. PCC and OPC POFA performed better than the OPC concrete after immersion in the canal for up to 150 days. Hence, it can be concluded that pozzolanic material is effective in improving the properties and acid resistance of concrete canal blocks in tropical peatland.
Introduction
As a strategy to restore peatland and to prevent peat fire, the Indonesian government has issued a regulation to build canal blocks in areas that are prone to natural and human-made disasters [1]. The canal blocking system is known to be very useful for raising the water table, reducing runoff through the canals, reducing flow velocity in the canals, and preventing erosion in the peatland [2,3]. Canal blocks enable good water management and allow peat restoration to take place, minimizing carbon losses due to fire and oxidation and inducing regrowth of vegetation in the peat ecosystem. The Peat Restoration Agency (BRG) has given many training sessions to communities in various locations in Indonesia on how to build effective canal blocks [4]. Despite their effectiveness in controlling peat fire, there are some problems in canal block application, such as the high cost of construction, unavailability of materials, lack of material quality, non-standardized canal block design, and the lack of a map of canal block locations [5]. Furthermore, Giessen [6] highlighted factors affecting the effectiveness of current canal block designs, such as dam type, dam condition, maintenance, capital cost, and hydrological condition.
Canal blocks are usually built from locally available materials such as soil, sand, timber, rock, and concrete. According to that study [6], canal blocks need to be built on more solid footing to reduce premature deterioration due to the fast subsidence of compressible peat soil. The quality of canal blocks depends on the material used in construction: canal blocks made from soil and sand last less than one year, while timber and rock blocks can be used for two years. Based on the Guidelines for Design and Construction of Check Dams for Prevention and Control of Peatland Fire [7], concrete blocks can be built in various forms, such as precast stacked blocks, precast post-panel systems, and cast-in-situ concrete, with a service life of more than five years. Concrete canal blocks are usually built in the downstream part of the river adjacent to the coastal area. Figure 1 shows a typical concrete canal block used in peatland. Concrete is known to be vulnerable in acidic environments, both natural and synthetic. The type of acid, exposure time, acid concentration, and degree of dissociation of the acid determine the rate of concrete degradation. Generally, the acid dissolves calcium from the concrete, which leads to an increase in porosity, a decrease in strength, and damage to the concrete hydration products [8]. Organic acid from natural water or land, such as peatland, can still damage concrete, although at a slower rate than a strong acid. This gradual attack is more noticeable after a long exposure time, and the concrete degradation depends mostly on the concentration and composition of the organic acid. Due to the complexity of peat and its interaction with cement and metal, there are still few studies in this area, especially on exposure to the real environment. Most studies were carried out in the laboratory and covered the degradation mechanism, cement type, attack by different kinds of organic acids, and the properties of concrete after exposure to various organic acids [9][10][11].
The behaviour of concrete exposed to organic acid in peatland areas has been an important subject in the past few years. Some studies reported strength and porosity reduction of concrete, and corrosion of steel reinforcement, in the first few weeks of exposure to organic acid [12][13][14][15][16]. The inclusion of pozzolanic materials such as fly ash, palm oil fuel ash (POFA), rice husk ash and silica fume in concrete is advantageous because it reduces pore sizes and densifies the microstructure of the cement matrix. Furthermore, POFA has potential as a low-cost pozzolanic material for concrete structures exposed to peatland, since Riau Province is one of the largest palm oil producers in Indonesia. In this research, the effectiveness of a pozzolanic material, palm oil fuel ash, was studied by placing concrete in a peat canal for up to 150 days. The efficacy of the pozzolanic material was assessed by calculating the change in strength properties and porosity after exposure to the canal water.
Materials and methods
Three types of binder were used in this research: Ordinary Portland Cement (OPC), Portland Composite Cement (PCC) and OPC-Palm Oil Fuel Ash (OPC-POFA). The OPC concrete served as the control mix. PCC is a new type of commercial cement that contains less calcium than OPC. The pozzolanic material used was POFA from a local industrial incinerator in Pelalawan Regency. POFA was included as a supplementary cementitious material at 10% of the cement weight in the OPC-POFA mix. The chemical composition of POFA is presented in table 1. Peat water was taken from Rimbo Panjang Regency in Riau Province. The physical and chemical characteristics of the peat water are shown in table 2. In general, the peat water composition was not within the permissible limits for drinking water; it has a very low pH of 3.85 and a high organic content of 328 mg/L. Table 3 shows the mixture composition of the OPC, PCC and OPC POFA concrete, with a target strength of 35 MPa. The samples were cast in 100x200 mm moulds and cured in a water pond for 28 days. The specimens were then placed in an acidic peat water canal in Rimbo Panjang Regency, Riau Province, for up to 150 days until the testing date. Compressive and tensile strength were determined at 28, 91, 120 and 150 days in compliance with SNI 03-1973-1990 and SNI 03-2491-2002. Porosity values were measured at 28, 91 and 150 days of immersion in the canal. The canal is 5 m wide and approximately 1.0-1.5 m deep. The pH of the canal water at the concrete testing ages ranged between 3.6 and 3.89. The pH values depend on the weather and seasons in Rimbo Panjang Regency, since the concrete exposure was carried out during the transition between the dry and rainy seasons.
The reduction rate of the strength properties and porosity can be calculated using expressions similar to the following equation for compressive strength:

R_C = (R_C0 − R_Ce) / R_C0 × 100%

where R_C is the reduction rate of compressive strength of the concrete specimen, R_C0 is the compressive strength after curing in a standard curing room for 28 days (the initial strength value), and R_Ce is the compressive strength after exposure to the peat canal water. By comparing the strength and porosity values after exposure with the initial values, the reduction rate can be calculated. Positive values mean the property after exposure was lower than the initial property, and vice versa. The reduction rates of compressive strength (RC), tensile strength (RT), Young's Modulus of Elasticity (RY), and porosity (RP) were plotted as a function of time and type of cement. The correlation between these mechanical properties is also presented.
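As a minimal illustration of this bookkeeping (assuming the percentage form of the equation above; the numbers below are placeholders, not the measured data), the reduction rate can be computed for each property and exposure age as follows:

```python
# Minimal sketch: reduction rate of a concrete property after exposure,
# assuming R = (initial - exposed) / initial * 100 (positive = property declined).

def reduction_rate(initial_value: float, exposed_value: float) -> float:
    """Percentage reduction relative to the 28-day (initial) value."""
    return (initial_value - exposed_value) / initial_value * 100.0

# Hypothetical compressive strength values (MPa) for one mix.
initial_strength = 35.0
strength_by_age = {91: 31.8, 120: 28.8, 150: 27.0}

for age_days, exposed in strength_by_age.items():
    rc = reduction_rate(initial_strength, exposed)
    print(f"{age_days} d: R_C = {rc:.2f}%")
```

The same helper applies to tensile strength, modulus of elasticity and porosity by swapping in the corresponding initial and exposed values.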
Compressive strength
The reduction rate of compressive strength of the OPC, PCC and OPC-POFA concrete exposed in the peat water canal after 28, 91, 120 and 150 days is presented in figure 2. After exposure, the reduction rate of the OPC concrete gradually increased to 9.10%, 17.62% and 22.73% at 91, 120, and 150 days, respectively. This high rate of increase indicates a greater decline in the compressive strength of the OPC than of the other mixes. Both the OPC-POFA and PCC concrete showed lower reduction rates than the OPC. When comparing the concrete produced with OPC-POFA and PCC, it was observed that the OPC-POFA had a compressive strength reduction of 16.72% at 150 days, whereas the PCC concrete showed 12.94%. At early ages, however, a considerable strength reduction of 3.08% was observed in the PCC concrete, i.e. the natural pozzolana blended cement. The test results revealed that the compressive strength reduction of concrete produced using pozzolanic material was lower than that of concrete made with pure, high-CaO cement. The reason behind this finding is that the pozzolanic material is rich in SiO2, which reacts with Ca(OH)2 to form Calcium Silicate Hydrate (CSH) and physically improves the pore structure of the cement matrix. The high CaO content made the OPC more susceptible to acid attack. Acid ions attack the hydration products of OPC, decompose them, and increase the porosity, followed by a reduction in the mechanical properties of the concrete [20]. On the other hand, the addition of pozzolanic material significantly increases the resistance of the concrete to acid attack from the peatland environment.

Tensile strength

Figure 3 displays the reduction rate of the tensile strength of OPC, OPC-POFA and PCC at 28, 91, 120 and 150 days. The rate of tensile strength loss of the OPC concrete was 41.18%, 45.16%, and 22.53% at 91, 120 and 150 days, respectively. The tensile strength reduction was caused by destruction of the bond at the interface between the OPC matrix and the aggregates in the control mix. Although peat water contains only weak organic acid, it still attacks concrete made with pure, high-calcium-content cement to a considerable extent. Continuous exposure of canal blocks made from OPC concrete to peat water is certainly not recommended, since the strength deteriorates even after only 28 days of exposure to the peatland. In contrast, the PCC and OPC-POFA mixes showed a gradual increase in tensile strength relative to the OPC concrete after exposure to peat water for up to 150 days. It was observed that the tensile strength followed the same trend as the compressive strength, because there is a direct relationship between pore densification and the increase in strength after exposure to peat water. Both OPC-POFA and PCC gained considerable strength from 28 days to 150 days, by 70% and 85.95%, respectively. Inclusion of pozzolanic material in the concrete mix produces a pozzolanic reaction and a pozzolanic product (Calcium Silicate Hydrate gels) that improve the weak region of the Interfacial Transition Zone (ITZ) in the concrete microstructure. The ITZ, the zone between the aggregate surface and the cement matrix, is responsible for improving the microstructure and thus the elastic properties of the concrete. The results are in line with a previous study on the use of pozzolanic materials such as slag to enhance the bond between the aggregates and the cement matrix through microstructure improvement [21].
Based on this research, it is clearly recommended to use pozzolanic material, or a pozzolan-based cement, to build canal blocks that are continuously exposed to peat water over a long period.
Young's modulus of elasticity
Young's Modulus of Elasticity (YM) represents the stiffness of a material under a given exposure. Figure 4 shows that the elastic modulus of the OPC concrete decreased by 52.33% after exposure to peat water for 150 days, and a gradual reduction of the modulus was observed over time. This behaviour aligns with the tensile strength results, since the OPC contains calcium hydroxide that reacts with the acid in the peat water and thus adversely affects the modulus of elasticity. The change in elastic modulus of the OPC POFA and PCC concrete was considerable, varying between 49.79% and 96.86% at 150 days, which indicates that the peat water has only a minor adverse effect on the modulus of elasticity of OPC POFA and PCC. Both types of concrete gained stiffness, or modulus of elasticity, despite the long immersion in peat water. The PCC concrete showed a larger change in stiffness than the OPC concrete owing to the pozzolanic material it contains. This behaviour is favourable for canal blocks in the river downstream.
Porosity
The reduction rates of porosity for the OPC, OPC POFA and PCC specimens at 28, 91 and 120 days are presented in figure 5. It can be observed that the porosity reduction rate of the OPC concrete increased by 3.64% at 28 days but continued to decline up to 150 days, by 16.54%. This clearly shows that the acid attack influences the porosity of the plain cement (OPC). The increase in porosity of the OPC concrete under continuous immersion in the canal water might be due to the alteration of the microstructure after the acid ions decomposed CaO and the hydration products. There was an opposite trend for the OPC POFA and PCC concrete, with a considerable decrease in porosity after 91 days. The porosity of OPC POFA and PCC decreased by 24.39% and 15.81% at 120 days, with reduction rates for each specimen of 18.14% and 17.85%, respectively. These values continued to increase up to 150 days, with the highest porosity reduction rate shown by OPC POFA, at about 22%. This might be due to continued hydration and the pozzolanic reaction of the specimens after being subjected to the peat water canal. This clearly shows that the pore development of concrete immersed in a peat water canal is affected more by the cement composition, such as the inclusion of pozzolanic material in blended cement.
Effectiveness of pozzolanic material in concrete canal blocks
Canal blocks in peatland are exposed to soil with low bearing capacity and to an aggressive peat water environment. Canal blocks in the downstream reaches are usually made from concrete because of the ease of operation and maintenance, site accessibility, and the service life of the structures. Most of the downstream blocks have a dual function, serving also as culverts, road culverts, or bridge crossings. Despite the high capital cost of manufacturing concrete, permanent concrete canal blocks such as stacked blocks and the post-panel system have a better service life of more than five years. Compared with low-cost canal blocks made from earth, timber or stones, concrete canal blocks provide better water retention and lower maintenance costs. Based on the guideline [7], concrete canal blocks need to be built permanently, must have a strength of more than 25 MPa with a concrete cover of 40 mm, and should use pozzolanic material to increase acid resistance. Furthermore, the canal blocks must perform well structurally, providing excellent water retention, good mechanical strength, and stability (solid footing) against the water stream and flooding.
The influence of pozzolanic material on the acid resistance of concrete canal blocks is reflected in enhanced strength properties, namely the compressive strength and the elastic properties (tensile strength and modulus of elasticity), and in improved water tightness, as shown in figure 6. Figures 6(a) and 6(b) display how the tensile strength and modulus of elasticity increase with increasing compressive strength. This clearly shows that the strength grade has a significant influence on the development of the elastic properties of the concrete. OPC POFA and PCC still show a gradual strength gain after immersion in peat water for up to 150 days, which is beneficial for the structural resistance and stability of concrete canal blocks. Moreover, in figure 6(c), the reduction of porosity with increasing compressive strength shows that the cement mixture composition influences the porosity of the concrete. The pozzolanic material in OPC POFA and PCC effectively induced a further reaction that refined the size and type of the pores in the concrete through the formation of Calcium-Silicate-Hydrate (C-S-H). The continuous formation of this hydration product (C-S-H) improved the strength, reduced the porosity, and increased the effectiveness of the concrete after exposure to the peat canal water. Thus, pozzolanic material is effective for concrete canal blocks exposed to an aggressive environment in the long term.
Conclusions
This study investigated the effectiveness of using pozzolanic material to improve the mechanical properties and porosity of OPC, PCC and OPC POFA concrete as a canal block material in tropical peatland. The specimens were subjected to peat water in the peat canal for up to 150 days. In general, the OPC POFA and PCC concrete containing pozzolanic material gained strength up to 150 days, while the OPC concrete showed a significant strength reduction. The rates of decrease in compressive strength, tensile strength and Young's Modulus of the concrete using pozzolanic material were lower than those of the OPC concrete. There was a greater reduction in porosity for the OPC POFA and PCC than for the OPC concrete. This improvement was due to the influence of the pozzolanic material, which increased the strength of the concrete and reduced its porosity through the formation of Calcium-Silicate-Hydrate (C-S-H). Thus, pozzolanic material is effective for concrete that is directly exposed to an aggressive environment in the long term.
Search for long-lived stopped R-hadrons decaying out-of-time with pp collisions using the ATLAS detector
An updated search is performed for gluino, top squark, or bottom squark R-hadrons that have come to rest within the ATLAS calorimeter, and decay at some later time to hadronic jets and a neutralino, using 5.0 and 22.9 fb-1 of pp collisions at 7 and 8 TeV, respectively. Candidate decay events are triggered in selected empty bunch crossings of the LHC in order to remove pp collision backgrounds. Selections based on jet shape and muon-system activity are applied to discriminate signal events from cosmic ray and beam-halo muon backgrounds. In the absence of an excess of events, improved limits are set on gluino, stop, and sbottom masses for different decays, lifetimes, and neutralino masses. With a neutralino of mass 100 GeV, the analysis excludes gluinos with mass below 832 GeV (with an expected lower limit of 731 GeV), for a gluino lifetime between 10 microseconds and 1000 s in the generic R-hadron model with equal branching ratios for decays to qqbar+neutralino and gluon+neutralino. Under the same assumptions for the neutralino mass and squark lifetime, top squarks and bottom squarks in the Regge R-hadron model are excluded with masses below 379 and 344 GeV, respectively.
R-hadron interactions in matter are highly uncertain, but some features are well predicted. The gluino, stop, or sbottom can be regarded as a heavy, non-interacting spectator, surrounded by a cloud of interacting quarks. R-hadrons may change their properties through strong interactions with the detector. Most R-mesons would turn into R-baryons [29], and they could also change their electric charge through these interactions. At the Large Hadron Collider (LHC) at CERN [30], the R-hadrons would be produced in pairs and approximately back-to-back in the plane transverse to the beam direction. Some fraction of these R-hadrons would lose all of their momentum, mainly from ionization energy loss, and come to rest within the detector volume, only to decay to a neutralino (χ0) and hadronic jets at some later time. A previous search for stopped gluino R-hadrons was performed by the D0 Collaboration [31], which excluded a signal for gluinos with masses up to 250 GeV. That analysis, however, could use only the filled crossings in the Tevatron bunch scheme and suppressed collision-related backgrounds by demanding that there was no non-diffractive interaction in the events. Search techniques similar to those described herein have also been employed by the CMS Collaboration [32,33] using 4 fb^−1 of 7 TeV data under the assumptions that m g − m χ0 > 100 GeV and BR(g → g χ0) = 100%. The resulting limit, at 95% credibility level, is m g > 640 GeV for gluino lifetimes from 10 µs to 1000 s. ATLAS has up to now studied 31 pb^−1 of data from 2010 [34], resulting in the limit m g > 341 GeV, under similar assumptions.
This analysis complements previous ATLAS searches for long-lived particles [35,36] that are less sensitive to particles with initial β 1. By relying primarily on calorimetric measurements, this analysis is also sensitive to events where R-hadron charge flipping may make reconstruction in the inner tracker and the muon system impossible. Detection of stopped R-hadrons could also lead to a measurement of their lifetime and decay properties. Moreover, the search is sensitive to any new physics scenario producing large out-of-time energy deposits in the calorimeter with minimal additional detector activity.
II. THE ATLAS DETECTOR AND EVENT RECONSTRUCTION
The ATLAS detector [37] consists of an inner tracking system (ID) surrounded by a thin superconducting solenoid, electromagnetic and hadronic calorimeters and a muon spectrometer (MS). The ID consists of silicon pixel and microstrip detectors, surrounded by a transition radiation straw-tube tracker. The calorimeter system is based on two active media for the electromagnetic and hadronic calorimeters: liquid argon in the inner barrel and end-cap/forward regions, and scintillator tiles (TileCal) in the outer barrel region for |η| < 1.7 [38]. The calorimeters are segmented into cells that have typical size 0.1 by 0.1 in η-φ space in the TileCal section. The MS, capable of reconstructing tracks within |η| < 2.7, uses toroidal bending fields generated by three large superconducting magnet systems. There are inner, middle, and outer muon detector stations, each consisting of several precision tracking layers. Local muon track segments (abbreviated to simply "muon segments" from now on) are first found in each station, before being combined into extended muon tracks.
For this analysis, events are reconstructed using "cosmic" settings for the muon system [39], to find muon segments with high efficiency for muons that are "out-of-time" with respect to the expected time for a muon created from a pp collision and traveling at near the speed of light. Such out-of-time muons are present in the two most important background sources. Cosmic ray muons are present at a random time compared to the bunch-crossing time. Beam-halo muons are in time with proton bunches but may appear early if they hit the muon chamber before particles created from the bunch crossing. Using cosmic settings for the muon reconstruction also loosens requirements on the segment direction and does not require the segment to point towards the interaction point.
Jets are constructed from clusters of calorimeter energy deposits [40] using the anti-kt jet algorithm [41] with the radius parameter set to R = 0.4, which assumes the energetic particles originate from the nominal interaction point. This assumption, while not generally valid for this analysis, has been checked and still accurately quantifies the energy released from the stopped R-hadron decays occurring in the calorimeter, as shown by comparisons of test-beam studies of calorimeter energy response with simulation [42]. Jet energy is quoted without correcting for the typical fraction of energy not deposited as ionization in the jet cone area, and the minimum jet transverse momentum (pT) is 7 GeV. ATLAS jet reconstruction algorithms are described in more detail elsewhere [43]. The missing transverse momentum (E_T^miss) is calculated from the pT of all reconstructed physics objects in the event, as well as all calorimeter energy clusters not associated with jets.
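To make the clustering step concrete, the following is a minimal, unoptimized sketch of anti-kt clustering in the spirit of Ref. [41]; it is not the FastJet implementation used by ATLAS, the recombination is a simplified pT-weighted average rather than four-momentum addition, and the `PseudoJet` container is illustrative only.

```python
# Minimal O(N^3) sketch of anti-kt jet clustering (R = 0.4).
# Real analyses use FastJet; this only illustrates the distance measures.
import math
from dataclasses import dataclass

@dataclass
class PseudoJet:
    pt: float
    eta: float
    phi: float

def delta_r2(a, b):
    dphi = math.remainder(a.phi - b.phi, 2 * math.pi)  # wrap into [-pi, pi]
    return (a.eta - b.eta) ** 2 + dphi ** 2

def antikt(particles, R=0.4):
    objs, jets = list(particles), []
    while objs:
        # Smallest beam distance d_iB = 1/pT_i^2 is held by the hardest object.
        best = min(range(len(objs)), key=lambda i: objs[i].pt ** -2)
        d_best, pair = objs[best].pt ** -2, None
        for i in range(len(objs)):
            for j in range(i + 1, len(objs)):
                dij = (min(objs[i].pt ** -2, objs[j].pt ** -2)
                       * delta_r2(objs[i], objs[j]) / R ** 2)
                if dij < d_best:
                    d_best, pair = dij, (i, j)
        if pair is None:
            jets.append(objs.pop(best))  # beam distance wins: promote to a jet
        else:
            a, b = objs[pair[0]], objs[pair[1]]
            s = a.pt + b.pt              # crude pT-weighted recombination (ignores phi wrap)
            merged = PseudoJet(s, (a.pt * a.eta + b.pt * b.eta) / s,
                               (a.pt * a.phi + b.pt * b.phi) / s)
            objs = [o for k, o in enumerate(objs) if k not in pair] + [merged]
    return jets
```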
III. LHC BUNCH STRUCTURE AND TRIGGER STRATEGY
The LHC accelerates two counter-rotating proton beams, each divided into 3564 slots for proton bunches separated by 25 ns. When protons are injected into the LHC, not every bunch slot (BCID) is filled. During 2011 and 2012, alternate BCIDs within a "bunch train" were filled, leading to collisions every 50 ns, but there were also many gaps of various lengths between the bunch trains containing adjacent unfilled BCIDs. Filled BCIDs typically had > 10^11 protons. Unfilled BCIDs could contain protons due to diffusion from filled BCIDs, but typically < 10^8 protons per BCID [44,45]. The filled and unfilled BCIDs from the two beams can combine to make three different "bunch crossing" scenarios. A paired crossing consists of a filled BCID from each beam colliding in ATLAS and is when R-hadrons would be produced. An unpaired crossing has a filled BCID from one beam and an unfilled BCID from the other. Finally, in an empty crossing the BCIDs from both beams are unfilled.
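As a simple illustration of the crossing taxonomy described above (with made-up BCID sets, not the actual LHC filling scheme), the classification per BCID could be sketched as:

```python
# Classify each BCID slot as paired, unpaired, or empty,
# given the sets of filled BCIDs in beam 1 and beam 2 (hypothetical values).
N_BCID = 3564
filled_beam1 = {1, 3, 5, 101, 103}
filled_beam2 = {1, 3, 7, 101, 105}

def crossing_type(bcid, b1, b2):
    in1, in2 = bcid in b1, bcid in b2
    if in1 and in2:
        return "paired"      # collisions; R-hadrons could be produced here
    if in1 or in2:
        return "unpaired"    # used for the beam-halo control sample
    return "empty"           # search region for out-of-time decays

types = {b: crossing_type(b, filled_beam1, filled_beam2) for b in range(N_BCID)}
print(sum(t == "empty" for t in types.values()), "empty BCIDs")
```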
Standard ATLAS analyses use data collected from the paired crossings, while this analysis searches for physics signatures of metastable R-hadrons produced in paired crossings and decaying during selected empty crossings. This is accomplished with a set of dedicated low-threshold calorimeter triggers that can fire only in the selected empty or unpaired crossings where the background to this search is much lower. The type of each bunch crossing is defined at the start of each LHC "fill" using the ATLAS beam monitors [46]. Crossings at least six BCIDs after a filled crossing are selected, to reduce background in the muon system.
ATLAS has a three-level trigger system consisting of one hardware and two software levels [47]. Signal candidates are collected using a hardware trigger requiring localized calorimeter activity, a so-called jet trigger, with a 30 GeV transverse energy threshold. This trigger could fire only during an empty crossing at least 125 ns after the most recent paired crossing, such that the detector is mostly free of background from previous interactions. The highest-level software trigger then requires a jet with pT > 50 GeV, |η| < 1.3, and E_T^miss > 50 GeV. The software trigger is more robust against detector noise, keeping the final trigger rate to < 1 Hz. After offline reconstruction, only 5% of events with more than two muon segments are saved, and no events with more than 20 muon segments are saved, to reduce the data storage needs since events with muon segments are vetoed in the analysis. A data sample enriched with beam-halo muons is also accepted with a lower-threshold jet trigger that fires in the unpaired crossings, and a sample is collected using a trigger that accepts random events from the empty crossings to study background conditions.
IV. DATA SAMPLES
The data used are summarized in Table I, where the corresponding delivered integrated luminosity and recorded live time in the selected empty BCIDs are provided. The early periods of data taking in 2011 are selected as a "cosmic background region" to estimate the rate of background events (mostly from cosmic ray muons, as discussed below). This is motivated by the low integrated luminosity and small number of paired crossings during these initial periods. For a typical signal model that this analysis excludes, less than 3% of events in the cosmic background region are expected to arise from signal processes. As discussed in detail in
V. SIMULATION OF R-HADRONS
Monte Carlo simulations are used primarily to determine the reconstruction efficiency and stopping fraction of the R-hadrons, and to study associated systematic uncertainties on the quantities used in the selections. The simulated samples have gluino or squark masses in the range 300-1000 GeV, to which the present analysis is sensitive. The Pythia program [48], version 6.427, with CTEQ6L1 parton distribution functions (PDF) [49], is used to simulate pair production of gluinos, stops, or sbottoms. The string hadronization model [50], incorporating specialized hadronization routines [1] for R-hadrons, is used to produce final states containing two R-hadrons.
To compensate for the fact that R-hadron scattering is not strongly constrained, the simulation of R-hadron interactions with matter is handled by a special detector response simulation [29] using Geant4 [51,52] routines based on several scattering and spectrum models with different sets of assumptions: the Generic [29,53], Regge [54,55], and Intermediate [56] models. Each model makes different assumptions about the R-hadron nuclear interactions and the spectrum of R-hadron states.
Generic: Limited constraints on allowed stable states permit the occurrence of doubly-charged R-hadrons and a wide variety of charge-exchange scenarios.
The nuclear scattering model is purely phase-space driven. This model is chosen as the nominal model for gluino R-hadrons.
Regge: Only one (electrically neutral) baryonic state is allowed. The nuclear scattering model employs a triple-Regge formalism. This model is chosen as the nominal model for stop and sbottom R-hadrons.
Intermediate: The spectrum is more restricted than the generic model, while still featuring charged baryonic states. The scattering model used is that of the generic model.
In the simulation, roughly equal numbers of singly-charged and neutral R-hadrons are generated. They undergo an average of 4-6 nuclear interactions with the detector, depending on the R-hadron model, during which their charge can change. The R-hadrons are created on average with about 200 GeV of kinetic energy. Those created with less than about 20 GeV of kinetic energy tend to lose it all, mostly through ionization, and stop in the detector, as shown in Fig. 1. Those that stop in the detector are all charged when they stop, with roughly equal numbers of positive and negative singly-charged states. If the doubly-charged state is allowed (as in the generic model), about half of the stopped R-hadrons would be doubly positive charged.
If a simulated R-hadron comes to rest in the ATLAS detector volume, its location is recorded. Such an R-hadron would bind to a heavy nucleus of an atom in the detector, once it slows down sufficiently, and remain in place indefinitely [57]. Table II shows the probability for a simulated signal event to have at least one R-hadron stopped within the detector volume, for the models considered. The stopping fraction shows no significant dependence on the gluino, stop, or sbottom mass within the statistical uncertainty of the simulation. The stopping locations are used as input for a second step of Pythia where the decays of the R-hadrons are simulated. A uniform random time translation is applied in a 25 ns time window, from −15 to +10 ns, relative to the bunch-crossing time, since the R-hadron would decay at a random time relative to the bunch structure of the LHC.
Using the randomly-triggered data, the calorimeter activity due to preceding interactions is found to be negligible compared to the jet energy uncertainty and is ignored. Different models allow the gluinos to decay via the radiative process, g → g χ0, or via g → q q χ0. The results are interpreted assuming either a 50% branching ratio to g χ0 and 50% to q q χ0, or 100% to t t χ0 as would be the case if the top squark was significantly lighter than the other squarks. Reconstruction efficiencies are typically ≈ 20% higher for q q χ0 compared to g χ0 decays. The stop (sbottom) is assumed to always decay to a top (bottom) quark and a neutralino. The neutralino mass, m χ0, is fixed either to 100 GeV, or such that there is only 100 GeV of free energy left in the decay (a compressed scenario).
VI. CANDIDATE SELECTION
First, events are required to pass tight data quality constraints that verify that all parts of the detector were operating normally. Events with calorimeter noise bursts are rejected; this has negligible impact on signal efficiency. The basic selection criteria, imposed to isolate signal-like events from background events, demand at least one high energy jet and no muon segments reconstructed in the muon system passing selections. Since most of the R-hadrons are produced centrally in η, the analysis uses only the central barrel of the calorimeter and requires that the leading jet satisfies |η| < 1.2. In order to reduce the background, the analysis demands the leading jet energy > 50 GeV. Up to five additional jets are allowed, more than expected for the signal models considered.
The fractional missing E_T is the E_T^miss divided by the leading jet pT and is required to be > 0.5. This eliminates background from beam-gas and residual pp events, and has minimal impact on the signal efficiencies. To remove events with a single, narrow energy spike in the calorimeter, due to noise in the electronics or data corruption, events are vetoed where the smallest number of cells containing 90% of the energy deposit of the leading jet (n90) is fewer than four. This n90>3 requirement also reduces other background significantly since most large energy deposits from muons in the calorimeter result from hard bremsstrahlung photons, which create short, narrow electromagnetic showers. Large, broad, hadronic showers from deep-inelastic scattering of the muons off nuclei are far rarer. To further exploit the difference between calorimeter energy deposits from muons and the expected signal, the jet width is required to be > 0.04, where jet width is the pT-weighted ∆R average of each constituent from the jet axis and ∆R = √((∆η)² + (∆φ)²). The fraction of the leading jet energy deposited in the TileCal must be > 0.5, to reduce background from beam-halo where the incoming muon cannot be detected due to lack of MS coverage at low radius in the forward region.
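A minimal sketch of the jet-width variable defined above (the pT-weighted average ∆R of the constituents from the jet axis); the constituent list and values are hypothetical, and the real computation runs on ATLAS calorimeter clusters:

```python
# Jet width = sum_i(pT_i * dR_i) / sum_i(pT_i),
# with dR = sqrt(d_eta^2 + d_phi^2). Constituents are (pt, eta, phi) tuples.
import math

def delta_r(eta1, phi1, eta2, phi2):
    dphi = math.remainder(phi1 - phi2, 2 * math.pi)  # wrap phi into [-pi, pi]
    return math.hypot(eta1 - eta2, dphi)

def jet_width(constituents, jet_eta, jet_phi):
    sum_pt = sum(pt for pt, _, _ in constituents)
    weighted = sum(pt * delta_r(eta, phi, jet_eta, jet_phi)
                   for pt, eta, phi in constituents)
    return weighted / sum_pt

# Hypothetical constituents of one jet (pt [GeV], eta, phi):
cells = [(40.0, 0.10, 1.00), (25.0, 0.14, 1.05), (10.0, 0.02, 0.93)]
print(jet_width(cells, jet_eta=0.10, jet_phi=1.00))  # compared against the > 0.04 cut
```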
The analysis then requires that no muon segment with more than four associated hits in a muon station be reconstructed in the event. Muon segments with a small number of measurements are often present from cavern background, noise, and pile-up, as studied in the randomly-triggered data. The events before the muon segment veto, only requiring the leading jet energy > 50 GeV, are studied as a control sample, since the expected signal-to-background ratio is small. Comparisons of the distributions of several jet variables between backgrounds and data can be seen in Fig. 2 for events in this control sample. The backgrounds shown in these figures are estimated using the techniques described in Sec. VII. To remove overlap between the cosmic ray and beam-halo backgrounds in these plots, an event is not considered "cosmic" if it has a muon segment with more than four hits and an angle within 0.2 radians from parallel to the beamline. The same distributions are shown for events after the muon segment veto in Fig. 3. Finally, a leading jet energy requirement of > 100 or > 300 GeV defines two signal regions, sensitive to either a small or large mass difference between the R-hadron and the neutralino in the signal model, respectively. Table II shows the efficiencies of these selections on the signal simulations, and Table III presents the number of data events surviving each of the imposed selection criteria.
VII. BACKGROUND ESTIMATION
A. Beam-Halo Background
Protons in either beam can interact with residual gas in the beampipe, or with the beampipe itself if they stray off orbit, leading to a hadronic shower. If the interaction takes place several hundred meters from ATLAS, most of the shower is absorbed in shielding or surrounding material before reaching ATLAS. The muons from the shower can survive and enter the detector, traveling parallel to the beamline and in time with the (filled) proton BCIDs [58,59]. The unpaired-crossing data with a jet passing the selection criteria are dominantly beam-halo background. Figure 4 (left) shows an event display of an example beam-halo background candidate event.
To estimate the number of beam-halo events in the empty crossings of the search region, an orthogonal sample of events from the unpaired crossings is used. The ratio of the number of beam-halo events that pass the jet criteria but fail to have a muon segment identified to those that do have a muon segment identified is measured. This ratio is then multiplied by the number of beam-halo events observed in the signal region that do have an identified muon segment to give the estimate of the number that do not have a muon segment and thus contribute to background in the signal region. Beam-halo events in the unpaired-crossing data are identified by applying a modified version of the search selection criteria. The muon segment veto is removed, events with leading jet energy > 50 GeV are used, and the n90>3 requirement is not applied. Studies show that the muon efficiency is not significantly correlated with the energy or shape of the jet in the calorimeter for beam-halo events. A muon segment is required to be nearly parallel to the beam pipe, θ < 0.2 or θ > (π − 0.2), and have more than four muon station measurements. Next, beam-halo events that failed to leave a muon segment are counted, allowing the ratio of beam-halo events with no muon segment identified to be calculated. Then the number of beam-halo muons in the search region (the empty crossings) that did leave a muon segment is counted. The same selection criteria as listed in Table III are used, omitting the 100 GeV requirement. However, instead of a muon segment veto, a parallel muon segment is required. If no events are present, the uncertainty is taken as ±1 event. Findings are summarized in Table IV.

FIG. 3. The event yields in the signal region for candidates with all selection criteria applied (in Table III) including the muon segment veto, but omitting the jet energy > 100 GeV requirement. All samples are scaled to represent their anticipated yields in the search region. The top hashed band shows the total statistical uncertainty on the background estimate.

TABLE III. Number of events after selections for data in the cosmic background and search regions, defined in Table I. The cosmic background region data are shown before and after scaling to the search region, which accounts for the different detector live time and accidental muon segment veto efficiency (if applicable) between the background and search regions. The cumulative efficiency after each cut for a simulated signal of g → g/qq χ0 decays with a gluino mass of 800 GeV and fixed neutralino mass of 100 GeV is also shown. The uncertainties are statistical only.
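A minimal numerical sketch of this ratio method (the counts below are placeholders, not the values in Table IV):

```python
# Beam-halo estimate in the search region:
#   N_halo(no segment) ≈ (N_fail / N_pass)_unpaired * N_pass(search region)
# Hypothetical counts for illustration only.
n_fail_unpaired = 12   # unpaired crossings: jet selection passed, no parallel muon segment
n_pass_unpaired = 480  # unpaired crossings: jet selection passed, parallel muon segment found
n_pass_search = 8      # empty crossings: selections passed, parallel muon segment required

ratio = n_fail_unpaired / n_pass_unpaired
halo_estimate = ratio * n_pass_search
# Statistical uncertainty from simple error propagation on the three Poisson counts.
rel_unc = (1 / n_fail_unpaired + 1 / n_pass_unpaired + 1 / max(n_pass_search, 1)) ** 0.5
print(f"beam-halo background: {halo_estimate:.2f} ± {halo_estimate * rel_unc:.2f} events")
```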
B. Cosmic Ray Muon Background

The background from cosmic ray muons is estimated using the cosmic background region (described in Sec. IV). The beam-halo background is estimated for this data sample as described above, and this estimate is subtracted from the observed events passing all selections. Finally, this number of cosmic ray events in the cosmic region is scaled by the ratio of the signal-region to cosmic-region live times to estimate the cosmic ray background in the signal region. Additionally, the cosmic background estimate is multiplied by the muon-veto efficiency (see Sec. IX) to account for the rejection of background caused by the muon veto. An example cosmic-ray muon background event candidate is shown in Figure 4 (right).
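The corresponding bookkeeping, again with placeholder numbers rather than the published yields, might look like:

```python
# Cosmic-ray background: (observed - beam-halo) in the cosmic region,
# scaled by the live-time ratio and the accidental muon-veto efficiency.
observed_cosmic_region = 5.0     # events passing all selections in the cosmic region (hypothetical)
beamhalo_in_cosmic_region = 0.4  # estimated beam-halo contamination there (hypothetical)
livetime_search = 1.15e6         # empty-crossing live time in the search region, in s (hypothetical)
livetime_cosmic = 2.3e5          # live time in the cosmic background region, in s (hypothetical)
muon_veto_eff = 0.85             # probability a background event survives the accidental muon veto

cosmic_estimate = (observed_cosmic_region - beamhalo_in_cosmic_region) \
                  * (livetime_search / livetime_cosmic) * muon_veto_eff
print(f"expected cosmic-ray background in the search region: {cosmic_estimate:.1f} events")
```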
VIII. EVENT YIELDS
Some candidate event displays are shown in Fig. 5. Distributions of jet variables are plotted for events in the jet energy > 100 GeV signal region after applying all selection criteria and are compared to the estimated backgrounds in Fig. 6. The shapes of these distributions and event yields are consistent. Table V shows the signal region event yields and background estimates. There is no evidence of an excess of events over the background estimate.

TABLE IV. Estimate of beam-halo events entering the search region, as described in Sec. VII A. The ratio of the number of beam-halo muons that do not leave a segment to the number that do leave a segment is calculated from the unpaired data. This ratio is then applied to the number of events in the search region where a segment was reconstructed to yield the beam-halo estimate. The quoted uncertainties are statistical only.
IX. CONTRIBUTIONS TO SIGNAL EFFICIENCY
Quantifying the signal efficiency for the stopped R-hadron search presents several unique challenges due to the non-prompt nature of their decays. Specifically, there are four sources of inefficiency: stopping fraction (Sec. V), reconstruction efficiency (Table II), accidental muon veto, and probability to have the decay occur in an empty crossing (timing acceptance). Since the first two have been discussed above, only the accidental muon veto and timing acceptance are described here.
A. Accidental Muon Veto
Operating in the empty crossings has the significant advantage of eliminating collision backgrounds. However, because such a stringent muon activity veto is employed, a significant number of events are rejected in the offline analysis due to spurious track segments in the muon system, which are not modeled in the signal simulation. Both β-decays from activated nuclei and δ-electrons could produce segments with more than four muon station measurements. This effect is separate from a signal decay producing a muon segment that then vetoes the event. To study the rate of muon segments, events from the empty random trigger data in 2011 and 2012 are examined as a function of run number, since the effect can depend strongly on beam conditions. The rate of these events that have a muon segment from noise or other background is calculated. The efficiency per run to pass the muon segment veto is applied on a live-time weighted basis to the cosmic background estimate and varies from 98% at the start of 2011 to 70% at the end of 2012. It is still applied to the cosmic background estimate after the muon veto, since the probability to have the cosmic background event pass the analysis selections and contribute to the signal region events depends on it passing the muon veto. The beam-halo background estimate already implicitly accounts for this effect across run periods. For the signal, this effect is accounted for inside the timing acceptance calculation, on a per-run basis.
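A minimal sketch of applying a per-run veto-survival efficiency on a live-time-weighted basis (run numbers, live times and efficiencies are invented for illustration):

```python
# Live-time-weighted average of the per-run probability to pass the
# accidental muon-segment veto, as applied to the cosmic background estimate.
runs = [
    {"run": 180001, "livetime_s": 4.0e4, "veto_pass_eff": 0.98},  # hypothetical values
    {"run": 205010, "livetime_s": 9.0e4, "veto_pass_eff": 0.85},
    {"run": 215321, "livetime_s": 7.0e4, "veto_pass_eff": 0.70},
]
total_livetime = sum(r["livetime_s"] for r in runs)
weighted_eff = sum(r["veto_pass_eff"] * r["livetime_s"] for r in runs) / total_livetime
print(f"live-time-weighted accidental-veto efficiency: {weighted_eff:.3f}")
```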
B. Timing Acceptance
The expected signal decay rate does not scale with instantaneous luminosity. Rather, at any moment in time, the decay rate is a function of the hypothetical R-hadron lifetime and the entire history of delivered luminosity. For example, for longer R-hadron lifetimes the decay rate anticipated in today's run is boosted by luminosity delivered yesterday. To address the complicated time behavior of the R-hadron decays, a timing acceptance is defined for each R-hadron lifetime hypothesis, T(τ), as the number of R-hadrons decaying in an empty crossing divided by the total number that stopped. The T(τ) factor thus accounts for the full history of the delivered luminosity and live time recorded in empty crossings. This means the number of R-hadrons expected to be reconstructed is L × σ × ε_stopping × ε_recon × T(τ), where L is the integrated luminosity, σ is the R-hadron production cross section weighted by integrated luminosity at 7 and 8 TeV, ε_stopping is the stopping fraction, and ε_recon is the reconstruction efficiency.
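For concreteness, the expected-yield formula can be evaluated as below; every input is a placeholder, not a measured value from this analysis:

```python
# Expected reconstructed signal events: N = L * sigma * eps_stop * eps_reco * T(tau)
lumi_pb = 27_900.0        # combined integrated luminosity in pb^-1 (hypothetical rounding)
sigma_pb = 0.05           # lumi-weighted pair-production cross section in pb (hypothetical)
eps_stopping = 0.06       # fraction of events with >= 1 stopped R-hadron (hypothetical)
eps_reco = 0.30           # reconstruction + selection efficiency (hypothetical)
timing_acceptance = 0.25  # T(tau) for the assumed lifetime (hypothetical)

n_expected = lumi_pb * sigma_pb * eps_stopping * eps_reco * timing_acceptance
print(f"expected signal yield: {n_expected:.1f} events")
```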
To calculate the timing acceptance for the actual 2011 and 2012 LHC and ATLAS run schedule, measurements are combined of the delivered luminosity in each BCID, the bunch structure of each LHC fill, and the live time recorded in empty crossings during each fill, all kept in the ATLAS online conditions database. The efficiency calculation is split into short and long R-hadron lifetimes, to simplify the calculation. For R-hadron lifetimes less than 10 seconds, the bunch structure is taken into account, but not the possibility that an R-hadron produced in one run could decay in a later one. For longer R-hadron lifetimes, the bunch structure is averaged over, but the chance that stopped R-hadrons from one run decay in a later one is considered. The resulting timing acceptance is presented in Fig. 7.
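As a rough sketch of the long-lifetime regime (bunch structure averaged over), the acceptance can be approximated by weighting each data-taking period by the fraction of previously stopped R-hadrons that decay during its recorded empty-crossing live time; the run list, live fractions, midpoint-production approximation and lifetime below are illustrative assumptions, not the inputs used for Fig. 7:

```python
# Approximate T(tau) for long lifetimes: sum over runs of the probability that an
# R-hadron stopped earlier decays inside that run's empty-crossing live time.
import math

# Each run: start/end time [s], stopped R-hadrons produced (proportional to lumi),
# and the fraction of wall-clock time covered by recorded empty-crossing live time.
runs = [
    {"t0": 0.0,   "t1": 5.0e4, "n_stopped": 100.0, "empty_live_frac": 0.05},  # hypothetical
    {"t0": 8.0e4, "t1": 1.6e5, "n_stopped": 250.0, "empty_live_frac": 0.04},
    {"t0": 2.0e5, "t1": 3.0e5, "n_stopped": 400.0, "empty_live_frac": 0.06},
]

def timing_acceptance(tau_s):
    decays_in_empty, total_stopped = 0.0, 0.0
    for prod in runs:                             # production approximated at the run midpoint
        total_stopped += prod["n_stopped"]
        t_prod = 0.5 * (prod["t0"] + prod["t1"])
        for rec in runs:
            if rec["t1"] <= t_prod:
                continue
            start = max(rec["t0"], t_prod)
            frac_decay = (math.exp(-(start - t_prod) / tau_s)
                          - math.exp(-(rec["t1"] - t_prod) / tau_s))
            decays_in_empty += prod["n_stopped"] * frac_decay * rec["empty_live_frac"]
    return decays_in_empty / total_stopped

print(f"T(tau = 1e5 s) ~ {timing_acceptance(1e5):.3f}")
```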
X. SYSTEMATIC UNCERTAINTIES
Three sources of systematic uncertainty on the signal efficiency are studied: the R-hadron interaction with matter, the out-of-time decays in the calorimeters, and the effect of the selection criteria. The total uncertainties, added in quadrature, are shown in Table II. In addition to these, a 2.6% uncertainty is assigned to the luminosity measurement [60], fully correlated between the 2011 and 2012 data. To account for occasional dead-time due to high trigger rates, a 5% uncertainty is assigned to the timing acceptance; this accounts for any mismodeling of the accidental muon veto as well. The gluino, stop, or sbottom pair-production cross-section uncertainty is not included as a systematic uncertainty but is used when extracting limits on their mass by finding the intersection with the cross section at −1σ of its uncertainty.
A. R-hadron-Matter Interactions
The various simulated signal samples are used to estimate the systematic uncertainty on the stopping fraction due to the scattering model. There are two sources of theoretical uncertainty: the spectrum of R-hadrons and nuclear interactions. To estimate the effect from different allowed R-hadron states, three different scattering models are employed: generic, Regge, and intermediate (see Sec. V). Each allows a different set of charged states that affect the R-hadron's electromagnetic interaction with the calorimeters. Since these models have large differences for the R-hadron stopping fraction, limits are quoted separately for each model, rather than including the differences as a systematic uncertainty on the signal efficiency. There is also uncertainty from the modeling of nuclear interactions of the R-hadron with the calorimeter since these can affect the stopping fraction. The effect is estimated by recalculating the stopping fraction after doubling and halving the nuclear cross section. The difference gives a relative uncertainty of 11%, which is used as the systematic uncertainty in limit setting.

FIG. 6. The event yields in the signal region for candidates with all selections (in Table III) except jet energy > 300 GeV. All samples are scaled to represent their anticipated yields in the search region. The top hashed band shows the total statistical uncertainty on the background estimate.

TABLE V. The number of expected and observed events (with statistical uncertainties) corresponding to each of the selection criteria. The number of expected cosmic events in the search region is calculated by subtracting the expected number of beam-halo background events in the cosmic region (see Table IV) from the number of events observed in the cosmic region (see Table III) and then scaling to the search region. The scaling is simply the ratio of recorded empty live times for the two regions (see Table I) before the muon segment veto is applied. After the muon segment veto is applied, the scaling also accounts for the difference in the muon segment veto efficiency between the cosmic and search regions.
B. Timing in the Calorimeters
Since the R-hadron decay is not synchronized with a BCID, it is possible that the calorimeters respond differently to the energy deposits in the simulated signals than in data. The simulation considers only a single BCID for each event; it does not simulate the trigger in multiple BCIDs and the firing of the trigger for the first BCID that passes the trigger. In reality, a decay at −15 ns relative to a given BCID might fire the trigger for that BCID, or it may fire the trigger for the following BCID. The reconstructed energy response of the calorimeter can vary between these two cases by up to 10% since the reconstruction is optimized for in-time energy deposits. To estimate the systematic uncertainty, the total number of simulated signal events passing the offline selections is studied when varying the timing offset by 5 ns in each direction (keeping the 25 ns range). This variation conservatively covers the timing difference observed between simulated signal jets and cosmic ray muon showers. The minimum and maximum efficiency for each mass point is calculated, and the difference is used as the uncertainty, which is always less than 3% across all mass points.
C. Selection Criteria
The systematic uncertainty on the signal efficiency due to selection criteria is evaluated by varying each criterion up and down by its known uncertainty. The uncertainties from each criterion are combined in quadrature and the results are shown in Table II. Varying only the jet energy scale produces most of the total uncertainty from the selection criteria. The jet energy scale uncertainty is taken to be ±10% to allow for non-pointing R-hadron decays and is significantly larger than is used in standard ATLAS analyses. Although test-beam studies showed the energy response agrees between data and simulation for hadronic showers to within a few percent [42], even for non-projective showers, a larger uncertainty is conservatively assigned to cover possible differences between single pions and full jets, and between the test-beam detectors studied and the final ATLAS calorimeter.
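A small sketch of this cut-variation bookkeeping (the per-cut efficiency shifts are invented placeholders):

```python
# Combine per-criterion efficiency variations in quadrature.
# Each entry: largest relative efficiency change when that cut is varied up/down.
variations = {
    "jet energy scale (±10%)": 0.12,  # hypothetical relative shifts
    "jet width cut": 0.02,
    "n90 cut": 0.01,
    "TileCal energy fraction": 0.03,
}
total = sum(v ** 2 for v in variations.values()) ** 0.5
print(f"selection-criteria systematic: {total:.1%}")
```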
D. Systematic Uncertainties on Background Yield
The systematic uncertainty on the estimated cosmic background arises from the small number of events in the cosmic background region. This statistical uncertainty is scaled by the same factor used to propagate the cosmic background region data yield into expectations of background events in the search regions. Similarly, for the beam-halo background, a systematic uncertainty is assigned based on the statistical uncertainty of the estimates in the search regions.
XI. RESULTS
The predicted number of background events agrees well with the observed number of events in the search region, as shown in Table V. Using these yields, upper limits on the number of pair-produced gluino, stop, or sbottom signal events are calculated with a simple event-counting method and then interpreted as a function of their masses for a given range of lifetimes.
A. Limit Setting
A Bayesian method is used to set 95% credibility-level upper limits on the number of signal events that could have been produced. For each limit extraction, pseudo-experiments are run. The number of events is sampled from a Poisson distribution, with mean equal to the signal plus background expectation. The systematic uncertainties are taken into account by varying the Poisson mean according to the effect of variations of the sources of the systematic uncertainties [61]. The latter variations are assumed to follow a Gaussian distribution, which is convolved with the Poisson function. A flat prior is used for the signal strength, to be consistent with previous analyses. A Poisson prior gives less conservative limits that are within 10% of those obtained with the flat prior. Since little background is expected and no pseudo-experiment may produce fewer than zero observed events, the distribution of upper limits is bounded from below at −1.15σ. The input data for the limit-setting algorithm can be seen in Table V. The leading jet energy > 300 GeV region is used to set the limits, except for the compressed models with a small difference between the gluino or squark mass and m χ0, where the leading jet energy > 100 GeV signal region is used.
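To illustrate the counting-experiment logic (not the actual ATLAS limit code, and with placeholder inputs), a Bayesian 95% upper limit on the signal yield with a flat prior and Gaussian nuisance smearing can be sketched as:

```python
# Bayesian 95% CL upper limit on the signal yield s for a counting experiment:
# posterior(s) ∝ ∫ Poisson(n_obs | eps*s + b) dG(eps) dG(b), flat prior in s.
import math
import numpy as np

rng = np.random.default_rng(1)

n_obs = 0                   # observed events (hypothetical)
b, sig_b = 0.8, 0.4         # background expectation and its uncertainty (hypothetical)
eps, sig_eps = 1.0, 0.10    # signal efficiency scale and relative systematic (hypothetical)

s_grid = np.linspace(0.0, 15.0, 1501)
post = np.zeros_like(s_grid)
for _ in range(5000):                               # marginalize nuisances with toys
    b_t = max(rng.normal(b, sig_b), 0.0)
    eps_t = max(rng.normal(eps, eps * sig_eps), 0.0)
    mu = eps_t * s_grid + b_t
    post += np.exp(-mu) * mu ** n_obs / math.factorial(n_obs)

post /= post.sum()
cdf = np.cumsum(post)
upper95 = s_grid[np.searchsorted(cdf, 0.95)]
print(f"95% CL upper limit on signal yield: {upper95:.2f} events")
```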
Signal cross sections are calculated to next-to-leading order in the strong coupling constant, adding the resummation of soft gluon emission at next-to-leading-logarithmic accuracy (NLO+NLL) [62][63][64][65][66]. The nominal cross section and the uncertainty are taken from an envelope of cross-section predictions using different PDF sets and factorization and renormalization scales, as described in Ref. [67]. The number of expected signal events is given by the signal cross sections at 7 and 8 TeV, weighted by the integrated luminosities in the 2011 and 2012 data. Figures 8 and 9 show the limits on the number of produced signal events for the various signal models considered, for R-hadron lifetimes in the plateau acceptance region between 10^−5 and 10^3 seconds.
To provide limits in terms of the gluino, stop, or sbottom masses, the mass is found where the theoretically predicted number of signal events, using the signal cross sections at −1σ of their uncertainties, intersects the experimental limit on the number of signal events produced. The gluino, stop, and sbottom mass limits for each of the signal models, for lifetimes in the plateau acceptance region between 10^−5 and 10^3 seconds, can be seen in Table VI. Figure 10 shows the mass limits for various signal models as a function of lifetime. Limits are also studied as a function of the mass splitting between the gluino or squark and the neutralino. Figure 11 shows the total reconstruction efficiency for a gluino mass of 600 GeV and mass limits for gluino R-hadrons, in the generic R-hadron model with lifetimes in the plateau acceptance region between 10^−5 and 10^3 seconds, as a function of this mass splitting between the gluino and the neutralino. A similar dependence exists for stop and sbottom efficiencies and mass limits.
XII. SUMMARY
An updated search is presented for stopped long-lived gluino, stop, and sbottom R-hadrons decaying in the calorimeter, using a jet trigger operating in empty crossings of the LHC. Data from the ATLAS experiment recorded in 2011 and 2012 are used, from 5.0 and 22.9 fb^−1 of pp collisions at 7 and 8 TeV, respectively. The remaining events after all selections are compatible with the expected rate from backgrounds, predominantly cosmic ray and beam-halo muons where no muon track segment is identified. Limits are set on the gluino, stop, and sbottom masses, for different decays, lifetimes, and neutralino masses. With a neutralino of mass 100 GeV, the analysis excludes m g < 832 GeV (with an expected lower limit of 731 GeV), for a gluino lifetime between 10 µs and 1000 s in the generic R-hadron model with equal branching ratios for decays to q q χ0 and g χ0. For the same m χ0 and squark lifetime assumptions, stop and sbottom are excluded with masses below 379 and 344 GeV, respectively, in the Regge R-hadron model.

The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN and the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA) and in the Tier-2 facilities worldwide.

FIG. 8. Bayesian upper limits on gluino events produced versus gluino mass for the various signal models considered, with gluino lifetimes in the plateau acceptance region between 10^−5 and 10^3 seconds, compared to the theoretical expectations.

FIG. 9. Bayesian upper limits on stop or sbottom events produced versus stop/sbottom mass for the various signal models considered, with stop/sbottom lifetimes in the plateau acceptance region between 10^−5 and 10^3 seconds, compared to the theoretical expectations.

FIG. 11. Total reconstruction efficiency and Bayesian lower limits on gluino mass, as a function of the mass splitting between the gluino and the neutralino, in the generic R-hadron model with gluino lifetimes in the plateau acceptance region between 10^−5 and 10^3 seconds. A 600 GeV gluino is used as a reference for the reconstruction efficiencies.
FIG. 1. The kinetic energies of simulated gluino R-hadrons with a mass of 800 GeV in the generic model are shown at initial production (black). The energy lost through hadronic interactions with the detector (red, dotted), electromagnetic ionization (red, dashed), and total (red, dashed-dotted) are also shown, for those R-hadrons that have stopped.

FIG. 2. Jet variables for the empty crossing signal triggers. The requirements in Table III are applied except for leading jet energy > 100 GeV and the muon segment veto. For the quantity being plotted, the corresponding selection criterion is not applied. Histograms are normalized to the expected number of events in the search region. Note that only 5% of data events with more than two muon track segments were kept; they are scaled on these figures by a factor of 20. The top hashed band shows the total statistical uncertainty on the background estimate. The background estimate does not correctly describe beam-halo with low TileCal energy fraction; this region is not used in the analysis. The TileCal energy fraction can be negative or greater than one due to pileup-subtraction corrections.

FIG. 4. Left: A beam-halo candidate event during an unpaired crossing in data. This event passed all the selection criteria except for the muon segment veto. Right: A cosmic ray muon candidate event during an empty crossing in data. This event passed all the selection criteria except for the muon segment veto. In both plots, white squares filled with red squares show reconstructed energy deposits in TileCal cells above noise threshold (the fraction of red area indicates the amount of energy in the cell), purple bars show a histogram of total energy in projective TileCal towers, jets are shown by open red trapezoids, muon segments by green line segments in each muon station, and muon tracks by continuous thin yellow lines. E_T^miss is shown as an orange arrow.

FIG. 5. Some candidate event displays from 2011 (top) and 2012 (bottom) data passing all selections. White squares filled with red squares show reconstructed energy deposits in TileCal cells above noise threshold (the fraction of red area indicates the amount of energy in the cell), purple bars show a histogram of total energy in projective TileCal towers, and jets are shown by red semi-transparent trapezoids. Muon segments are drawn but none are reconstructed in these events. E_T^miss is shown as an orange arrow.

TABLE I. The data analyzed in this work and the corresponding integrated delivered luminosity, center-of-mass energy, and live time of the ATLAS detector in the selected empty BCIDs during those periods.

TABLE II. The selection efficiency after all selection criteria have been applied, its systematic uncertainty, and the stopping fraction, for all signal samples.

TABLE VI. Bayesian lower limits on gluino, stop, and sbottom masses for the various signal models considered, with lifetimes in the plateau acceptance region between 10^−5 and 10^3 seconds.

FIG. 10. Bayesian lower limits on gluino, stop, or sbottom mass versus its lifetime, for the two signal regions, with R-hadron lifetimes in the plateau acceptance region between 10^−5 and 10^3 seconds. An 800 GeV gluino (stop or sbottom), in the generic (Regge) R-hadron model is used as a reference for the stopping fraction and reconstruction efficiency.
The Promise and Pitfalls of Government Guidance Funds in China
Abstract In 2005, the Chinese government deployed a new financial instrument to accelerate technological catch-up: government guidance funds (GGFs). These are funds established by central and local governments partnering with private venture capital to invest in state-selected priority sectors. GGFs promise to significantly broaden capital access for high-tech ventures that normally struggle to secure funding. The aggregate numbers are impressive: by 2021, there were more than 1,800 GGFs, with an estimated target capital size of US$1.52 trillion. In practice, however, there are notable gaps between policy ambition and outcomes. Our analysis finds that realized capital fell significantly short of targets, particularly in non-coastal regions, and only 26 per cent of GGFs had met their target capital size by 2021. Several factors account for this policy implementation gap: the lack of quality private-sector partners and ventures, leadership turnover and the inherent difficulties in evaluating the performance of GGFs.
Since China's market opening, the drivers of state-led development have undergone a structural evolution from export-driven mass industrialization in the 1980s and 1990s to infrastructure construction in the 2000s and 2010s. Entering the present decade, the engine of growth shifted to innovation and technology. Under Xi Jinping's 习近平 leadership, technological catch-up has been elevated to the centrepiece of China's long-term development strategy. Spurred on by United States-China competition and by technological blockades from the US government, Xi is determined to free China from its dependence on foreign actors by advancing core technology indigenously.

In an effort to boost China's indigenous innovation, in 2005 the National Development and Reform Commission (NDRC), then under the leadership of Hu Jintao 胡锦涛 and Wen Jiabao 温家宝, introduced a new financial instrument: government guidance funds (zhengfu yindao jijin 政府引导基金, GGFs hereafter). These are funds established by central and local governments partnering with private capital to invest in state-selected priority sectors. The aggregate numbers are impressive. By the end of 2021, there were more than 1,800 GGFs throughout China, with an estimated target capital of 10.18 trillion yuan (US$1.52 trillion).1 To put this figure in context, US federal funding for research and development (R&D) in 2021 was only US$157.8 billion.2 Despite the strategic importance and formidable stakes of GGFs for China's technological ambitions and US-China competition, these funds have received surprisingly little scholarly attention. One notable exception is a 2021 study by Fenghua Pan, Fangzhu Zhang and Fulong Wu, who introduced the policy goals behind GGFs as a new financial instrument and the critical role of government entities in designing, seed funding and setting capital targets for GGFs.3 However, their study does not examine how GGFs perform in practice or whether they meet their ambitious targets. Other policy studies, such as that by Tianlei Huang,4 bring attention to the most prominent funds, particularly the National Integrated Circuit Industry Investment Fund (NICIIF) (Guojia jicheng dianlu chanye touzi jijin 国家集成电路产业投资基金), which focused on semiconductors, but these studies have rarely examined implementation outcomes across all GGFs.5

Building on this nascent literature, our study examines the promise and pitfalls of GGFs. In principle, by leveraging government seed funding and private venture investment, GGFs can significantly broaden capital access for early-stage, risky, high-tech ventures that normally struggle to secure private funding. But, in practice, there are notable gaps between policy goals and outcomes. Realized capital fell significantly short of targets, particularly in non-coastal regions. More precisely, we find that by 2021, only 26 per cent of GGFs had met their target capital size, and only about one-third of the existing GGFs had made at least one investment. Several factors account for this gap: the lack of quality private-sector partners and ventures, leadership turnover and the inherent difficulties of evaluating the performance of GGFs.

Our findings contribute to the literature on policy implementation gaps. Scholars have long documented the gap between policy ambitions and achieved outcomes in China. During the reform era, such gaps arose in many areas, including energy efficiency policies, family planning and poverty relief. The gaps can arise from corruption,6 limited financial resources for implementation,7 weak supervision from above, and historical legacies and structural problems specific to certain regions.8 We find similar dynamics at play in the context of GGFs, except they are more complex than in traditional policy arenas owing to the interaction among the economic interests of private capitalists, the geopolitical ambitions of the central government, and the career incentives of local leaders. In addition, the technical, multi-faceted nature of tech financing makes the supervision of GGFs even more difficult than that of traditional policies.

Our study draws mainly on descriptive statistics from Zero2IPO (Qing ke 清科), a leading research institute of venture capital (VC) and private equity (PE) investment in China. Zero2IPO's database is widely acknowledged as the most comprehensive data source on such investment in China, including GGFs and their investments. It collects and constantly updates information on all investors, transactions and firms that have public records. The database has been used in many academic studies, including Pan, Zhang and Wu's study, and has been featured in the media.9 To provide a more holistic picture of this complex policy tool, we supplement the descriptive statistics with interviews with five respondents, conducted in Beijing from February to August 2022. Interviews were carried out in Beijing because policymakers and major VCs/PEs are based there and the authors were able to hold direct, in-person conversations with them. The authors conducted three in-depth interviews with two executives, one from a leading VC (named W) and one from a PE firm (named D), involved in GGFs, and three shorter interviews with an executive of another PE (named Y), a deputy director of a major government-affiliated think tank and a technology entrepreneur, all of whom were knowledgeable about GGFs.10 We also provide a case study of Guizhou's bet on the big data industry using GGFs.
Financing Technological Innovation 101
China's ambition to "catch up and surpass" Western powers in technology dates back to the founding of the People's Republic of China. Under Deng Xiaoping 邓小平, China pursued the goal of technological catch-up through global integration and marketization of state-owned companies. However, by the early 2000s, policymakers realized that these approaches were unsatisfactory. As Yu Zhou and Xielin Liu observed, "If China were to truly become an innovation nation, then the state would have to reassert itself."11

In 2006, Hu Jintao's administration launched a national campaign to accelerate "indigenous innovation" (zizhu chuangxin 自主创新) and to reduce China's reliance on foreign technologies. The central government imposed a variety of innovation-related targets, such as patent production, on local governments. As the US-China rivalry around technology intensified during the Trump administration, the leadership under Xi became more determined than ever to speed up domestic technological capabilities, particularly in "core technologies" (guanjian hexin jishu 关键核心技术) such as semiconductor chips, 5G communication and aerospace.12

GGFs are a key financial instrument in Xi's drive for technological catch-up. To understand how they work, consider some basic financial concepts. Imagine a company called ABC that makes semiconductor chips. In order for ABC to start operating, it needs an initial investment. Traditionally, such an investment comes from bank loans. But as a start-up with little track record and a high risk of failure, ABC is unlikely to secure a loan. Another funding approach is equity investment, which means that an investor provides funds in exchange for a percentage of shares in a company.

Investors who make equity investments in start-ups are VC/PE firms. VCs make smaller equity investments in early-stage ventures that are believed to have growth potential and exceptional payoff. VCs play an intermediary role in pooling capital from outside investors and then investing on their behalf. General partners (GPs) manage VC funds, and outside investors, including pension funds, companies or governments, are limited partners (LPs) who do not make investment decisions directly.13 PEs operate similarly to VCs, except they invest larger amounts of capital in the mature stage of a company to acquire a controlling interest.

The evaluation of "performance" by VCs/PEs is different from that of traditional banking and equity models of financing. Banks measure their success by whether their loans are repaid and by the amount of interest earned. VCs/PEs, on the other hand, realize profits through the growth of new ventures in which they have invested. Their endgame is to "exit," that is, to sell their stakes during the invested ventures' initial public offering (IPO), when the companies issue new shares to the public, or when the ventures are acquired at a higher price by other investors.

Through GGFs, China's central and local governments can inject their own money and attract VCs/PEs to co-invest in high-tech and advanced manufacturing sectors that are of policy interest.14 These sectors normally struggle to attract private investment because they are capital-intensive, risky and rarely yield short-term gains. From the government's perspective, partnering with VCs/PEs not only pools capital from the private sector but also has the benefit of leveraging their investment expertise, which government officials lack.15 The VCs/PEs that partner with GGFs can be private or state-owned, as long as they are registered in mainland China.

Table 1 lists examples in our dataset of top ventures that have received several rounds of capital infusions from GGFs and VC/PE partners. GGFs tend to partner with VCs/PEs in the same geographical region, although there are some exceptions.

GGFs can be compared with government financing vehicles (GFVs), which were the primary vehicle for financing China's infrastructure boom during the 2000s and 2010s. GFVs were state-owned enterprises established by local governments to borrow funds from banks and invest directly in construction projects. GGFs, on the other hand, invest particularly in high-tech or strategic industries, and their investments are not limited to state-owned companies. High-profile examples of private companies that have received funding from GGFs include BOE Technology (Jingdongfang keji 京东方科技), the world's largest manufacturer of display devices, and CATL (Ningde shidai 宁德时代), the world's largest manufacturer of lithium-ion batteries. Furthermore, whereas GFVs involved only public funds and loans, GGFs are a hybrid model that combines state direction and investment with private venture capital and expertise. In comparing GGFs to the traditional model of direct government investment, the economist Barry Naughton sees GGFs as a milestone and a qualitative shift in China's industrial policies.16

Beyond China, many countries have financed technological ventures through public-private partnerships. One prototype is the Yozma model in Israel, where the government leverages private VC through indirect co-investment rather than through direct public equity investment in high-tech sectors.17 Similar models have been adopted in other developed countries, including Canada, New Zealand and the United Kingdom.18 What distinguishes GGFs in China is the enormous, nationwide scale of state investment and extensive regional decentralization. This is not a voluntary exercise in which only those with interest and capital participate. Rather, consistent with the Chinese Communist Party's "campaign" or "bee hive" style of policy implementation,19 all provinces and major cities have set up GGFs. As a result, in terms of targets, the cumulative scale of this financial mobilization is staggering.
The Promise of GGFs
The historical origins of GGFs can be traced back to the mid-1990s, when the central government decided to use "industrial investment funds" as a financial instrument to accelerate industrial upgrading. But it was not until 2005 that the central government officially introduced the term GGF. Issued by the NDRC, Decree No. 39 was only three paragraphs long; it pronounced: "The national and local government can establish venture capital guidance funds and support the growth of ventures through equity financing and by serving as a financial guarantor."20 Different from earlier industrial investment funds, this decree underscored the partnership between VC and GGFs. Its stated objectives were to support the development of VC firms and to help small and medium enterprises broaden their access to investment.

In short, the mission of GGFs is to leverage government funding to increase the supply of private VC to ventures in emerging and high-tech sectors and to overcome market failures in venture financing.21 The government's long-term objective is to shift the drivers of economic growth towards homegrown advanced technology and innovation. According to Decree No. 668, GGFs should focus their investment on strategic sectors such as new medicine, new energy and new materials.22 Subsequent regulations added that GGFs should invest in early-stage firms.23

In principle, GGFs should make investments according to market principles, free from the government's intervention. Decree No. 116 states that "GGFs should not directly conduct venture investment activities."24 Instead, their purpose is to oversee, guide and support VCs in making investments. But, in reality, as we discuss below, governments do intervene in the investment decisions of GGFs directly and indirectly; for example, they may appoint local government-owned investment companies to manage the funds.25

Explosive growth followed by decline

Since their official introduction in 2005, the number of GGFs has grown at an explosive rate. Pan, Zhang and Wu documented the boom of GGFs until 2016, which is the year that recorded the highest number of new GGFs (more than 400).26 Updating the data to 2021, we find that since 2017, the growth rate of GGFs has slowed, but the number of GGFs is still higher than before 2015 (see Figure 1). The number of GGFs jumped in 2008 and may have peaked in 2015 and 2016 because a series of regulations paved the way for setting up GGFs at various levels of government. Their slower growth after 2016 may reflect China's economic slowdown, tightened regulations on financial institutions as part of de-leveraging policies, the US-China trade war that started in 2018, and the COVID-19 pandemic. By 2021, there was a total of 1,849 GGFs across China.
Immense targets
In China, governments set targets for the amount of capital, termed "target capital size," that GGFs should raise. This target increased from 15.67 billion yuan in 2010 to 46.03 billion yuan in 2013. It peaked in 2016 at a total target capital size of 2.99 trillion yuan. From 2017 to 2020, however, it began to decline (see Figure 2). By 2021, the aggregated target capital size stood at 10.18 trillion yuan (US$1.52 trillion). In the peak years from 2015 through 2017, the target capital size of existing GGFs surpassed direct government financing in science and technology (S&T) (see Figure 2).27 As we show below, realized capital falls short of these immense targets.
Investment in state priority sectors
GGFs have invested predominantly in state priority sectors. As shown in Figure 3, by 2021 the top sectors, which accounted for more than 48 per cent of all GGF investments, were telecommunications, computers, and medical and pharmaceutical products. Sectors that accounted for around 20 per cent of all GGF investments included instruments, special purpose equipment, software and chemical products.28 Notably, many of these sectors overlap with the key industries listed in Made in China (MIC) 2025, an industrial plan to boost China's competitiveness in cutting-edge industries and advanced manufacturing.29 Although since 2018 the Chinese government has downplayed MIC 2025 to avoid pushback from the US, GGFs continued to invest in sectors prioritized in the campaign.30

Selected success cases

A few high-profile success cases suggest that GGFs have helped to broaden capital access for high-tech ventures. One well-known fund is the National Emerging Industry Venture Capital Guidance Fund (NEIVCGF) (Guojia xinxing chanye chuangye touzi yindao jijin 国家新兴产业创业投资引导基金), which was established by the central government in 2015 with a target capital size of 40 billion yuan. The Ministry of Finance (MOF), state-owned firms and private venture investors contributed funds. Acting as a "mother fund," the NEIVCGF invests in sub-funds managed by VC firms that reinvest in ventures in strategic industries, such as new materials, new energy vehicles, energy-saving and environmental technologies, and digital technologies, listed in the 13th Five-Year Plan on National Strategic Emerging Industries ("Shisanwu" guojia zhanlüe xing xinxing chanye fazhan guihua "十三五"国家战略性新兴产业发展规划).

One of the NEIVCGF's major sub-funds is the State Development and Investment Corporation (SDIC) Chuanghe NEIVCGF (Guo tou chuang he guojia xinxing chanye chuangye touzi yindao jijin 国投创合国家新兴产业创业投资引导基金), which was created in 2017 and funded by the SDIC, MOF, Postal Savings Bank of China and other state and private investors. It raised around 18 billion yuan at its founding and invested 4.5 billion yuan in its sub-funds. Early-stage ventures received 80 per cent of its investments, and the remaining 20 per cent went to ventures in the expansion stage.

The NICIIF has invested in both promising semiconductor firms and local GGFs (for example, the Shanghai Integrated Circuit Industry Fund), registering an impressive investment of 138.72 billion yuan (US$21.31 billion) in its first phase of fundraising, well above its target capital size of 120 billion yuan (US$17.33 billion).35 As of 2019, 67 per cent of its investments had gone to semiconductor manufacturing firms, with the remainder being channelled to firms specializing in design.36 This has made it one of the most successful GGFs.
The Pitfalls of GGFs
The earlier examples of success, however, likely represent only a small percentage of all GGFs. In principle, GGFs promise to provide a powerful jump-start to nascent tech ventures. As Jonas Short, who heads an investment bank in Beijing, commented in the Financial Times, "in an economy where start-ups and SMEs are struggling for funding, these offer one way of plugging the gap."37 Similarly, in his bullish appraisal, tech entrepreneur Lee Kaifu argued that generous government support gives China a strong edge in the US-China race for artificial intelligence. In his words, "now it seemed like any smart and experienced young person with a novel idea and some technical chops could throw together a business plan and find funding to get his or her start-up off the ground."38 But in practice, GGFs may fall short of policy goals, and they may even create some unintended problems, as we discuss below.
Fundraising gap
Many GGFs face a big gap between target and realized capital size. As of 2021, only 480 GGFs, accounting for 26 per cent of total existing GGFs, had completed fundraising and achieved their target size. More than 61 per cent of GGFs were still in the process of fundraising. The remainder have a different status (for example, liquidated or newly set up). This pattern is consistent with an earlier report by Zero2IPO, which found that as of 2019, the target size of existing GGFs was around 10 trillion yuan, but the capital raised by GGFs was only around 4 trillion yuan, a gap of 6 trillion yuan.39 Indeed, as Figure 2 shows, the target size of newly established GGFs has declined since 2017, partly because the target size of existing GGFs is still unmet.

The gap also varies by year. As shown in Figure 4, the annual percentage of GGFs that completed target capital size fell steadily from 2012 to 2018, when GGFs took off nationwide, but the completion rate began to bounce back after 2018. It is possible that as more GGFs were created, they competed in attracting private capital and fewer were able to meet the target size. More monitoring and cleaning up of existing GGFs has helped to reduce the fundraising gap in recent years.

Local policy pronouncements corroborate this national pattern. In 2021, the Shenzhen government terminated 19 GGF sub-funds that were inactive and reclaimed its committed investment of 5.72 billion yuan.40 GGFs issued by the city of Beijing have also cleaned up their underperforming sub-funds.41 Given that Shenzhen and Beijing are two of China's most technologically advanced and prosperous cities, we can expect similar situations to be far more prevalent in less developed parts of the country.
Lack of quality partners
One reason behind the shortfall in fundraising is that it has become increasingly difficult for GGFs to attract quality VC/PE partners to help raise private capital. According to the National Audit Bureau (NAB), in 2016 private capital accounted for only 15 per cent of the capital raised among a sample of 235 GGFs in 16 provinces.42 As VC W's executive told us:

Leading VCs/PEs are less likely to participate in GGFs since they have plenty of opportunities to raise capital from the market and do not need government seed funds. Government funds are not small in absolute numbers, but are relatively insignificant compared to the capital pool managed by leading VCs/PEs. For example, Matrix Partners China (Jingwei Zhongguo 经纬中国) has a capital pool of over 50 billion yuan, compared to the million-dollar seed funds that most GGFs offer.43

Moreover, VCs/PEs must comply with strict investment terms imposed by state investors in GGFs. Funds established by local governments are normally required to invest in designated locations or sectors. Local governments also expect the amount of private investment to be two to three times that of GGF investment. Such a high ratio does not attract participation in GGFs or GGF-invested projects by leading VCs/PEs. In contrast, lower-tier VCs/PEs, which need government funding, are more willing to participate in GGFs. But such companies are less able to attract additional capital from the market to meet financing targets.44

The unwillingness of VCs/PEs, particularly higher-quality ones that can easily raise funding in the market, to partner with GGFs appears to vary by regional levels of development. As PE D's executive elaborated:

In coastal provinces such as Zhejiang, Jiangsu or Fujian, local governments are able to mobilize a variety of financial resources to make sure up to 70 per cent of GGFs' target size can be realized. Hence, GGFs set up in these provinces are more attractive since VCs or PEs can get more money from governments and reduce the workload of raising capital for GGFs. On the other hand, in other provinces, like northern provinces, local governments may contribute up to only 10 per cent of the target capital size to their GGFs and leave the majority of fundraising to VCs or PEs.45

Owing to the economic slowdown in recent years, VCs/PEs have been facing severe financial pressures and experiencing difficulties in fundraising. This is particularly true for lower-tier VCs/PEs. In this regard, it is much harder for GGFs, except those in a handful of the most prosperous provinces, to meet their target capital size and to leverage private capital to invest in local ventures.46

Our data indicate patterns consistent with the observation above. The percentage of GGFs that completed fundraising is generally low nationwide (an average of 26 per cent by the end of 2021), but the share is higher in the east and central regions than in the west and north-eastern regions (see Figure 5).47 There are more VCs/PEs to partner with GGFs in economically developed provinces than in lower-income provinces. North-eastern regions, which include China's rust belt, may have fared the worst because of a lack of dynamic private investment and innovative ventures.

Since 2018, tightened regulations on financial institutions, the so-called de-leveraging policy, have further constrained VCs and PEs from participating in GGFs.48 As PE D's executive explained:

Before 2018, VCs or PEs that managed GGFs could easily raise money from investors in the market as they were endorsed by local governments and could get debt financing from state-owned banks or financial institutions. After 2018, however, the de-leveraging policy prohibited state-owned banks or financial institutions from lending easy money, and banks were restricted from using financial products to invest in GGFs. The effects of these policies were felt not only by the private sector but also by state investors of GGFs.49

Furthermore, under Xi's exceptionally vigorous anti-corruption campaign,50 local governments have become cautious about deploying state capital. They like to brag about the target size of GGFs but, in practice, they refrain from making substantial GGF investments that may result in accusations of state-asset mismanagement or loss. Owing to the lack of government commitment to GGFs, VCs/PEs are less willing to take risks in raising capital.51

Whether VCs/PEs choose to participate in GGFs appears to be highly dependent on the quality of state partners and local institutional contexts. Yifan Wei, Nan Jia and Shaoqing Wang find that the greater the local government's extractive power, the less likely experienced VCs/PEs will partner with them as management firms.52 As discussed above, in principle GGFs should follow market rules and government investors should not intervene in the investment decisions of VC firms. However, in practice, state partners often interfere in the investment decisions and management of GGFs,53 especially in the inland regions where bureaucrats have less economic expertise and professionalism than those in the coastal regions. As PE Y's executive said, "GGFs in coastal provinces are better since governments there follow market rules, but inland governments do not and always ask about GGFs' investments."54 Another interviewee, a tech entrepreneur, agreed as much, saying, "GGFs' operation very much depends on the local business environment."55 This observation is corroborated by a Zero2IPO survey: among the surveyed GGFs, 31 per cent involved government fiscal departments or state asset supervisory bodies as observers, 29 per cent involved them as final approvers, and 25 per cent involved them in investment committees.56 The combination of large amounts of capital, extensive state intervention and lack of transparency makes GGFs susceptible to corruption.
Lack of quality ventures
A second problem is the lack of quality ventures. GGFs can hardly raise capital if there are few worthy investment targets for VCs/PEs. An earlier report, released by the NAB in 2018, found that 6 of 36 GGFs sampled in 11 provinces did not make a single investment.57 Our data reveal an even more dire situation. We find that by 2021, only 633 (or 34 per cent) of the 1,849 GGFs had made at least one investment in either a firm or a sub-fund. This means that about two-thirds of the existing GGFs had not made any investment, despite having raised funds.

The difficulty of finding quality ventures is particularly acute in provinces with weak entrepreneurial bases and high-tech industries. Among active GGFs, i.e. those that made at least one investment, 52 per cent are in the eastern region where investment opportunities are abundant (see Figure 6). As shown in Figure 7, the number of firms that received GGF investment is significantly higher in the eastern region than in other regions. Evidently, both in terms of fundraising and investment, high-performing GGFs are concentrated in the wealthy coastal regions. PE D's executive explained the regional disparity with an example:

All provinces have set up GGFs in sectors highlighted in MIC 2025. For instance, almost every province has set up one or more GGF in the biomedical and pharmaceutical sectors, particularly since the COVID-19 pandemic. But not all provinces have well-developed biomedical and pharmaceutical sectors that host quality local ventures. There are only a few cities with relatively developed biomedical and pharmaceutical sectors and high potential ventures, such as Xiamen, Shanghai and Beijing. You know, the domestic COVID-19 vaccine manufacturer Sinovac is based in Beijing.58

Indeed, as shown in Figure 8, since 2015 there has been a sizable gap between the eastern provinces and other regions in GGF-invested medical and pharmaceutical firms. This gap became even more pronounced when the COVID-19 pandemic struck in 2020.

Another aspect of regional inequality is the varying ability of local governments to withstand the pressures of the economic slowdown. As the executive explained:

In the mid-2010s, when most local governments had abundant public funds from tax revenue and land sales, they were able to attract and nurture high potential ventures by using GGFs as equity investment (i.e. holding shares in these ventures) and providing free land. However, as local governments became financially constrained because of the economic slowdown in the late 2010s, they replaced equity investment with debt investment in the form of low-interest loans and significantly reduced the length of free land use. This change occurred in many localities, except for in the wealthiest cities like Beijing, Shanghai or …

To sum up, one clear implication is that when assessing the efficacy of GGFs in helping the top leadership achieve its technological ambitions, analysts must not view China as a monolith but must instead investigate and compare results across regions.
Leadership turnover
A third contributor to the gap between target and realized capital of GGFs is leadership turnover. Studies have shown that local leaders in China have relatively short time horizons.60 Most are rotated after only a few years in office. During their tenure, their chief priority is to achieve politically salient targets such as constructing signature landmarks and attracting prominent businesses. GGFs have become another means of demonstrating a leader's developmental achievements.

As one interview revealed, since the central government under Xi is determined to promote technological capabilities in strategic industries, local leaders, who are vying for promotion, compete to attract ventures in these industries. For them, GGFs are a key instrument to incentivize VCs/PEs to partner with the local governments. Announcing ambitious targets for GGFs demonstrates local leaders' enthusiasm for meeting the central planners' priorities. However, after incumbent leaders leave office, their GGFs are usually left unattended, similar to white elephant construction projects. Successors do not want to be responsible for their predecessors' GGFs and simply create new GGFs to showcase their own performance.61 We provide a case study below to illustrate this problem.
Elusive evaluation
Another factor exacerbating the policy implementation gap is that it is inherently difficult to evaluate the performance of GGFs. The central government did not define the evaluation criteria for GGF performance until 2018, when the NDRC enacted the Notice of the Matters Concerning Accomplishing Performance Evaluation for Government-sponsored Industry Investment Funds (Decree No. 1043). This decree stated that, at the investment stage, a GGF should be evaluated according to policy performance (i.e. size of capital leveraged and whether the GGF directs private capital to sectors of policy interest), management performance (i.e. qualification of management firms, investment progress, risk control) and credit performance (i.e. accuracy of operation information and credit history of management firms). Then, at the exit stage (i.e. when a GGF matures), GGFs should also be evaluated on financial performance, which is measured as the return on investment and on the performance of invested ventures. Table 2 summarizes the evaluation criteria for GGFs at both the investment and exit stages, based on Decree No. 1043.

This evaluation system may not be appropriate in several ways. First, the criteria were designed by government officials without sufficient knowledge of VC industries. Therefore, the evaluation criteria emphasize the conservation of state assets (as evident from the fact that policy performance accounts for 50 per cent of the overall performance evaluation) rather than the efficiency of GGF-participated investments (the expected outcome compared to the costs), or the growth of invested industries.62

Second, the criteria are not customized according to early and late investment stages. For example, early-stage GGF investments are riskier and require more time for return compared to late-stage investments, but the early-stage investments play a more important role in helping private partners hedge market uncertainties. This is also the experience of Yozma in Israel, where GGFs intentionally focus on early-stage investment to promote innovation. The current evaluation criteria ignore such differences within investment stages.

Third, appropriate exit modes have not been considered in the criteria. It was not until 2021 that Decree No. 46 added compulsory withdrawal from invested sub-funds to the exit options for GGFs, applicable when invested sub-funds have a material breach or a failure to perform their obligations.63 However, compulsory withdrawal is different from other normal exit modes (such as liquidation or equity transfer) because, in most cases of compulsory withdrawal, sub-funds lack sufficient cash to repay investing GGFs.64 Such a situation hardly fits the design of the current criteria, which are based on normal exit modes.

Fourth, conducting the evaluation requires time and effort to communicate with the different parties involved in the investment process (for example, management firms, sub-funds and other investors) to obtain all the necessary information. In practice, it is difficult to collect such information because of collaboration or privacy issues. Besides, third-party institutes that are qualified to perform the evaluation are rare because the financial industry in China is still young. Thus, the transparency of GGFs is inherently problematic. Lastly, the wider economic and social impacts of GGFs are difficult to assess under the existing criteria.65

Owing to the factors identified above, assessing whether GGFs "perform" is an elusive task and much trickier than the traditional tasks of evaluating economic growth or fiscal revenue.66 This reflects a general trend where evaluation becomes more difficult as the Chinese economy becomes highly complex and financialized. If GGFs fail to generate returns or if invested ventures fail, it is hard to ascertain whether these results reflect poor decisions or corruption or are a normal cost of investing in risky ventures.
Guizhou's Bet on Big Data
The case of Guizhou in the big data industry illustrates the pitfalls of GGFs, particularly the problems of turnover in local political leadership and the lack of quality ventures. In 2016, the Guiyang municipal government (the capital government of Guizhou province) and the China Insurance Investment Fund (CIIF) jointly established the Big Data Industry Fund (BDIF) (Dashuju chanye jijin 大数据产业基金), the first GGF with a focus on big data.67 The BDIF was established to leverage private capital to finance new ventures in big data, cloud computing, the internet of things, and other emerging sectors. The CIIF contributed 20-30 billion yuan to the BDIF in the first two years. The fund was expected to attract 100-150 billion yuan in investment to support the development of the big data industry in Guiyang.68

The founding of the BDIF was part of Guizhou province's ambitious initiative to become the country's big data hub as well as its high-tech frontline. Guizhou was among the least developed provinces in China, underpinned by four traditional industries: coal, electricity, tobacco and alcohol.69 In January 2013, Chen Min'er 陈敏尔 was officially appointed as governor of Guizhou province. Touted as a protégé of Xi Jinping, Chen was an up-and-coming politician.70 In Guizhou, he was presented with a special opportunity to simultaneously help Xi accomplish his signature policy of alleviating poverty (as the province had a high share of poverty) and accelerating technological innovation. Under Chen's leadership, the provincial Party committee forged a new development strategy centred on big data. Their plan was sensible, given Guizhou's relatively low electricity costs, energy savings from the year-round mild temperature and rich resources in coal and water.

Both the central and provincial governments strongly supported this plan. The provincial government offered tax deductions, rent-free office space and talent-recruitment bonuses to subsidize and nurture the growth of high-tech ventures. In 2016, the NDRC and the Ministry of Industry and Information Technology endorsed Guizhou's establishment of the first national big data experimental zone in the country.71 The central government also helped to build the Global Big Data Exchange (GBDE) for data-related asset and service trading in Guizhou. This was the first of its kind in the world and, by 2018, had more than 2,000 members, including Huawei and JD.72

These government efforts attracted major high-tech firms, including Tencent, Alibaba, Huawei and Apple, to store their data in Guizhou. However, after much fanfare, in July 2017 Chen left Guizhou and was transferred to Chongqing. Since then, there has been no media coverage of substantial investment activities by the BDIF. Indeed, our data do not show any investment made by the BDIF as of 2021. Backed by the central government, Chen was able to convince star ventures to locate their data centres and related businesses in Guizhou, which helped Guizhou's GGFs attract VC/PE investments in the big data industry. Disappointingly, this effort did not last: Chen stayed in Guizhou for only one year after the GGF was created, and it was hard to see returns on these investments within such a short period of time. After he left Guizhou for Chongqing, people cared less and talked less about the big data industry in Guizhou, let alone about those GGFs.73 All the attention has shifted to Chen's political career in Chongqing.74

Weak investment activities of the BDIF are also down to the lack of qualified ventures in the big data industry in Guizhou. As PE firm D's executive told us:

The benefits of big data cannot be realized unless it is integrated with the local economy. But the manufacturing sector in Guizhou remains underdeveloped and concentrated at the low end of the value chain. Within the province, there are not enough businesses in need of big data services, leading to a lack of new ventures offering such services for GGF investment.75

In fact, the big data industry in Guizhou is dominated by investments in data storage made by established firms headquartered in coastal provinces. This industry adds little value to the local economy as the data are not used by local businesses and do not help nurture new ventures in data generation, data mining or application. By 2020, one Chinese newspaper declared that Guiyang's big data "fever" had retreated.76
Conclusion
China under Xi Jinping has taken a decidedly statist turn, with a single-minded focus on innovation. Margaret Pearson, Meg Rithmire and Kellee Tsai characterize the current Chinese economic model as "party-state capitalism," which includes not only state ownership and state intervention in the economy but also "involves private firms as both targets of investment and managers of state capital."77 In this respect, GGFs are a perfect illustration of party-state capitalism. Instead of investing directly in high-tech ventures on its own, the Chinese government has enlisted private VCs/PEs as partners. This apparent fusion of public and private actors, along with the staggering target size of GGFs, has triggered alarm among US policymakers, who worry that China can achieve technological domination through "a heavy government role in directing and funding Chinese firms."78

Our study yields a mixed picture. On the one hand, the Chinese economy appears to exhibit guojin mintui 国进民退, "the state (sector) advances, the private retreats."79 GGFs have become a dominant investor in industries related to national security and frontier technologies. Before the pandemic, most leading firms in China's pharmaceutical industry were privately owned, for example, Hengrui Pharmaceuticals Company (Heng rui yi yao 恒瑞医药). But since the pandemic, large state-owned firms such as Sino Pharm (Guo yao 国药) and Sinovac Biotech (Ke xing 科兴) have quickly replaced privately owned firms through the injection of state funds for R&D.80 Another example is the semiconductor industry. In its phase-two capital raising, NICIIF invested in Shenzhen Longsys Electronics (Shenzhen jiangbo long dianzi 深圳江波龙电子), a leading domestic storage product manufacturer dedicated to memory chip architecture and application, becoming the largest shareholder of the firm in 2019.81 As the deputy director of a major government-affiliated think tank noted, GGFs will continue to lead investment in state-prioritized sectors such as semiconductor chips and new energy that are critical for national security and long-term technological competitiveness, especially now that the US has passed the CHIPS and Science Act to counter China's rise.82

On the other hand, GGFs face sharp limits. Our analysis finds a sizable gap between target and actual capital raised, and about two-thirds of GGFs have not made a single investment. The most successful GGFs are concentrated in the prosperous coastal provinces, but in other regions, especially in the north-eastern rust belt, many established GGFs have been inactive. Even in the hyped-up case of Guizhou's big data industry, the GGF did not make any investment. It is impossible for the state to completely replace the role of private investors in technological innovation because market-oriented VCs/PEs are still the most effective means of financing for high-potential ventures. Assessing China's position in the US-China tech race requires a balanced assessment of both its strengths and its weaknesses, and attention to the gap between ambition and outcomes.
Figure 1: The Rise and Decline of GGFs

Figure 2: Aggregated Target Capital Size of GGFs and National Fiscal Expenditure on Science and Technology (S&T)

Figure 3: Sectoral Distribution of GGF-invested Ventures

Figure 4: Percentage of GGFs Meeting Target Capital Size over Time

Figure 5: Share of GGFs Meeting Capital Targets across Regions

Figure 6: Percentage of Active GGFs (with at Least One Investment) across Regions

Notes: 21 NDRC, MOF and MOC 2008. 22 MOF and NDRC 2011. 23 MOF 2015. 24 NDRC, MOF and MOC 2008. Luong, Arnold and Murphy 2021. 26 Pan, Zhang and Wu 2021. 27 Data from the National Bureau of Statistics of China (NBSC). 36 Eastmoney Securities 2019. 37 "China's state-owned venture capital funds battle to make an impact." Financial Times, 24 December 2018, https://www.
Comparison of Common Algorithms for Single-Pixel Imaging via Compressed Sensing
Single-pixel imaging (SPI) uses a single-pixel detector instead of a detector array with a lot of pixels in traditional imaging techniques to realize two-dimensional or even multi-dimensional imaging. For SPI using compressed sensing, the target to be imaged is illuminated by a series of patterns with spatial resolution, and then the reflected or transmitted intensity is compressively sampled by the single-pixel detector to reconstruct the target image while breaking the limitation of the Nyquist sampling theorem. Recently, in the area of signal processing using compressed sensing, many measurement matrices as well as reconstruction algorithms have been proposed. It is necessary to explore the application of these methods in SPI. Therefore, this paper reviews the concept of compressive sensing SPI and summarizes the main measurement matrices and reconstruction algorithms in compressive sensing. Further, the performance of their applications in SPI through simulations and experiments is explored in detail, and then their advantages and disadvantages are summarized. Finally, the prospect of compressive sensing with SPI is discussed.
For SPI, the target to be imaged is modulated by a series of patterns with spatial resolution, and then the reflected or transmitted intensity is collected by the single-pixel detector to reconstruct the target image. The modulation patterns can be realized by a diffuser, a spinning mask, or a spatial light modulator. The single-pixel detector can be a photodiode, a photomultiplier, or a conventional image detector used as a bucket detector. SPI can be traced back to quantum ghost imaging and thermal ghost imaging. According to the position of the modulation device, SPI can be divided into computational ghost imaging (CGI, which uses active illumination) and single-pixel cameras (SPC, which use passive detection) [27,28]. Despite being commonly treated as separate research fields, it has become obvious that, from an optical perspective, CGI and SPC are the same. Therefore, in this paper, we uniformly refer to both as SPI.
SPI can be divided into orthogonal SPI [29][30][31][32][33] and compressed sensing SPI (CSSPI) [1,[34][35][36][37]. Firstly, orthogonal SPI often seeks to solve an inverse problem or perform a reconstruction from an ensemble average. Secondly, CSSPI often seeks the sparse estimation of an optimization problem. CSSPI was first realized by Duarte et al. [1]. The target to be imaged is modulated by a series of random patterns. Then, the reflected or transmitted intensity is compressively sampled by the single-pixel detector, which reduces the measurement number. Finally, the target image is reconstructed using compressed sensing (CS) theory.
The CS theory was proposed in 2006 [38,39], which holds that it is possible to recover the original signal from under-sampled data when the original target signal is sparse or is sparse in the transform domain. This breaks the limitation of the Nyquist sampling theorem in data acquisition and thus reduces the sampling rate.
CS mainly focuses on how to obtain the whole information of the target signal by sampling the data much less than the Nyquist sampling method does, as well as how to recover the target signal from the down-sampled data. Recently, on these two key issues, researchers have developed a variety of sampling frameworks, i.e., different measurement matrices, and a variety of signal reconstruction algorithms. It is necessary to explore the application of these methods in SPI.
Additionally, refs. [40,41] review the development of CS; refs. [42,43] review the main measurement matrices in CS; and refs. [44][45][46] review the reconstruction algorithms in CS. In addition, refs. [2,47] review the development of SPI. Refs. [48,49] review the algorithms of SPI; however, for CS, only the TVAL3 algorithm is involved. There are no reviews on compressive sensing SPI and, especially, no detailed investigations on the performance of SPI using these different CS methods. As a result, this paper reviews the concept of CSSPI and summarizes the main measurement matrices and reconstruction algorithms. Further, their performance is explored in detail through simulations and experiments, and then their advantages and disadvantages are summarized. Finally, the prospect of CSSPI is discussed. Table 1 shows the comparison between the works of this paper and the existing works. Table 1. A comparison between the work of this paper and the existing work.
Works | Content
Our works | Reviews the concept of CSSPI. Summarizes the main measurement matrices in CSSPI. Summarizes the main reconstruction algorithms in CSSPI. The performance of measurement matrices and reconstruction algorithms in CSSPI is discussed in detail through simulations and experiments. The advantages and disadvantages of mainstream measurement matrices and reconstruction algorithms in CSSPI are summarized.
Refs. [48,49] | Review the algorithms of SPI. For CS, only the TVAL3 algorithm is involved.
The paper is structured as follows: In Section 2, we review the principles of CSSPI and point out that the measurement matrix and reconstruction algorithm are the two important factors affecting its performance. In Section 3, we classify some main measurement matrices and briefly introduce how to generate them. In Section 4, we classify the existing CS reconstruction algorithms based on sparse and image gradient sparse and review their principles of reconstruction. Sections 5 and 6 compare the performance of these measurement matrices and reconstruction algorithms through simulations and experiments, respectively. Finally, Section 7 summarizes the work of this paper, and discussions on the prospect of CSSPI are given.
Compressed Sensing SPI
As shown in Figure 1, assuming that a target object with a spatial resolution of u × v pixels is to be captured by the CSSPI, the imaging process is comprised of two procedures. The first procedure is to modulate the object with a set of patterns with a spatial resolution of u × v pixels, and measure the corresponding reflected light y ∈ R^(M×1), which can be mathematically described as

y = Φx, (1)

where each row of the measurement matrix Φ ∈ R^(M×N) represents a 2-D modulation pattern in its 1-D representation, x ∈ R^(N×1) is the signal of the 2-D target object in its 1-D representation, N = u × v is the length of the 1-D representations, and M is the number of measurements and also equals the number of the modulation patterns.

The second procedure is to reconstruct the image of the target object using CS [50,51]. Normally, the number of measurements is less than the length of the signal, i.e., M < N, which is demanded. This makes solving the signal x from Equation (1) an undetermined question. Fortunately, CS suggests that when the signal x is sparse or has some sparse representation in some sparse dictionaries, e.g., discrete cosine transform (DCT) and wavelet transform (WT) [52][53][54][55][56], and the joint matrix composed of the measurement matrix and the sparse dictionary satisfies the restricted isometry property (RIP) [38,57], it is possible to reconstruct the original signal x [58]. As shown in Figure 1, using Ψs to replace x in Equation (1) as the object vector, the equivalent form of Equation (1) can be written as

y = ΦΨs = As, (2)

where Ψ ∈ R^(N×N) is the sparse dictionary, A = ΦΨ ∈ R^(M×N) is the joint matrix, and s ∈ R^(N×1) is the sparse vector. In this case, solving the signal x from Equations (1) and (2) can be transformed into an optimization problem for solving l0-norm minimization [39,59]:

min ||s||_0  subject to  y = As. (3)

In order to solve the optimization problem of Equation (3), there are two requirements. First, a matrix A that satisfies the RIP of the following form is demanded:

(1 − δ) ||s||_2^2 ≤ ||As||_2^2 ≤ (1 + δ) ||s||_2^2, (4)

where δ ∈ (0, 1) is the restricted isometry constant (RIC) value of the matrix A. In practical use, the random matrix is one way to obtain joint matrices following RIP conditions; however, it is difficult to verify whether these matrices satisfy the RIP property with a low RIC value. Most of the time, it only needs to be ensured that the coherence between the measurement matrix and the sparse dictionary [60] is minimized. The coherence between measurement matrix Φ and sparse dictionary Ψ is defined as

μ(Φ, Ψ) = √N · max_(1≤i,j≤N) |⟨φ_i, ψ_j⟩|, (5)

where φ_i are the rows of Φ and ψ_j are the columns of Ψ. To put it simply, it measures the largest correlation between elements of the two matrices: if Φ and Ψ contain correlated elements, the coherence is large; otherwise, it is small. There are three types of measurement matrices for CSSPI [46]: random measurement matrices, partial orthogonal measurement matrices, and semi-deterministic random measurement matrices. Second, finding a suitable reconstruction algorithm to solve the optimization problem of Equation (3) is necessary. Over the past years, several typical sparse recovery algorithms have been proposed [50], which can be classified into five main categories: convex optimization algorithms, greedy algorithms, non-convex optimization algorithms, Bregman distance minimization algorithms, and total variation minimization algorithms.
Comments on the measurement matrices and the reconstruction algorithms will be made in Sections 3 and 4, respectively.
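To make the pipeline of Equations (1)-(3) concrete, the following is a minimal simulation sketch of our own (not taken from any of the cited works). It assumes a synthetic k-sparse test signal in place of a vectorized image (so the sparse dictionary Ψ is the identity), a Gaussian measurement matrix, and orthogonal matching pursuit from scikit-learn as one possible greedy solver for the sparse-recovery step.

```python
# Minimal CSSPI simulation sketch: y = Phi @ x, then greedy sparse recovery.
# Assumptions: synthetic k-sparse signal, Gaussian Phi, OMP as the solver.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
N, M, k = 1024, 256, 20                      # signal length, measurements (M < N), sparsity

x = np.zeros(N)                              # k-sparse "image" in its 1-D representation
x[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)

Phi = rng.normal(0.0, 1.0 / np.sqrt(M), size=(M, N))    # Gaussian measurement matrix
y = Phi @ x                                  # single-pixel measurements, Equation (1)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k)       # approximates the l0 problem (3)
omp.fit(Phi, y)
x_hat = omp.coef_

print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

In an actual SPI experiment, each row of Phi would be displayed as a u × v pattern on the spatial light modulator, and each entry of y would come from the single-pixel detector.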
Random Measurement Matrix
Each element of the random measurement matrix is independent and obeys the same distribution, e.g., Gaussian distribution, Bernoulli distribution, etc. It has been proven in [61] that such matrices have low coherence with most sparse signals and sparse dictionaries; that is, only a small number of measurements are needed to accurately reconstruct the target object. However, such ideal matrices hardly exist in reality and can only be generated in the laboratory. At the same time, the drawbacks of high computational complexity and the large storage space required for reconstruction limit their use in practice. Common random measurement matrices include the Gaussian random measurement matrix [62], the Bernoulli random measurement matrix [63], etc.
Gaussian Random Measurement Matrix
A Gaussian random measurement matrix is the most widely used measurement matrix in compressed sensing. Each of its elements independently obeys a Gaussian distribution with a mean of 0 and a variance of 1/M, that is:

Φ_(i,j) ~ N(0, 1/M).

Each element of the matrix is distributed independently and has strong randomness, which makes the matrix incoherent with most sparse signals and sparse bases. When the Gaussian random matrix is used to measure the signal, if the number of measured values satisfies M ≥ c·k·log(N/k), the RIP condition is satisfied with high probability (c is a constant), and the signal is reconstructed accurately.
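A minimal generation sketch, assuming the 1/M variance normalization described above:

```python
# Gaussian random measurement matrix: i.i.d. entries ~ N(0, 1/M) (assumed normalization).
import numpy as np

def gaussian_measurement_matrix(M, N, seed=None):
    rng = np.random.default_rng(seed)
    return rng.normal(loc=0.0, scale=1.0 / np.sqrt(M), size=(M, N))
```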
Bernoulli Random Measurement Matrix
The difference between the Bernoulli random measurement matrix and the Gaussian random measurement matrix is that each element is independent and identically distributed and obeys a symmetric Bernoulli distribution, taking the values +1 and −1 each with a probability of 1/2, that is:

Φ_(i,j) = +1 with probability 1/2, and −1 with probability 1/2.

Similar to the Gaussian random matrix, each element of the matrix is independently distributed, which also gives strong randomness and low coherence with most sparse signals and sparse bases. Additionally, since the matrix elements are only 1 and −1, it is easy to implement on hardware devices; hence, it is more widely used than a Gaussian random matrix in practical applications.
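A corresponding generation sketch; the optional 1/√M scaling is an assumed normalization and is typically dropped when the ±1 pattern is loaded directly onto a binary modulator:

```python
# Bernoulli random measurement matrix: entries are +1 or -1, each with probability 1/2.
import numpy as np

def bernoulli_measurement_matrix(M, N, seed=None, normalize=False):
    rng = np.random.default_rng(seed)
    Phi = rng.choice([-1.0, 1.0], size=(M, N))
    return Phi / np.sqrt(M) if normalize else Phi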
Partial Orthogonal Measurement Matrix
Given the drawbacks of high computational complexity, large storage space, and high uncertainty of the random measurement matrices, it is particularly important to find or design a deterministic measurement matrix to reduce the computational complexity and storage space. The partial orthogonal matrix is derived from the existing orthogonal matrix with some special properties in the field of signal processing.
Partial Hadamard Matrix
In refs. [65,66], the partial Hadamard matrix is proposed as the measurement matrix of CS. It is mainly composed of M row vectors taken from the N × N Hadamard matrix. The entries of the Hadamard matrix are 1 and −1, and its columns are orthogonal. The matrix satisfies the following property:

H H^T = N I_N, (8)

where I_N is the N × N identity matrix and H^T is the transpose of the matrix H. This measurement matrix is incoherent with most sparse signals and sparse dictionaries. However, since the order of the Hadamard matrix must be a power of two (N = 2^n, n = 1, 2, 3, · · ·), there are strict requirements on the dimension of the target signal, which limits its use in practice.
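The following sketch (sizes illustrative) builds a partial Hadamard measurement matrix by randomly selecting M rows of the N × N Hadamard matrix; the 1/√N scaling is an optional normalization:

```python
import numpy as np
from scipy.linalg import hadamard

def partial_hadamard_matrix(M, N, seed=0):
    """Randomly select M rows from the N x N Hadamard matrix (N must be a power of two)."""
    rng = np.random.default_rng(seed)
    H = hadamard(N)                        # entries are +1 / -1, orthogonal columns
    rows = rng.choice(N, size=M, replace=False)
    return H[rows, :] / np.sqrt(N)         # optional normalization

Phi = partial_hadamard_matrix(64, 256)
```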
Partial Fourier Matrix
In addition, the partial Fourier matrix is proposed as the measurement matrix of CS in ref. [47]. It can reduce the complexity of the algorithm by using the fast Fourier transform. However, it is usually only incoherent with time-domain sparse signals, and most natural images do not meet this condition, so this kind of matrix is rarely used in CSSPI.
Semi-Deterministic Random Measurement Matrix
The semi-deterministic random measurement matrices are designed to follow a deterministic construction to satisfy the RIP or to have low mutual coherence, which can be regarded as a combination of random measurement matrices and deterministic orthogonal measurement matrices. At present, this kind of matrix mainly includes Toeplitz and circulant matrices [67,68], structured random matrices [68,69,72], sparse random matrices [70][71][72][73], binary random matrices [74], block-diagonal matrices [75,76], etc.
Toeplitz and Circulant Matrix
The Toeplitz and circulant measurement matrices are generated from random vectors and have the following form: each descending diagonal of the matrix is constant (T_{i,j} = T_{i+1,j+1}), so the matrix is fully determined by its first row and first column. A circulant matrix is a special form of the Toeplitz matrix in which each row is a cyclic shift of the previous one. The elements in the first row of the matrix obey the same random distribution as the random measurement matrix.
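A possible construction in Python, using SciPy's helpers and keeping only M rows as the measurement matrix, is sketched below (the random distribution of the generating vectors is a choice, not prescribed here):

```python
import numpy as np
from scipy.linalg import toeplitz, circulant

rng = np.random.default_rng(0)
N, M = 256, 64

# The generating entries follow the same random distribution as a random matrix.
c = rng.standard_normal(N)   # first column
r = rng.standard_normal(N)   # first row
r[0] = c[0]                  # keep the shared corner entry consistent

T_full = toeplitz(c, r)      # constant along each descending diagonal
C_full = circulant(c)        # each row is a cyclic shift of the first row

# Keep only M rows to obtain the M x N measurement matrices.
Phi_toeplitz = T_full[:M, :]
Phi_circulant = C_full[:M, :]
```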
Sparse Random Matrix
The structure of the sparse random matrix is simple, and it is easy to generate and save in the experiment. The elements at d random positions in each column are 1, and the rest are all 0, where d ∈ {4, 8, 10, 16} [72].
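A simple generator for such a matrix is sketched below (d = 8 is one of the values listed above):

```python
import numpy as np

def sparse_random_matrix(M, N, d=8, seed=0):
    """Each column has exactly d randomly placed ones; all other entries are zero."""
    rng = np.random.default_rng(seed)
    Phi = np.zeros((M, N))
    for j in range(N):
        rows = rng.choice(M, size=d, replace=False)
        Phi[rows, j] = 1.0
    return Phi

Phi = sparse_random_matrix(64, 256, d=8)
print(Phi.sum(axis=0))   # every column sums to d = 8
```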
In summary, the three types of measurement matrices have their own advantages and disadvantages. The random measurement matrices come close to satisfying the RIP; however, they are not easy to generate in reality and require high computational complexity and storage capacity. The partial orthogonal measurement matrices have very low computational complexity thanks to their orthogonal transforms, but they also have some limitations in their construction. The semi-deterministic random measurement matrices combine the properties of random measurement matrices and partial orthogonal measurement matrices to some extent. Table 2 shows the advantages and disadvantages of each measurement matrix in detail. Table 2. Analysis and summary of various measurement matrices in CSSPI.
Random matrix (Gaussian [62]; Bernoulli [63]). Definition: each coefficient obeys a random distribution independently. Advantages: the RIP property is satisfied with high probability; fewer measurements are needed; good noise robustness. Disadvantages: large storage space; difficult to implement in hardware; no explicit construction.

Semi-deterministic random matrix (Toeplitz and circulant [67,68]; sparse random [70-73]). Definition: each coefficient is generated in a particular structured way. Advantages: easy hardware implementation and robustness; the sparse random matrix retains the advantage of an unstructured random matrix; fast to generate and easy to save.

Partial orthogonal matrix (partial Fourier [47]; partial Hadamard [65,66]). Definition: M rows selected from an existing orthogonal matrix. Advantages: easy hardware implementation and robustness; low computational complexity owing to fast transforms. Disadvantages: the Fourier matrix needs more recovery time and more measurements; the Hadamard matrix is restricted in its dimensions.

Measurement matrices based on other construction methods have also been proposed [77][78][79][80][81][82][83]. Researchers have tried to improve the RIP property of existing measurement matrices. David L. Donoho proposed in ref. [61] that the minimum singular value of the submatrix composed of the column vectors of the measurement matrix must be greater than a positive constant, i.e., the column vectors of the matrix must satisfy a certain independence. The QR decomposition of a matrix can increase the singular values of the matrix without changing its properties. In addition, methods for matrix optimization have also been proposed, such as an optimal projection matrix that optimizes the measurement matrix by using the sparse dictionary of signals [84] and an optimal measurement matrix based on effective projection [85]. Researchers are also trying to use machine learning to train and optimize the sampling framework of CS [86][87][88][89].
Selection of the Reconstruction Algorithm
The previous section gives requirements for the measurement matrix Φ so that x can be recovered from the given measurements y = Φx = ΦΨs = As. Since knowledge of s is equivalent to knowledge of x, we only need to discuss the reconstruction algorithms for solving the equation y = As.
As mentioned in Section 2, the unique solution of Equation (2) can be obtained by posing the reconstruction problem as the l0-minimization problem given by Equation (3). However, solving the l0-minimization program is NP-hard, so it cannot be solved directly in practice. The crux of CS is to propose faster algorithms that can solve Equation (3) with high probability. Over the past years, several typical sparse recovery algorithms have been proposed [50,[90][91][92][93], which can be classified into five main categories: convex optimization algorithms, greedy algorithms, non-convex optimization algorithms, Bregman distance minimization algorithms, and total variation minimization algorithms.
Basis Pursuit
Basis pursuit (BP) is a signal processing technique [101,102] that decomposes a signal into a superposition of dictionary elements whose coefficients have the smallest l1-norm, subject to the equality constraint given in Equation (9):

min ||ŝ||_1 subject to y = Aŝ. (9)

Since BP is based on global optimization, it can be solved stably in many ways. In particular, it is a principle of global optimization rather than a specific algorithm, and it is closely connected with linear programming. Therefore, Equation (9) can be expressed as:

min c^T ŝ subject to Oŝ = y, ŝ ≥ 0, (10)

where c^T ŝ is the objective function, ŝ ≥ 0 is the set of bounds, O = (A, −A), and c = (1, . . . , 1). Over the past few decades, many algorithms have been proposed to solve linear programming problems, such as the simplex method and the interior point method [103].
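The reduction of BP to a linear program can be sketched as follows; this is an illustrative implementation using SciPy's general-purpose LP solver rather than the specific solvers cited above:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||s||_1 s.t. A s = y by linear programming.

    Write s = u - v with u, v >= 0, so that ||s||_1 = 1^T (u + v):
        min c^T z   s.t.  [A, -A] z = y,  z >= 0,   z = [u; v].
    """
    M, N = A.shape
    c = np.ones(2 * N)
    O = np.hstack([A, -A])
    res = linprog(c, A_eq=O, b_eq=y, bounds=(0, None), method='highs')
    z = res.x
    return z[:N] - z[N:]
```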
Basis Pursuit Denoising/Least Absolute Shrinkage and Selection Operator
If the measurements are corrupted by noise in CSSPI, strategies for noise suppression must be sought. Based on the global optimization principle of BP, widely used algorithms for robust recovery from noisy measurements include basis pursuit denoising (BPDN) [101] and the least absolute shrinkage and selection operator (LASSO) [104][105][106]. The BPDN and LASSO algorithms consider the sparse estimation problem shown in Equation (11):

min ||ŝ||_1 subject to y = Aŝ + e, (11)

where e denotes noise and ŝ is the pure signal without noise contamination. The problem can be translated into the following unconstrained optimization problem:

min (1/2)||y − Aŝ||_2^2 + λ||ŝ||_1, (12)

where λ is a scalar parameter that determines the magnitude of the signal estimation residual and has a great impact on the performance of the BPDN and LASSO algorithms. As λ → 0, the residual goes to zero and the problem reduces to BP. As λ → ∞, the residual gets larger and the measurements are completely drowned out by noise. In ref. [101], the author suggests that λ can be set as λ_p = σ√(2 log p), where σ is the noise level and p is the number of dictionary bases.
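The unconstrained form of Equation (12) can be minimized, for example, by iterative soft thresholding (ISTA); the sketch below is one simple way to do this and is not the specific BPDN/LASSO solver used in the cited references (λ and the iteration count are illustrative):

```python
import numpy as np

def ista_bpdn(A, y, lam=0.05, n_iter=500):
    """Minimize 0.5 * ||y - A s||_2^2 + lam * ||s||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ s - y)             # gradient of the quadratic term
        z = s - grad / L                      # gradient step
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return s
```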
Decoding by Linear Programming
Ref. [107] proposed a faster method to find sparse solutions of underdetermined equations: decoding by linear programming (DLP). It is similar to BPDN and LASSO in that it considers the existence of noise in CSSPI. To recover s from corrupted data y = As + e, DLP considers solving the following l1-minimization problem:

min_g ||y − Ag||_1, (13)

where g is the estimate and s is the unique solution of Equation (13). Equation (13) can be recast as a linear program with inequality constraints and can be solved efficiently using standard optimization algorithms; see ref. [108].
Dantzig Selector
The Dantzig selector (DS) is another solver for CS [109], which estimates the target signal s from measurements contaminated by noise. DS considers solving the convex optimization problem:

min ||ŝ||_1 subject to ||A^T r||_∞ ≤ √(1 + δ_1) · λ_N · σ, (14)

where r = y − Aŝ is the residual vector, σ is the standard deviation of the additive white Gaussian noise, λ_N > 0, and √(1 + δ_1) is the maximum Euclidean norm of A. In addition, the DS program can easily be recast as a linear program.
The toolbox proposed in ref. [108] uses a primal-dual algorithm to solve all kinds of linear programming transformed from convex optimization problems.
Greedy Algorithms
Greedy algorithms [110][111][112] are the second class of CS reconstruction algorithms. These algorithms are different from the convex optimization algorithms, which try to find the global optima; they try to find the best local optima in the immediate neighborhood for each iteration of the optimization.
Orthogonal Matching Pursuit
Orthogonal matching pursuit (OMP) computes the best nonlinear approximation for the sparse solution of Equation (3); it was proposed by Y. C. Pati et al. [113][114][115][116][117]. By taking the largest absolute value of the inner product between each column and the residual, it locates the column of the matrix A with the largest correlation to the residual r = y − Aŝ. In addition, OMP fits the original function to all the already selected dictionary elements via least squares, i.e., it projects the function orthogonally onto all selected dictionary atoms, so it does not repeatedly select the same atom.
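A compact OMP sketch is given below for illustration; k denotes the assumed sparsity level and the stopping rule is simplified to a fixed number of atom selections:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: pick the column most correlated with the
    residual, then re-fit all selected columns by least squares."""
    M, N = A.shape
    residual, support = y.copy(), []
    s = np.zeros(N)
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.T @ residual)))   # best matching atom
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef            # orthogonal projection residual
    s[support] = coef
    return s
```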
OMP has become one of the most widely used CS algorithms in recent years. A couple of improved OMPs have been proposed, such as stagewise orthogonal matching pursuit (StOMP) and regularized orthogonal matching pursuit (ROMP) [118,119].
Compressive Sampling Matching Pursuit/Subspace Pursuit
Each iteration of OMP selects only one atom, and is also called the serial greedy algorithm. To overcome the instability of serial greedy algorithms, researchers propose parallel greedy algorithms, i.e., compressive sampling matching pursuit (CoSaMP) [120] and subspace pursuit (SP) [121]. This kind of algorithm has stricter limits on convergence and performance, selecting multiple atoms once in each iteration and allowing the previously selected wrong atoms to be discarded. CoSaMP selects 2k atoms, and SP selects k atoms in each iteration.
Iterative Hard Thresholding
Iterative hard thresholding (IHT) is yet another greedy algorithm that was proposed by Blumensath and Davies [122][123][124]. It introduces the thresholding function in each iteration to maintain the k maximum non-zero entries in the estimated signal, and the remaining entries are set to zero.
The critical update step of IHT is:

s^(n+1) = H_k(s^(n) + λ A^T(y − A s^(n))), (15)

where H_k is the hard thresholding function and λ denotes the step size.
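A direct implementation of this update rule might look as follows; the default step size 1/||A||₂² is a common stabilizing choice rather than a value prescribed above:

```python
import numpy as np

def iht(A, y, k, step=None, n_iter=200):
    """Iterative hard thresholding: gradient step followed by keeping the
    k largest-magnitude entries (Equation (15))."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # keeps the iteration stable
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        s = s + step * A.T @ (y - A @ s)            # gradient step
        keep = np.argsort(np.abs(s))[-k:]           # indices of k largest entries
        mask = np.zeros_like(s, dtype=bool)
        mask[keep] = True
        s[~mask] = 0.0                              # hard thresholding H_k
    return s
```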
Non-Convex Optimization Algorithms
All CS reconstruction algorithms based on signal sparsity try to find the approximate solution that satisfies the minimum norm. Non-convex optimization algorithms recover signals from fewer measurements by replacing l 1 -norm with l p -norm where p ≤ 1 [125][126][127].
Iterative Reweighted Least Square Algorithm
The non-convex optimization algorithms consider the following equivalent variant of Equation (9):

min_(s∈R^N) ||ŝ||_p subject to Aŝ = y, (16)

where p < 1. The iterative reweighted least squares (IRLS) algorithm [128,129] replaces the l_p objective function in (16) with a weighted l2-norm:

min_(s∈R^N) Σ_i ω_i ŝ_i^2 subject to Aŝ = y, (17)

where the weights are computed from the previous iterate s^(n−1) so that the objective in (17) is a first-order approximation of the l_p objective, i.e., ω_i = |s_i^(n−1)|^(p−2). The solution of (17) can be given explicitly, giving the next iterate s^(n):

s^(n) = Q_n A^T (A Q_n A^T)^(−1) y, (18)

where Q_n is the diagonal matrix with entries 1/ω_i = |s_i^(n−1)|^(2−p).
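The IRLS iteration of Equations (17) and (18) can be sketched as below; the small constant eps added to the weights is a standard numerical safeguard and not part of the formulas above:

```python
import numpy as np

def irls(A, y, p=0.8, n_iter=50, eps=1e-8):
    """Iteratively reweighted least squares for min ||s||_p s.t. A s = y (p < 1).

    Each iteration solves the weighted-l2 problem in closed form:
        s = Q A^T (A Q A^T)^{-1} y,   Q = diag(|s_prev|^{2-p} + eps).
    """
    s = A.T @ np.linalg.solve(A @ A.T, y)          # minimum-l2 initialization
    for _ in range(n_iter):
        q = np.abs(s) ** (2.0 - p) + eps           # diagonal of Q (1 / weights)
        AQ = A * q                                  # A @ diag(q), via broadcasting
        s = q * (A.T @ np.linalg.solve(AQ @ A.T, y))
    return s
```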
Bayesian Compressed Sensing Algorithm
The Bayesian approach applies to input signals that belong to some known probability distribution. The Bayesian compressed sensing (BCS) algorithm [130][131][132][133] provides an estimate of the posterior density function in the presence of the additive noise encountered when performing compressive measurements, e.g., y = As + n. Let σ^2 be the noise variance; then there is a Gaussian likelihood model:

p(y | s, σ^2) = (2πσ^2)^(−M/2) exp(−(1/(2σ^2))||y − As||_2^2). (19)

In this way, the traditional sparse weight inversion problem of CS is transformed into a linear regression problem with sparse constraints, and BCS seeks the complete posterior density function.
Bregman Distance Minimization Algorithms
For the large-scale and completely dense matrix A, the products As and A^T s can be computed by fast transforms, which makes it possible to solve the unconstrained problem:

min_s μ||s||_1 + (1/2)||As − y||_2^2, (20)

where μ > 0 is a regularization parameter. By iteratively solving a series of unconstrained subproblems of the form of Equation (20) generated by the Bregman iterative regularization scheme, the exact solution of the constrained problem is obtained [134][135][136].
In [137], the split Bregman (SB) algorithm has been proposed, which can be used in CS. Due to its parallelizing nature, it can be efficiently implemented to have faster computation.
Total Variation Minimization Algorithms
For two-dimensional image signals, Rudin, Osher, and Fatemi [138] first introduced the concept of total variation (TV) for image denoising in 1992. As a result, a new restoration model is proposed that is based on the sparse image gradient. Research has confirmed that the use of TV minimization in CS makes the recovered image quality sharper by preserving the edges or boundaries more accurately, which is essential to characterizing images. Different from other algorithms, the TV minimization algorithms do not need a specific sparse dictionary to represent image signals. A detailed discussion of TV minimization algorithms can be found in [139], and different versions of the TV minimization algorithm for image restoration are proposed in [140][141][142].
Definition: Let x_{ij} denote the pixel in the i-th row and j-th column of an n × n image x, and define the operators

D_{h;ij} x = x_{i+1,j} − x_{ij} for i < n, and D_{h;ij} x = 0 for i = n, (21)
D_{v;ij} x = x_{i,j+1} − x_{ij} for j < n, and D_{v;ij} x = 0 for j = n, (22)

so that the 2-vector D_{ij} x = (D_{h;ij} x, D_{v;ij} x) can be interpreted as a kind of discrete gradient of the digital image x. The total variation of x is simply the sum of the magnitudes of this discrete gradient at every point:

TV(x) = Σ_{ij} ||D_{ij} x||_2 = Σ_{ij} sqrt((D_{h;ij} x)^2 + (D_{v;ij} x)^2). (23)
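For illustration, the discrete gradient and the total variation of Equation (23) can be computed as follows (replicating the boundary so that the gradient vanishes at the last row/column, matching the definition above):

```python
import numpy as np

def total_variation(x):
    """Isotropic total variation of a 2-D image: sum over pixels of the
    Euclidean norm of the discrete gradient (D_h x, D_v x)."""
    dh = np.diff(x, axis=0, append=x[-1:, :])   # row (vertical-index) differences
    dv = np.diff(x, axis=1, append=x[:, -1:])   # column (horizontal-index) differences
    return np.sum(np.sqrt(dh ** 2 + dv ** 2))
```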
Min-TV with Equality Constraints
If D_{ij}x is nonzero for only a small number of indices ij of the image x, the image signal can be restored by solving the following equality-constrained problem, which is called min-TV with equality constraints (TV-EQ):

min TV(x) subject to Φx = y. (24)
Min-TV with Quadratic Constraints
Additionally, for an image signal with the gradient-sparsity property (i.e., only a small number of D_{ij}x are non-zero) whose single-pixel measurements are polluted by noise, the equality constraint problem of Equation (24) can be transformed into the following min-TV with quadratic constraints (TV-QC) problem:

min TV(x) subject to ||Φx − y||_2 ≤ ε, (25)

where ε bounds the measurement noise.
TV Dantzig Selector
The solver of the DS proposed in [109] can also be applied to the TV minimization of the CSSPI problem when noise pollution is considered. The TV Dantzig selector (TV-DS) considers the following TV minimization problem:

min TV(x) subject to ||Φ^T(Φx − y)||_∞ ≤ ε. (26)
Total Variation Augmented Lagrangian Alternating Direction Algorithm
Compared with the traditional convex optimization algorithms based on signal sparsity, the above three kinds of TV minimization algorithms are still much slower (e.g., TV-DS) or have poor image reconstruction quality (e.g., TV-QC). The total variation augmented Lagrangian alternating direction algorithm (TVAL3) has successfully overcome this difficulty and accepts a vast range of measurement matrices [142].
The TV minimization model is very difficult to solve directly due to the non-differentiability and non-linearity of the TV term. TVAL3 minimizes an augmented Lagrangian function through an alternating minimization scheme and updates the multipliers after each sweep. Instead of employing the augmented Lagrangian method to minimize Equation (23) directly, it considers an equivalent variant of Equation (23):

min_{w,x} Σ_i ||w_i||_2 subject to Φx = y and D_i x = w_i for all i, (27)

where the auxiliary variables w_i stand in for the discrete gradients D_i x.
Simulation
To show their pros and cons in terms of imaging quality, running time, and robustness to noise, this section compares the performance of the above CSSPI measurement matrices and reconstruction algorithms on both simulated and experimental data. Without loss of generality, we used "cameraman" with a size of 64 × 64 pixels as the test image in the simulation. "Cameraman" is a natural image with continuous tone, which is extensively employed in the MATLAB image database and in the digital image processing field.
Here, several experimental settings need to be clarified. (1) For the quantitative comparison of image quality, we employ the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as metrics. (2) When comparing the measurement matrices with the OMP algorithm, the number of iterations is set to 1/4 of the number of sampling patterns. (3) When comparing the performance of the reconstruction algorithms, the iteration number of all greedy algorithms is set to 200.
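For reference, the two metrics can be computed with scikit-image as sketched below (assuming images scaled to [0, 1]); this is simply how such metrics are typically evaluated, not code from the paper:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality(reference, reconstruction):
    """PSNR (dB) and SSIM between a reference image and its reconstruction,
    both assumed to be float arrays scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=1.0)
    ssim = structural_similarity(reference, reconstruction, data_range=1.0)
    return psnr, ssim
```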
Comparison of Measurement Matrix Performance
When testing the performance of the CSSPI measurement matrices mentioned above, we first selected the DCT as the sparse dictionary. Subsequently, the OMP algorithm was adopted to reconstruct the image. Because of the inherent randomness of the random measurement matrices, each test was repeated 10 times and the average data were recorded.
The simulation results are exhibited in Figures 2 and 3. Table 2 summarizes the six measurement matrices universally used in CSSPI. Figure 2a depicts the PSNR, SSIM, and running time under sampling rates of 20%-100%. In the case of full sampling, the reconstruction quality of the partial Fourier matrix is better than that of the other measurement matrices. Under subsampling, the reconstruction quality of the partial Fourier and Toeplitz matrices is worse than that of the other measurement matrices. Apart from that, the Gaussian random matrix, Bernoulli random matrix, sparse random matrix, and partial Hadamard matrix show the same performance in terms of reconstruction quality. As conspicuously revealed in Figure 2a, the running time of the partial Fourier matrix is the longest. Figure 2b illustrates the reconstructed images of each measurement matrix at several different sampling rates.
Furthermore, different levels of Gaussian noise were used to pollute the measurements, and the image was reconstructed at a sampling rate of 80%. When the signal-to-noise ratio (SNR) is 35 dB to −5 dB, the performance of the different measurement matrices is shown in Figure 3. The degradation rates of the PSNR and SSIM values of the Gaussian random matrix, Bernoulli random matrix, sparse random matrix, and Hadamard matrix are almost the same under different noise levels. The image reconstruction quality degradation rate of the Toeplitz matrix and Fourier matrix is the lowest at low noise levels. Additionally, the PSNR and SSIM values of the Toeplitz matrix are the same as those of all other matrices at high noise levels. Therefore, the Toeplitz matrix and Fourier matrix show a better ability to suppress noise, whereas the Gaussian random matrix, Bernoulli random matrix, sparse random matrix, and Hadamard matrix have the same ability to suppress noise. From the running time curve in Figure 3a, it can be seen that the reconstruction efficiency of all measurement matrices is not affected by noise. The above results can be further confirmed in Figure 3b.
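The noise model used in such tests can be reproduced as in the sketch below, which corrupts the measurement vector with white Gaussian noise at a prescribed SNR in dB (an assumption about how the noise level is set):

```python
import numpy as np

def add_gaussian_noise(y, snr_db, seed=0):
    """Corrupt the single-pixel measurements y with white Gaussian noise
    so that the resulting signal-to-noise ratio equals snr_db (in dB)."""
    rng = np.random.default_rng(seed)
    signal_power = np.mean(y ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(scale=np.sqrt(noise_power), size=y.shape)
    return y + noise
```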
Comparison of Reconstruction Algorithm Performance
In this work, we test the CSSPI algorithms mentioned above at different sampling rates and noise levels. It should be noted that not all algorithms are directly comparable because many algorithms make different assumptions about the measurement matrix at the beginning of their design. Therefore, to compare the performance of the algorithms in terms of universality, the measurement matrix is the most widely used Gaussian random matrix, and the sparse dictionary is the DCT dictionary. At the same time, to reduce the influence of randomness on the results, the simulation of each algorithm is repeated 10 times, and the results are averaged.

In Appendix A, we make an intra-class quantitative comparison and compare the mentioned algorithms in detail. From the results, we obtain several algorithms with better comprehensive performance, namely OMP, IHT, BP, BPDN, TVAL3, BCS, and SB. Therefore, in the following comparison, we mainly take these algorithms as representatives and make an inter-class comparison of all kinds of algorithms in detail.
Figure 4 shows the results of an inter-class comparison of the representative algorithms that perform better among the various types of algorithms under different sampling rates. Figure 5 shows the images reconstructed by some of these algorithms at 20%, 40%, 60%, and 80% sampling rates, respectively. As can be seen from Figure 4, the reconstruction quality of the TV minimization algorithms is the best. This is because the image signal cannot be perfectly sparse even on a specific sparse basis, and the TV minimization algorithms were specially designed for the gradient sparsity of the image signal, so they are more suitable for image reconstruction. TVAL3 takes the least time to reconstruct the image. The reconstruction quality of the convex optimization algorithms is better than that of the greedy algorithms because the convex optimization algorithms look for the global optimal solution in each iteration, while the greedy algorithms look for a local optimal solution. In addition, the running time of the greedy algorithms is less than that of the convex optimization algorithms since a greedy algorithm only seeks the best matching atom rather than an atomic set in each iteration. The reconstruction quality and time of the Bayesian algorithm and the Bregman minimization algorithm lie between those of the convex optimization algorithms and the greedy algorithms.
In the next test, we will further study the performance of various reconstruction algorithms with different noise.
When the SNR is 35 dB to −5 dB, the performance of the various reconstruction algorithms at a sampling rate of 80% is shown in Figures 6 and 7. Figure 6 shows the results of the algorithms with better anti-noise performance in each group. It can be seen from the graph that the deterioration rate of DLP, TV-QC, and the greedy algorithms is the lowest, and the reconstruction result of TVAL3 is the best. The reconstruction quality of BPDN and SB is better than that of BCS and the greedy algorithms. It should be emphasized that the greedy algorithms, DLP, and TV-QC, which have poor reconstruction quality in the noiseless case, all have a low rate of deterioration under noise; in particular, at low SNR (that is, SNR < 10 dB), their reconstruction quality is equal to that of the other algorithms. The PSNR of OMP and IHT is higher than that of all other algorithms at low SNR.
The same conclusion as the quantitative comparison can also be drawn from the reconstructed images shown in Figure 7. It can be seen from the figure that the reconstruction quality of the greedy algorithms is less affected by noise, and it is difficult to observe any difference under different SNRs. When the SNR is 0 dB, the four greedy algorithms can still reveal the rough contours of the figure; however, with the other algorithms (such as TVAL3) it is difficult to distinguish the contours of the figure in the image.

Based on the summary of the simulation test results of the various algorithms, the TV minimization algorithms, which are based on image gradient sparsity, show the best comprehensive performance in the various cases and are more suitable for image signal reconstruction. Table 3 is a summary of the simulation-tested CSSPI reconstruction algorithms in this paper.
Experiment
In order to further compare the performance of different CSSPI measurement matrices and reconstruction algorithms, we designed a laboratory experiment to test the performance indicators under real experimental data, and the experiment setup is shown in Figure 8. A projector (ACER V36X) illuminates the target by patterns, and the reflected light of the target is incident onto a single-pixel detector (KG-PR-200K-A-FS) and transferred to the computer by the DAQ (NI DAQ USB-6216) for image reconstruction. In our experiment, we set the resolution of the projector to 1024 × 1280 pixels and projected a pattern every 0.3 s. This speed is very slow compared with the digital micromirror device (DMD), but the long acquisition time also means high SNR in the measurements. A simple picture of "four bars" and a complicated picture of a "ladybug" are used for the demonstration. The results are shown in Figures 9 and 10. All measurement matrices are projected by differential projection to further reduce the impact of noise, and the imaging resolution is 64 × 64.
Figure 8. Schematic of the experiment setup.

Figure 9 shows the reconstructed image results of the unstructured random, partially orthogonal, and structured random measurement matrices (that is, Bernoulli, Hadamard, and sparse) at different sampling rates. We select the DCT as the sparse dictionary, and the reconstruction algorithm adopts OMP. It can be found that the reconstruction quality of the sparse random matrix is slightly lower than that of the other two measurement matrices. At a sampling rate of 20%, the approximate outline of the target can be distinguished from the reconstructed results. For the "ladybug" model with more details, the quality of the reconstructed image is slightly worse. Additionally, the reconstruction quality of the Hadamard matrix is slightly better than that of the other two kinds of matrices. The experimental results for the measurement matrices are consistent with the conclusions of the above simulation tests.

Figure 10 shows the image reconstruction results of the convex optimization algorithm, greedy algorithm, TV minimization algorithm, non-convex optimization algorithm, and Bregman minimization algorithm under different sampling rates. We use the partial Hadamard matrix as the measurement matrix and the DCT as the sparse dictionary. For the sparse "four bars", the SSIM of the reconstructed image reaches 0.7 at a 40% sampling rate for the OMP, BP, and BCS algorithms. For the "ladybug" with more details, the reconstruction performance of the TVAL3 algorithm is the best. The performance of the SB algorithm based on Bregman minimization in the actual imaging system is not satisfactory. The experimental results support the conclusions drawn in Section 5.
Discussion and Conclusions
The different CS measurement matrices and reconstruction algorithms used in SPI involve certain tradeoffs in hardware implementation, acquisition efficiency, recovery efficiency, imaging quality, and noise robustness, as shown in Tables 2 and 3. In addition, with regard to CSSPI, the trade-off between imaging quality and imaging efficiency is the predominant factor that limits its application. To deal with this problem, the authors believe that it is essential to further improve the performance of CSSPI in the following aspects: (1) In terms of signal acquisition, most of the current methods directly adopt the measurement matrix to conduct a linear measurement. If we first take into consideration the possible noise in the actual environment and introduce some local nonlinear operations in the measurement, we can expect to obtain more robust measurements. (2) In terms of image reconstruction, if the sparsity of the signal can be exploited more fully when formulating the optimization problem, a better reconstruction effect can be expected. (3) Machine learning can be adopted to optimize the measurement matrix and the reconstruction algorithm of CSSPI at the same time to achieve the best match, which is expected to strike a balance between measurement efficiency and imaging quality. There have been some efforts to use machine learning algorithms to improve the performance of CS, which shows the strong impact of machine learning on CS [87,[171][172][173][174]].
In summary, we introduced the principle of SPI technology based on CS. Under different parameter settings, including sampling ratio and noise, we then tested and compared the mainstream measurement matrices and reconstruction algorithms in CSSPI on both simulated data and real captured data. Afterward, we investigated the problems currently existing in CSSPI and set forth the future research directions and development trends accordingly. Our work provides a comprehensive summary of conventional CSSPI and offers practical guidance for its development and application.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A
Here, we make an intra-class quantitative comparison of the mentioned algorithms in detail. We use "cameraman" with a size of 64 × 64 pixels as the test image in the simulation. The iteration number of all greedy algorithms is set to 200. Figure A1 shows the PSNR, SSIM, and running time curves at different sampling ratios, and Figure A2 shows the reconstructed images at selected sampling rates. Figure A3 shows the PSNR and SSIM curves under different levels of Gaussian noise, and Figure A4 shows the corresponding reconstructed images.
As can be observed in Figure A1a, the four convex optimization algorithms show similar performance; all of them can achieve perfect reconstruction at the full sampling rate. The reconstruction quality of DLP at a low sampling rate is weaker than that of the other three. In addition, the running time of BPDN and DLP algorithms is almost not affected by the sampling rate. The running time of the BP and DS is greatly affected by the sampling rate, especially the running time of the DS, which increases greatly with the increase in sampling rate.
As can be seen from Figure A1b, the reconstruction quality of the four greedy algorithms is almost the same. The PSNR and SSIM of OMP are slightly higher than those of other algorithms at the low sampling rate, while the SP is the highest at the high sampling rate. The running time of OMP and IHT is the least affected and is almost not affected by the sampling rate, while the running time of CoSaMP and SP increases with the increased sampling rate. Figure A1c makes a quantitative analysis of three commonly used algorithms based on TV minimization, and the reconstruction quality of each algorithm improves continuously with the increased sampling rate. It should be noted that when the sampling rate is greater than 70%, the SSIM of TVAL3 and TV-DS tends to be stable, while the corresponding PSNR even decreases slightly. TV-DS takes much more time to rebuild than the other two algorithms; hence, to see the time spent by TVAL3 and TV-QC clearly, the green box area where they are located is locally enlarged. We find that the running time of TVAL3 and TV-QC is almost not affected by the sampling rate, even if there is a small increase in the range of less than 1 s. Therefore, in the reconstruction algorithm based on TV minimization, the reconstruction quality of using the Dantzig solver to solve the TV minimization problem is the best; however, it takes a lot of time, which is almost 100 times that of other algorithms.
Although the reconstruction quality of the TVAL3 algorithm is slightly lower than that of the TV-DS, the SSIM value is more than 80% at a 70% sampling rate and takes little time.
From Figure A1d, we can see that the reconstruction quality of IRLS is better than that of BCS and SB at all sampling rates; however, it takes a lot of time, which seriously reduces its practical value, especially in real-time imaging scenes. The reconstruction quality of BCS is better than that of SB at the low sampling rate, and the running time is the least among the three algorithms. However, when the sampling rate is high, the comprehensive performance of SB is better. Therefore, BCS is more suitable for low sampling rate scenarios, while SB is suitable for high sampling rate scenarios.
When the SNR is 35 dB to −5 dB, the performance of the various reconstruction algorithms at a sampling rate of 80% is shown in Figures A3 and A4. From Figure A3a, we can observe that as the SNR of the single-pixel measurements decreases, the image reconstruction quality of the four convex optimization algorithms declines. The decline rate of the reconstruction quality of DLP is the slowest, and the degradation rates of DS, BP, and BPDN are almost the same. From Figure A3b, we can see that the anti-noise performance of the four greedy algorithms is essentially the same, with only slight differences. From Figure A3c, we can see that among the algorithms based on TV minimization, the deterioration rate of TV-QC is the lowest. The reconstruction quality of TVAL3 is slightly worse than that of TV-DS at high SNR, but the reconstruction efficiency of TVAL3 is higher, so the comprehensive performance of TVAL3 is still the best. Figure A3d gives the anti-noise performance of the non-convex optimization algorithm and the Bregman minimization algorithm. The two types of algorithms have the same deterioration rate under Gaussian noise, among which the reconstruction result of IRLS, which belongs to the non-convex optimization algorithms, is the best.
|
2023-05-14T15:08:18.523Z
|
2023-05-01T00:00:00.000
|
{
"year": 2023,
"sha1": "d91d1892e1f71df9f69036b3d62cf19c4b7e63a4",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/s23104678",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "0118ed7471d18bc92b65d1df2050648d7437166c",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
236398590
|
pes2o/s2orc
|
v3-fos-license
|
Remote Learning Readiness and Challenges: Perceptions and Experiences among Tertiary State University Management Students
COVID-19 has disrupted the education system globally, leading education institutions to migrate to remote learning. This study on online learning readiness and competence was conducted among management students to examine their perceptions of the importance of, and their confidence in, online learning competence factors. Using the Student Readiness for Online Learning (SROL) instrument, the results show that the students consider Technical Competence as very important and that they are somewhat confident in their online learning competence. Both the perceived importance of the online competence factors and the competency levels significantly correlate with the students' self-report of whether or not they have learned in the course. Among the eight online learning challenges, the students find the "lack of technical skills in using online learning" the least challenging. This study concludes with the recommendation that pedagogical and technological interventions be pursued to address the inadequacies in the online teaching-learning process.
The country's premier state university took a stance to transition to "remote learning and/or the blending of remote and face-to-face learning for AY 2020-2021 if the public health situation allows. " (Bautista, 2020). The university directive detailed how the faculty members were expected to redesign courses to be delivered through remote teaching and learning modality. In this context, remote learning is a teaching/learning mode with instructors and students located in two different places and the instruction is conducted through real-time virtual interactions (synchronous) or through offline modular-based learning (asynchronous) (Moore et al., 2011).
The implementation of the remote learning modality elicited a variety of transition challenges.
To obtain first-hand assessments on the extent of student engagement as well as to examine the students' online readiness and learning adeptness, the researcher rolled out a survey, midway through the semester, intended to evaluate the effectiveness of the implemented remote learning modalities.
The courses were electronically delivered mainly through a learning management system, where the course requirements were made accessible and occasional synchronous meetings were also conducted. The researcher's overarching goal was to find out whether the learner-centric course objectives have been achieved in the new learning platform so far, and specifically, to examine the students' online readiness competencies in relation to their learning of the course.
According to Schultz and DeMers (2020), the online environment "leverages on technological tools and that the students must take the responsibility for their learning" (p. 3). To be engaged, to be satisfied with the course and to perform well, the students must possess the basic competencies of self-directed learning and the technology-mediated learning adeptness. Bouilheres et al. (2020) state that "student engagement through the use of appropriate technology "enhances student performance and course satisfaction" (p. 3052).
Because of the novelty of the remote learning implementation in the Philippines, there is dearth of extant studies on students' remote learning experiences. One of these is the study of Rotas and Cahapay (2020) which identifies the categories of difficulties in remote learning. Another two are the studies of Alvarez (2020) and Toquero (2021) which examine the challenges of and the coping strategies within the emergency remote teaching Other authors define blended learning as a "prevalent component of traditional face-to-face and online education environments (Picciano, 2017). Moore et al. (2011) In a study on the online learning readiness among university students in Malaysia, Chung et al. (2020) had the students rank the following eight challenges faced by students in an online learning environment, namely, internet connectivity; too many different online learning methods used by different lecturers; limited broadband data; slow personal laptop, devices; difficult to focus due to distractions from surroundings; lack of motivation due to absence of face to face contact with friends and lecturers; difficult to understand the content of the subjects; and, lack of technical skills in using online learning (p. 55).
Transitioning to online learning has burdened students who are required to possess a variety of skills competencies and resources (Radu et al., 2020).
Technology-mediated online learning requires students to be self-directed and independent, as they "consider new ways to prepare, organize, engage, and complete requirements" (Martin et al., 2020, p. 39). Radu et al. (2020) For the research community, the results of this study, on the bases of the variables investigated, may provide initial insights on factors affecting the students' learning in a remote learning set-up and may serve as springboard for future related studies.
Methods
A descriptive research design was used to evaluate the extent of student learning in the remote learning modalities and to assess the students' online readiness competencies. The cross-sectional study was conducted midway through the first semester, academic year 2020 -2021 after the implementation of the remote teaching / learning set-up. Fifty-three (53) The students were asked to rate on a four-point Likert scale (4 -unimportant, 3 -neither important nor unimportant, 2 -somewhat important, 1 -very important). The second instance measured the students' confidence in their readiness for online learning. The students were asked to rate on a fivepoint Likert scale (5 -very unconfident, 4 -somewhat unconfident, 3 -neither confident not unconfident, 2somewhat confident, 1 -very confident). In addition, the students were asked on whether they have been learning so far. The responses were on a fourpoint Likert scale (4 -strongly disagree, 3 -disagree, 2 -agree, 1 -strongly agree). A descriptive design was also used to summarize the students' ranking of the importance of the challenges related to online learning. The eight challenges faced by students in an online learning environment were adopted from Chung et al. (2020) where the students were asked to rank the factors they perceived as most challenging, ranking from one (1), the factor deemed most challenging, to eight (8) as the least challenging.
Results and Discussion
A total of 53 BS Management students voluntarily participated in the survey, and 43 were female. The age range is from 20 to 22, and 66% were 21 years old. These participants belong to the first cohort of the reformed basic education system which implemented the additional two years of Senior High School. Overall, the study's findings show that the students are knowledgeable of the importance of the online readiness for learning, but that they seem to undervalue themselves in terms of their confidence on the online learning competencies.
This suggests that infrastructural interventions and pedagogical strategies are needed in order to enhance the students' confidence in their online readiness for learning competencies. Table 2b shows the Pearson r correlations between the students' self-report on their having been learning so far, and their confidence on their online readiness competency levels. The last objective of the study is to find out, based on the students' remote learning experience, their ranking of the challenges related to online learning. Similarly, this finding is parallel to that of Chung et al. (2020) where Malaysian university students also ranked "lack of technical skills in using online learning" as the least challenging. Moreover, the findings are congruent with Rotas and Cahapay (2020) and Chung et al. (2020) where "unstable/unreliable internet connectivity" is ranked first, considered as the most challenging concern in remote learning. The results likewise exhibit similarities with first among the four challenges, "poor to no internet access," as evaluated by Filipino students on the shift to emergency remote learning (Alvarez, 2020).
Conclusion and Recommendations
This study intended to investigate the online learning readiness competence perceptions among students of a state university which transitioned into remote learning. The participants in this study expressed the importance of the online learning readiness competencies to be able to actively participate in the learning activities and likewise recognize their inadequacy in terms of confidence in most of the online learning readiness competencies measures. Schultz and DeMers (2020) advance the idea that the online environment "leverages on technological tools" where "students must take the responsibility for their learning" (p. 3).
The study also examined the extent of learning among the students and tested the relationship between the students' online readiness competencies and the extent of their learning. To be successful in the remote learning modality, students should possess adequate technology-mediated learning adeptness and should be self-directed towards learning. Hence, it is recommended that educators and administrators ensure that students are online-learning-ready, and that they review the pedagogical and technological intervention schemes which facilitate student learning.
Finally, this study evaluated the students' experiences on the challenges they faced in relation to online learning. The "unstable/unreliable internet connectivity" truly is the bane of all the online learning stakeholders in the Philippines.
Considering the gravity and urgency of this concern, government intervention is imperative. Otherwise, this unsuitable state of data access infrastructure may spawn learning achievement asymmetries, marginalizing those who have unreliable internet connectivity. The findings also call for the appropriate upskilling of faculty members in remote pedagogical strategies and learning technologies, as well as a greater expectation for educators to be more considerate of students' learning and performance in the novel learning set-up.
As an exploratory study, these nascent
|
2021-07-27T00:05:05.862Z
|
2021-05-31T00:00:00.000
|
{
"year": 2021,
"sha1": "94ab451ce5dddbbde6d90be0b0b91c65e73dfd5b",
"oa_license": "CCBYNC",
"oa_url": "https://rmrj.usjr.edu.ph/rmrj/index.php/RMRJ/article/download/967/230",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "13c98ac9dd50e5ef8d9c5debc880050d1ea1ba42",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
}
|
11487903
|
pes2o/s2orc
|
v3-fos-license
|
Performance assessment of product service system from system architecture perspectives
. New business models in complex engineering products have favoured the integration of acquisition and sustainment phases in capability development. The product service system (cid:2) PSS (cid:3) concept enables manufacturers of complex engineering products to incorporate support services into the product’s manufacturing and sustainment lifecycle. However, the PSS design has imposed significant risks to the manufacturer not only in the manufacture of the product itself, but also in the provision of support services over long period of time at a predetermined price. This paper analysed three case studies using case study research design approach and mapped the service elements of the case studies to the generic complex engineering product service system (cid:2) CEPSS (cid:3) model. By establishing the concept of capability distribution for a PSS enterprise, the capability of the CEPSS can be overlaid on the performance-based reward scheme so that decision makers evaluate options related to the business opportunities presented to them.
Introduction
A recent trend around the world among the owners of complex engineering systems, such as aircraft or oil refineries, is to include consideration for the sustainment of the system at the very early stages of system development. According to the Defence Materiel Organisation in Australia 1 , the asset acquisition project is considered a continuum of four phases, which can be generalised as a capability systems lifecycle as shown in Figure 1. The goal is to ultimately attain desired capability levels that can be measured as a performance outcome of systems in-service.
There are two different contracting regimes in Figure 1.
(i) System acquisition agreements including functional and performance specifications of the final system, that is, the tendering and contracting activities in the acquisition phase.
(ii) Sustainment agreements specifying outcomes and performance requirements for in-service support, that is, the in-service support contracts in the sustainment phase.
Although the process is still at an early stage of development, the Australian Defence intends to adopt a more integrated approach by contracting for acquisition and sustainment simultaneously in some of its new system acquisitions.
Similarly, the Ministry of Defence 2 in the UK is managing a general shift in defence acquisition away from the traditional pattern of designing and manufacturing successive generations of platforms. Instead, a new paradigm centred on support, sustainability, and the incremental enhancement of existing capabilities from technology insertions has evolved, with the emphasis increasingly on through-life capability management. The new approach to acquisition is built around the objectives of achieving (i) primacy of through-life considerations; (ii) coherence of defence spend across research, development, procurement, and support; and (iii) successful management of acquisition at the departmental level.
From the industry's point of view, this shift in the defence acquisition process means longer, more assured revenue streams based on long-term support and ongoing development instead of a series of big "must win" procurements 3. Observation of other industry sectors shows similar changes in the manufacturing and sustainment of complex engineering products such as oil refineries and mining machinery 4.
Traditionally, management of sustainment services is the responsibility of the asset owner after the product is commissioned. Most asset owners simply follow the recommended schedule of the manufacturer, carried out either by an in-house service department or by a maintenance services contractor. The strategy is to minimise expenditure on the asset 5. Classical service and maintenance plans are therefore designed on the principle that the mean time between failures is constant, and hence the focus is on replacing components before they fail. Typically, service activities including inspection, adjustment, and replacement are scheduled at fixed intervals 6. Due to the multifaceted relationship between the operating context and characteristics inherent in the complex system, these intervals may not be optimised 7. On the other hand, the reliability-centred maintenance regime has been developed to plan maintenance actions based on an advanced understanding of the reliability of the system 8. In addition, many other factors also influence the operations of the asset 9. Many service decisions on assets are therefore made on rules of thumb rather than on analysed system performance data, and many complex systems are left vulnerable to high risks of failure.
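To make the fixed-interval assumption concrete, the short sketch below (not from the paper) computes a replacement interval from a constant mean time between failures, assuming an exponential failure model; the MTBF and failure-probability target are hypothetical figures chosen for illustration.

```python
import math

def replacement_interval(mtbf_hours: float, max_failure_prob: float) -> float:
    """Interval T such that P(failure before T) <= max_failure_prob,
    assuming an exponential failure model (constant failure rate 1/MTBF)."""
    # P(failure before T) = 1 - exp(-T / MTBF)  =>  T = -MTBF * ln(1 - p)
    return -mtbf_hours * math.log(1.0 - max_failure_prob)

if __name__ == "__main__":
    # Hypothetical component: 10,000-hour MTBF, serviced so that no more
    # than 5% of units are expected to fail between services.
    print(f"Service every {replacement_interval(10_000, 0.05):,.0f} hours")
```

If the constant-MTBF assumption does not hold, as the paragraph above notes, the interval produced this way will not be optimal, which is the motivation for reliability-centred and condition-based approaches.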
A new service business model known as "performance-based contracting" has emerged in recent years as one of the favoured contracting mechanisms for the public sector and asset-intensive industries 10. Under the performance-based contracting approach, a contractor offering systems support services needs to design an operation and support system that is sustainable, fit for purpose, and demonstrates value for money. The advantage of performance-based contracting is the sharing of benefits on both sides of the business: efficiency gains are shared between the contractor and the owner of the business 11. In this regard, original equipment manufacturers have a significant advantage over other service providers because they know their product well, and many equipment suppliers have taken the opportunity to expand into offering after-sales services to customers 12.
A manufacturer of a complex engineering system entering into this kind of contract takes on substantial risk. For example, the new contracting framework by the Defence Materiel Organisation 13 contains several elements of incentives and penalties. Application of these elements depends on the actual system performance results within four bands, described below; an illustrative sketch of how such a scheme maps achieved performance to payment follows the band descriptions.
(i) Performance Band I. The system performance result is below the required performance level. The contract payment is reduced proportionally to the actual performance outcome as a disincentive.
(ii) Performance Band II. The result is poor but may be tolerable for the short term only. The contract payment is significantly reduced, proportionally to the actual performance outcome, with a more rapid reduction ratio until reaching zero.
(iii) Performance Band III. The result is totally unsatisfactory and represents an irrecoverable failure. No payment is made and other remedies may be applied.
(iv) Performance Band IV. The achieved performance equals or exceeds the required performance level. An optional performance incentive may be paid in addition to the agreed value.
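As a rough illustration of how such a four-band scheme could be applied, the following sketch maps an achieved performance level to a contract payment. The band thresholds, reduction ratio, and incentive rate are invented for the example; the actual framework defines its own values.

```python
def contract_payment(achieved: float, required: float, agreed_value: float,
                     band2_floor: float = 0.8, band3_floor: float = 0.6,
                     incentive_rate: float = 0.05) -> float:
    """Map achieved performance to payment under a four-band incentive/penalty
    scheme.  Thresholds and ratios are illustrative placeholders only."""
    ratio = achieved / required
    if ratio >= 1.0:                 # Band IV: meets or exceeds the requirement
        return agreed_value * (1.0 + incentive_rate)   # optional incentive
    if ratio >= band2_floor:         # Band I: below requirement, proportional cut
        return agreed_value * ratio
    if ratio >= band3_floor:         # Band II: poor, steeper cut reaching zero
        return agreed_value * band2_floor * (ratio - band3_floor) / (band2_floor - band3_floor)
    return 0.0                       # Band III: irrecoverable failure, no payment

# Example call with made-up figures: required level 0.95, achieved 0.88.
print(contract_payment(achieved=0.88, required=0.95, agreed_value=1_000_000))
```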
In this new service business model, risks exist throughout the whole life of the product. How can the manufacturer know in advance, at the conceptual design phase of the product life cycle, what performance he or she can achieve? In which performance band is the system going to operate? There are many factors affecting system performance, for example, new deployment requirements or a change of software operating system. The technology in the system is already very sophisticated, and the model adds the further complexities of sustainment and lifelong services in the commercial contract. These represent several layers of uncertainty in the commitment on the part of the service provider. A decision support methodology that can reduce such uncertainties over the life of the asset is required. This paper is motivated by the fact that the new business paradigm emerging in the service sector demands a new set of principles and knowledge to assist the manufacturing industry. A new product and service enterprise architecture is proposed in this paper and verified against three past service system design projects using a case study research design methodology. Based on the new architectural model, an assessment methodology is proposed for assisting decision makers to evaluate the options available to them in regard to the business opportunities presented to them.
Product Service System
The shift of complex engineering products manufacture to service-oriented business environment has necessitated research in developing a new business model 14 .Abe 15 studied a service-oriented solution framework designed for Internet banking and described the new research as "service science".The concept of product service system PSS was initially developed around the optimization of sustainability criteria to operations, maintenance, and environmental related issues around the product 16 .The PSS concept extends, on the basis of an existing complex product, the provision of support services on that complex product when it is in operation.Bairnes et al. 17 presented a clinical style survey of contemporary practices in PSS and subsequently defined PSS as a special case of servitization.It is obvious that there are commercial benefits for companies to move into continuous services and support operations of the complex products they manufacture.In addition to the technical requirements of supporting a complex engineering system, a key feature of the new PSS model is the extension of product offering to service offering.A service system comprises people and technologies that adaptively adjust a system's value of knowledge while the system changes in its lifecycle 18 .
One of the key questions emerged from this approach is the uniqueness of service requirements.Every complex engineering product is different and hence it is fair to say that each PSS is customised.Johansson and Olhager 19 examined the linkage between goods manufacturing and service operations and developed a framework for process choices that enable joint manufacturing and after-sale services operations.Study showed that moving into services-oriented business could have significant financial implications to the company 20 .In a performance-oriented service system, decisions for optimization can be quite different from maintenance oriented service concepts.For example, in order to reduce time of service to customers, Shen and Daskin 21 suggested that a relatively small incremental inventory cost would be necessary to achieve significant service improvements.Hence, to develop service systems that can handle this type of business requirements, companies should build common business functionalities as shared services so that they can be reused across lines of business as well as delivery channels 22 .
When compared to traditional support arrangements, PSS concept changes a contractor's roles and responsibilities by shifting the support service to customer focus.Under service-oriented arrangements, the service provider is responsible for the full spectrum of support, including ownership, sustainment, and operation of assets.Furthermore, contracting arrangements will include incentives and penalties against levels of support service or delivery.The service provider will need to think differently and design the output solutions that deliver the desired outputs as well as generating profit.This is a different type of business with unfamiliar contractual metrics and risks.In PSS, the emphasis is on customisation of solution designs to meet service needs and create new values of use for the customer 23 .
A performance-based contract in PSS will include incentives and penalties against levels of support service or delivery as discussed before.Hence, the service contractor should have a thorough understanding of how the system works and how the supporting systems around the asset provide the services to achieve the desirable performance 24 .Due to this highly individualised nature of service, no one performance-based contract is the same.The support system then becomes a one-off development which imposes significant system design issues to both asset owners and contractors.
A service contract often involves active interaction within the supply chain and with the customer.In a service environment, it is normal that a new separate enterprise is formed from several independent, collaborating enterprises.There are many risks in this strategy, for example, there are risks in collaboration, confidentiality, intellectual property, transfer of goods, conflicts, opportunity loss, product liability, and others.To minimise the risks for the new service enterprise, enterprise engineering researches provide an enterprise architecture framework as a common starting point.The study of enterprise architecture in the last decade has been on how enterprises can be designed and operated in an environment when the enterprise missions and objectives are clear.They assumed that one can follow the common engineering practice of well established sequences of steps: design, implementation, operation, and decommission phases 25 .The rationale to use enterprise engineering methodologies to guide these steps is to minimize enterprise design modifications and associated rework of the system governing information and material flows 26 .A PSS as discussed in this paper is a dynamic system.Any unplanned change to the enterprise is an impact of uncertainty to enterprise performances.The enterprise architecture EA approach provides a structured system to manage services activities, for example, promote planning, reduce risk, implement new standard operating procedures and controls, and rationalize manufacturing facilities 27 .Hence, this paper uses an EA approach to understand the enterprise under which the product and related services are managed and to assess the performance of the PSS.
Architectural Approach
An enterprise architecture defines the methods and tools needed to identify and carry out change 28. Enterprises need a lifecycle architecture that describes the progression of an enterprise from the point of realising that change is necessary through to setting up a project for implementing the change process. Denton et al. 29 specified an information technology route map that enabled rapid design of IT solutions to automate some business processes for service supply chains. Therefore, it is crucial to use a systematic design methodology that helps management develop well-defined policies and processes across organisational boundaries and implement the changes in all enterprises of the service supply chain.
However, traditional enterprise architectures are based on a top-down approach and emphasise uniformity throughout the organisation. As such, the structure is inflexible: changing it in order to respond to fast-changing, dynamic issues for in-service engineering systems takes too long to fix any problem 30. Two issues in using standard enterprise architecture to model service and support systems are identified.
Inwards Modelling
Existing enterprise architectures contain functions, data, staff, and resources which are inward looking and focus on internal company issues. There are very few, if any, modelling constructs for interaction with other systems.
Static, Snapshot View of Present and Future
EA modelling methodology is based on the understanding that a model is a snapshot of the enterprise at a certain point in time. Service systems, however, are dynamic organisations: there are frequent staff movements, external environment changes, customer changes, and changes of use context. The static nature of existing EA is incapable of handling changes as anticipated in real service systems.
In order to support decisions on business opportunities through an enterprise architectural approach, the PSS enterprise should have the following characteristics.
Measurement of Performance and the Development of Metrics That Can Be Supported by Technology
The PSS will be operated in parallel with the complex engineering system. Service is qualitatively different from the familiar product-based approach in which hard artefacts are delivered to the asset owner. Service is a negotiated exchange with the asset owner and operator to provide intangible outputs that are usually produced together with the asset owner. A service is usually consumed at the time of production, and services cannot be transferred to other asset owners in the same way that products can. Hence, the development of appropriate performance metrics is essential, and most of these are supported by advanced information and computational technologies.
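As one concrete example of such a metric, the sketch below computes operational availability from logged uptime and downtime. The figures are purely illustrative and are not drawn from any of the case studies.

```python
from datetime import timedelta

def operational_availability(uptime: timedelta, downtime: timedelta) -> float:
    """Ao = uptime / (uptime + downtime), a common in-service performance metric."""
    total = uptime + downtime
    return uptime / total if total else 0.0

# Example: one month of logged operating data (illustrative figures only).
print(operational_availability(timedelta(hours=690), timedelta(hours=30)))  # ~0.958
```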
Use of Proven Enterprise Architecture That Incorporates Broad Range of Engineering Disciplines
PSS incorporates system design knowledge that draws upon principles derived from a wide range of engineering disciplines, including systems engineering, logistics engineering, project management, information systems, and many others. This knowledge helps the system support engineer to take into account as many constraints as possible during the system design phase; these constraints are imposed by the environment in which the complex system and the business operate.
Sustainability Capability That Manages Risks in the Support Contract
Performance-based services are characterised by the need to create value for both the asset owner and the service provider. As such, both sides are treated as co-innovators in the design of the PSS. Many decisions are based on incomplete data rather than a fully analysed data set. There are many risks, both from the point of view of data availability and from subjective human judgement and communication.
When customers want to outsource a service function, capturing the requirements is the real challenge for human intelligence and for the ability to manage what we know, what other people know, and what nobody knows 31. A modelling construct with stronger human interaction characteristics is required.
The SHEL model was developed from analysing and modelling human interaction with physical and project activities 32. Chang and Yeh 33 applied the SHEL model to describe the structure of the air traffic control system and its interface to human operators; the research findings provided practical insights into managing the human performance interfaces of the system as its operating environment changes. Felici et al. 34 applied the SHEL model to define the requirements for a new railway traffic control system. Lei and Le 35 evaluated the risks of human factors in a flight deck system; using a SHEL model, they identified the five most significant factors contributing to risk in the system.
Extending from traditional enterprise modelling methodology, Chattopadhyay and Mo 36 modelled a global engineering services company as a three-column progression process that was centred on human engineering effort.Chattopadhyay et al. 37 developed a business model for virtual manufacturing with particularly emphasis on the need for intense collaborative network for a variable-variety, variable-volume and manufacture to order situation with provisions for recycling and reverse logistics.The concept was further developed as an aggregated model resembling nature's atomic and molecular interaction after studying the supply chain in China 38 .These new attempts to incorporate human participation in modern global enterprises have highlighted the effect of new information and communication technologies in bringing the human dimension in enterprise architecture to a dominated position.
As seen from the literature, the SHEL model focuses on the local, operational level of the enterprise. It does not have the support of an engineering methodology to ensure repeatability and sustainability of the system. Conversely, traditional enterprise architecture methodology tends to ignore human interaction and struggles to describe vibrant enterprises in the services sector. It is therefore logical to develop a new enterprise model for services that combines traditional enterprise architecture with the SHEL concept. We propose this new complex engineering product service system architecture as shown in Figure 2.
Figure 2 shows that a PSS for complex engineering products should be a four-dimensional system architecture: product, process, people, and environment. This is in contrast to conventional enterprise modelling methodologies, which had a significant influence on system development thinking in the 1990s. The new architecture covers the additional "changing" aspect of a service system by relating the concepts of product, process, and people to changes in the environment over time. The four dimensions are interlinked and affect one another. The architecture provides a focus for consolidating existing knowledge of designing a service system as well as an instrument for projecting future requirements so that new features can be developed in an orderly fashion.
Case Study Research Design
Case study research is particularly useful in identifying specific characteristics that affect system performances.Serra and Ferriera 39 identified four strategy pillars in five case studies of well-known multinational corporations.In supply chain case study research, Seuring 40 surveyed 68 papers related to supply chain sustainability and supply chain performance and concluded that more supply chain cases should be documented and reviewed.Lewis et al. 41 researched three case studies in the energy and maintenance management practices.They found that the link between different service requirements should be better understood when the teams worked together for service solutions.
However, extracting the theoretical essence of a PSS is not a trivial exercise from studying a wide variety of cases.For example, Holschbach and Hofmann 42 used case study evidence from eight manufacturing and eight service companies.They found that companies did not use quality management for externally sourced business services to its full potential.There were major difficulties in determining quality failures, standardization, and quantity of service.Zhang et al. 43 carried out a structured literature review on the influence of ICT in supply chain management.They found that despite inconsistency in reported findings in this field of research, there were general positive performance outcomes of supply chains due to ICT system development.Kucza and Gebauer 44 investigated the forms of servitization of products could help global manufacturing firms to develop new service-based and relationship-based value propositions for customers.Four such forms were identified: integrated and ethnocentric; integrated and polycentric; separated and polycentric; and separated and geocentric.In this paper, three case studies are described and their key features are highlighted.The cases are earlier forms of PSS representing various degrees of success in creating new businesses for the equipment suppliers.The ad hoc systems established at the time of these cases provide good examples for benchmarking current thinking of the design and implementation of PSS.These cases are chosen because the parties in the cases have tried to apply a defined enterprise infrastructure that links different parts of the service system working in conjunction with the product.Subsequently, the service system has to be redesigned and tailored to characteristics of the product or the enterprise itself.
Data collection for this type of case study research depends on the relationship between the researchers and the parties in the cases. The author of this paper has had varying levels of participation in all of the cases.
Case Studies
The products in the three cases are complex engineering systems. Case 1 is a computer-controlled plasma cutting machine that can cut steel plates up to 50 mm thick; the machine has been sold around the world. Case 2 is a chemical plant designed and built by a Japanese engineering company; in order to support the customer at minimum cost, the support system was designed around the Internet, which was still evolving at the time the project was done. Case 3 is a defence case in which the shipbuilder formed a service consortium to continue its business after the ships were built. Evaluation and analysis of the cases is based on the CEPSS model presented in Figure 2.
Case Study 1: Signal-Based Condition Monitoring System
System health monitoring plays a critical role in the preventative maintenance and product quality control of modern complex engineering products, and the effectiveness of its management directly impacts their efficiency and cost-effectiveness. A condition monitoring system monitors the products using various classical methods of signal analysis, such as spectrum or state-space analyses 49. Maintenance decisions are then made according to the prediction of system performance.
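As a minimal illustration of spectrum-based monitoring, the sketch below extracts the dominant frequency of a synthetic vibration trace; a drift in this frequency over time would be one simple trigger for a maintenance decision. The signal, sample rate, and threshold logic are invented for the example and are not taken from the case study.

```python
import numpy as np

def dominant_frequency(signal: np.ndarray, sample_rate_hz: float) -> float:
    """Return the frequency with the largest spectral magnitude (DC excluded),
    a basic building block of spectrum-based condition monitoring."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / sample_rate_hz)
    return freqs[1:][np.argmax(spectrum[1:])]   # skip the DC bin

# Synthetic vibration trace: a 50 Hz component plus noise (illustrative only).
t = np.arange(0, 1.0, 1.0 / 1000.0)
trace = np.sin(2 * np.pi * 50 * t) + 0.2 * np.random.randn(t.size)
print(dominant_frequency(trace, 1000.0))   # approximately 50.0
```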
Using time-based signals available from normal machine sensing mechanisms, a CNC machine manufacturer in Australia developed a remote condition monitoring system for plasma CNC cutting systems with the aim of servicing the customer anywhere in the world via the Internet. Figure 3 shows the network structure of the system, known as ROSDAM 50. All ROSDAM-enabled machines were configured as servers that communicated with the global master server, and information about the operation of the machines was captured through individual companies' databases. The significantly improved sources of information enabled the product manufacturer to decide the best option for supporting operation and maintenance of the plasma cutting machine from a distance.
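The kind of status record such a machine might push to the master server can be sketched as below. The field names and values are assumptions made purely for illustration and do not reflect ROSDAM's actual data schema.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class MachineStatus:
    """One status record a monitored machine might report (hypothetical schema)."""
    machine_id: str
    timestamp: str
    arc_voltage_v: float
    cutting_speed_mm_min: float
    consumable_hours: float
    alarms: list[str]

record = MachineStatus(
    machine_id="plasma-07",
    timestamp=datetime.now(timezone.utc).isoformat(),
    arc_voltage_v=128.4,
    cutting_speed_mm_min=2400.0,
    consumable_hours=41.5,
    alarms=[],
)
# Serialised payload that a machine-side server could send to the master server.
print(json.dumps(asdict(record), indent=2))
```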
In this case study, new elements were required to be developed and integrated with the product, that is, the CNC plasma cutting machine. These elements are mapped to the CEPSS model as shown in Table 1.
Case Study 2: Global Operation Support Services
Complex assets are normally built from a large number of components and involve a large number of engineers and contractors. In the past, customers as plant owners usually maintained their own service department. However, the increasing complexity of the plant and of operating conditions, such as environmental considerations, requires service personnel to have a higher level of analysis and judgement capability. Rathwell and Williams 51 used Fluor Daniel as the study platform and validated the use of enterprise engineering methodology for creating services that support the operations of a chemical plant. The study showed that significant efficiency gains could be achieved in the design and implementation of the service system through systematic enterprise modelling analysis.
In managing the design and manufacture of a chemical plant for their customer, Kamio et al. 52 established a service virtual enterprise (SVE) with several partner companies around the world providing after-sales services to the customer (Figure 4).
Each partner in Figure 4 was an independent entity equipped with its own unique capabilities and competencies, assuming responsibility for performing the allocated work. The SVE was designed as a "hosting service" with a broad range of services including plant monitoring, preventive maintenance, trouble-shooting, performance simulation and evaluation, operator training, knowledge management, and risk assessment. Participants in the virtual enterprise had well-defined roles and responsibilities. An essential element in the design of a service enterprise is to develop an efficient system architecture and provide the right resources to the right service tasks. By synchronising organisational activities, sharing information, and reciprocating one another's technologies and tools, each partner in the service enterprise is able to provide services that would have been impossible through individual effort. The PSS therefore requires properly designed components to support the use of technology in the provision of support services to customers.
In this case study, in order to operate the SVE, new service elements were developed. They are listed and mapped to CEPSS in Table 2.
It should be noted that the engineering product remained as it was initially designed. No noticeable engineering change was required on the product itself in order to implement the support service offered by the SVE.
Case Study 3: Ship Service System
The ANZAC Ship Alliance (ASA) could be thought of as a virtual company with shareholders comprising the Australian government and two commercial companies, one of which was the shipbuilder. Its primary goals were to create best value for money 53 and to manage all changes and upgrades to the ANZAC Ships 54. The Alliance was a "solution focused" company in which the staff of the ANZAC Ship Alliance Management Office would develop change solutions, but the detailed design was undertaken by the "shareholders", drawing upon their existing and substantial knowledge of the ANZAC Class.
Prior to the development of the ASA, Hall 55 developed a highly integrated documentation and configuration management system that served the ongoing needs of ten ANZAC class frigates. Over the lifetime of the asset (30 years), changes due to new technologies, people, and defence requirements are inevitable. The organisational structure of the ASA is shown in Figure 5.
In this case study, the enterprise was not set up as a legal entity, and there was no formal binding agreement among the partners in the ASA. In the language of virtual enterprises, the partners were loosely linked organisations, such that everything done in the ASA was based on trust. The new service elements developed on this premise are listed in Table 3.
From the point of view of the shipbuilding company, the ASA was an unprecedented business environment that no one knew exactly how to operate. Initially there were some upgrade projects as continuous support. After several years of operation, the ASA entered into a new material support program focusing on supplies and shore facilities.
The following entries from Table 3 illustrate this mapping:
- Ownership of product-related services rested with the ASA (mapped to Process-Product). The enterprise was a joint development of several companies, and hence the ownership of the product-related services had to be resolved. The issue was settled in the end; as all participants were in the defence business environment and the customer was a partner of the ASA, the "company" structure at the right-hand side of Figure 5 provided sufficient background understanding for the people to rely on.
- Secondment of staff from the three partner organisations (mapped to People-Environment). As the "company" status was eventually accepted, the need to develop a set of processes acceptable to all staff, who were seconded from the partner organisations, became urgent. A lot of time was spent synchronising practices and cultures originating from the individual companies. There was confusion in the first year among the staff of the participating organisations about the nature of the ASA; this issue was resolved through a number of ASA workshops.
Performance-Capability Assessment
From the foregoing case studies, the subsystems of the CEPSS model, and the interactions between them, are represented by specific elements in the cases. Table 4 summarises the relevance of the cases in a matrix.
When a PSS enterprise is created from three interdependent subsystems under the CEPSS model, the ability of the total PSS to meet the performance expectations of the customer will depend on how the capability of each subsystem is designed and on the effect of the environment on the execution of these subsystem capabilities. Theoretically, for each of the subsystems in a CEPSS enterprise, it is possible to devise measures of enterprise capability in relation to the outcomes that the capability can produce; such methods include surveys, interviews, system audits, comparative analysis, human resources records, and so forth. This type of capability assessment is bound to carry a certain degree of uncertainty, and the enterprise capability will change over time due to changes in people, process, and product. By aggregating the subsystem capabilities, the capability of the PSS can be benchmarked against the theoretical capabilities required to achieve the expected performance, so that an assessment of the potential achievement can be made at the outset, when the PSS enterprise is established. Due to the uncertainties explained above, the probability of the PSS enterprise's achievement can be expressed in terms of the frequency with which this capability meets the specified performance metrics.
Using case 1 as an example, and assuming all other capabilities are able to deliver to the required performance standards, the improved service elements can be assessed on a 5-point scale from 1 to 5, where 5 represents most certain and 1 represents rarely meeting expectation, as shown in Table 5.
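A minimal sketch of such an aggregation is shown below. The element names and ratings are invented placeholders rather than the actual Table 5 entries, and the simple mean-plus-threshold rule is only one possible aggregation scheme.

```python
def aggregate_capability(ratings: dict[str, int]) -> tuple[float, str]:
    """Aggregate 1-5 capability ratings of PSS service elements into one score.
    The ratings used below are illustrative, not the case-1 values."""
    mean = sum(ratings.values()) / len(ratings)
    verdict = "likely to meet expectation" if mean > 3 else "at risk of falling short"
    return mean, verdict

ratings = {
    "remote data capture": 4,
    "diagnostic analysis": 4,
    "spare-parts logistics": 3,
    "field service response": 5,
}
print(aggregate_capability(ratings))
```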
The ratings in Table 5 are for illustration only. Since most ratings are above average, the PSS in case 1 is assessed as likely to meet customer expectations. With an assessment of the capability against expected outcomes, the probability of the service contract being successful can then be determined from the contractual terms.
As an illustration, if the Defence Materiel Organisation four-band performance incentive/penalty scheme is used, the capability distribution can be overlaid on the achieved-performance axis as shown in Figure 6. The capability distribution, drawn as a dotted curve, represents the probability density of the enterprise achieving a given performance level on the x-axis. Different risks and probabilities of a PSS contract can then be identified, as shown in Figure 6.
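The overlay in Figure 6 is presented qualitatively in the paper; the sketch below shows one way the band probabilities could be computed, under the additional assumption (mine, not the paper's) that the capability distribution is normal. The required level, band floors, and distribution parameters are illustrative.

```python
import math

def norm_cdf(x: float, mu: float, sigma: float) -> float:
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def band_probabilities(mu: float, sigma: float, required: float,
                       band2_floor: float, band3_floor: float) -> dict[str, float]:
    """Probability of landing in each performance band, assuming a normally
    distributed enterprise capability (an illustrative assumption)."""
    return {
        "Band III (no payment)":      norm_cdf(band3_floor, mu, sigma),
        "Band II (rapid reduction)":  norm_cdf(band2_floor, mu, sigma) - norm_cdf(band3_floor, mu, sigma),
        "Band I (proportional cut)":  norm_cdf(required, mu, sigma) - norm_cdf(band2_floor, mu, sigma),
        "Band IV (full + incentive)": 1.0 - norm_cdf(required, mu, sigma),
    }

# Illustrative numbers: required availability 0.95, capability centred at 0.96.
for band, p in band_probabilities(0.96, 0.02, 0.95, 0.90, 0.85).items():
    print(f"{band}: {p:.2%}")
```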
Several decisions can be made using this assessment outcome. For this discussion, the CEPSS contractor is the prime contractor, a major engineering company working with the client on a new complex engineering system within the capability systems lifecycle shown in Figure 1. Using the performance-capability assessment methodology, the CEPSS contractor has visibility of the risks likely to be incurred in its current proposal.
First, based on this information, the CEPSS contractor can decide whether to go ahead with the enterprise capabilities it has. This is a go/no-go decision scenario. The contractor will have to decide in conjunction with other concurrent opportunities, which may be assessed by the same PSS opportunity assessment methodology or by other means.
Second, if the risk level is too high, the CEPSS contractor can increase its enterprise capabilities by raising the contract price to cover the costs, or by implementing organisational improvements such as lean and six sigma. In the latter case, the time factor of the CEPSS is brought in to map the change of capability over time.
Third, the CEPSS contractor can identify the shortfall capability areas and collaborate with other prime suppliers in the industry. The performance-capability assessment is then modified to reflect the overall performance capability of the combined CEPSS consortium; the contractual detail of the coalition arrangement is outside the scope of this paper 56. A mapping of potential changes over time in the capabilities of the other parties should also be considered, as highlighted by the CEPSS model. Fourth, the CEPSS contractor can consider boosting its core capabilities through mergers and acquisitions with other companies. This case is more complicated, since the choice of which companies to acquire depends on strategic alignment requirements. However, this option represents an immediate shift of the capability distribution to the right. The only concern is whether the new organisation can be restructured and operated effectively and quickly enough to execute the PSS contract 57.
Conclusion
New business models in delivering capabilities from the operations of complex engineering products such as aircraft, ships, and refineries have favoured the integration of the acquisition and sustainment phases of the products. The product service system (PSS) concept enables manufacturers of these complex engineering products to incorporate support services into the product's system capability lifecycle. These services are substantially more complex than routine, reliability-based maintenance or spare parts support. Unfortunately, in the past decade, research in the development of support systems has been fragmented; there is no unified body of knowledge specifying the methodologies that can naturally lead to the design of a support solution for any scenario. This situation prompted this study.
The new type of service business model, represented by performance-based contracts, focuses on the performance of the complex engineering product during operations in terms of timeliness, availability, maintainability, and sustainability costs over the product's complete lifecycle from conception, design, and manufacture to disposal. Ultimately, the service business model is expected to provide long-term benefits to both contractor and customer due to efficiency gains. However, the PSS itself imposes significant risks on the contractor, not only in the manufacture of the product, but also in the provision of support services over a long period of time under a fixed reward scheme with many unknowns. A successful PSS enterprise requires an analysis methodology that can assist the contractor to estimate the performance outcomes of the PSS. In this paper, three case studies are analysed using a case study research design approach to investigate a new complex engineering product service system (CEPSS). The CEPSS combines the conceptual elements of the SHEL model with a systematic enterprise architecture modelling approach. Service elements of the three case studies have been mapped to the CEPSS. Through this mapping, the CEPSS can be broken down into subsystems, and the capabilities of the subsystems can be readily assessed against customer expectation using qualitative methods such as opinion surveys. Once the capabilities are known, they can be overlaid on the performance-based incentive/penalty scheme so that different risk levels can be assessed as a decision support tool. Decision makers can then use this information to select options related to the business opportunities presented to them. This paper is a preliminary investigation of the fundamental question: what service elements should be developed in the new PSS environment? Investigation using the case study approach so far seems to show that the capability of the PSS can be assessed by an aggregated evaluation of these service elements, and hence the expected performance of the service system can be estimated. However, the complexity of the system cannot be ignored. Further research is required to create a consistent scoring framework that can be applied across different risks and engineering systems. Naturally, more case studies on a broad range of engineering products will be necessary, while validation of the CEPSS against these cases using a more quantitative, evidence-based approach is vital to the development of a consistent quantitative scoring framework.
Figure 2: Complex engineering product service system (CEPSS) architecture.
Figure 3: Signal-based condition monitoring service system network structure.
Figure 4: A global service virtual enterprise.
Figure 5: Organisation structure of the ANZAC Ship Alliance.
Figure 6: Performance bands and risks in the PSS contract.
Table 1: Mapping of service elements in case 1 to CEPSS.
Table 2: Mapping of service elements in SVE to CEPSS.
Table 3: Mapping of service elements in ASA to CEPSS.
Table 4: Applicability of cases to elements and interactions of CEPSS.
The Use of Edtech Apps in English Language Learning: EFL Learners’ Perspectives
The emergence of Edtech apps has contributed to the quality of education in general and English language teaching and learning in particular. With the help of Edtech Apps, learners can experience the real world easily and be motivated to learn. Nevertheless, the proliferation of Edtech Apps varies from one context to another. This mixed methods study aims at exploring the utilisation of Edtech apps in English language learning (ELL) from the learners' perspectives. A group of 122 English as a foreign language (EFL) students from a high school in Vietnam answered a closed-ended questionnaire, with fifteen of them taking part in semi-structured interviews. Two types of data, namely quantitative and qualitative data, were generated. The former was processed using the SPSS software, while the latter was analysed thematically. The findings revealed that EFL students had positive attitudes towards the deployment of Edtech Apps in ELL, and they believed that Edtech Apps in ELL were useful, easy to use, and motivating. The study also highlights some pedagogical implications to leverage the quality of English language teaching and learning.
I. INTRODUCTION
Scholars (e.g., Mazman & Uslue, 2010; Tran & Duong, 2022; Tran & Ngo, 2020) have asserted that technology has emerged as a pivotal component in education in general and English language teaching and learning in specific, which has transformed the teaching and learning methods.Technology can help learners to experience the real world and get excited in the learning process (Zengin, 2007), and it can provide them with a chance to learn in a fun and interactive way (Donahoe et al., 2019).Over the course of technology development, Edtech Apps, which have been invented for educational purposes, can provide a more flexible learning environment that can accommodate individual needs and preferences, and they have become the central drive of the evolution of the education system.As such, the use of Edtech Apps in education has caught much attention of researchers worldwide.For example, Polok and Harezak (2018) examined the effectiveness of the utilization of Edtech Apps in English language teaching and learning; Rajendran et al.
(2019) did a study on the effects of Quizizz on learners' motivation and learning engagement; Zainuddin et al. (2020) carried out studies on the effect of Padlet on learners' participation in class activities. This shows that Edtech Apps have been extensively employed in education globally.
In the context of Vietnam, the application of Edtech Apps in English language teaching has been strongly encouraged as the investment in the proliferation of information communication technology (ICT) in general and Edtech Apps has been intensified, aiming to optimize the teaching and learning process (Vietnam MOET, 2021).It is observed that EFL teachers are found to be skillful and willing to exploit Edtech Apps to stimulate students' motivation and attitude towards English language learning (ELL).Nevertheless, the effectiveness of using Edtech Apps in English language teaching is seen differently among teachers.Within the current research context, Edtech Apps (e.g., Microsoft Teams, Google Classroom, Nearpod, Padlet, Quizizz) are used as an alternative and supportive teaching modality in teaching and learning as this type of learning ecology is believed to support students' learning.It is noticed that while some teachers and students can adapt themselves to the new teaching and learning approach, others still get stuck in using Edtech Apps because they face several difficulties in working on Edtech Apps.Furthermore, students are still passive and get distracted in learning while using Edtech Apps.When using Edtech Apps for a long time, students can get tired easily and face problems in communication.Besides, other discernible problems are mixed-level classes with big class sizes, inadequate teaching materials for Edtech Apps, and the genuine motivational environment which hinder teachers and students from applying Edtech Apps in English language teaching and learning.As for ELL, many students' English proficiency is low, and they depend on their teachers; consequently, they cannot accomplish their learning tasks.
Given the aforementioned problems in relation to the application of Edtech Apps, this study sets out to unpack EFL students' perspectives on the use of Edtech Apps in ELL in the context of a high school in Vietnam.
II. LITERATURE REVIEW
The terms Attitude can be variously defined.According to the Oxford Learner's Dictionary attitude is understood as the way one thinks or feels about something or the way one behaves toward someone or something.Likewise, Baker (1992) describes attitude as one's behavior's course and persistence.Attitudes may reflect positive or negative views towards a person, something, or an event; these views may be contradictory at times.Attitude is composed of three interrelated components, viz.cognitive, affective, and behavioral attitudes (Solomon et al., 2010).Cognitive attitude refers to one's mental activities showing knowledge and expectations (Schiffman & Kanuk, 2004).Affective attitude is about one's thoughts and emotions towards an object, thing, or event (Feng & Chen, 2009).Behavioral attitude indicates one's tendencies, behaviors, or reactions to respond or behave towards a particular object (Jain, 2014).In this study, attitude refers to students' feelings or acting, a dynamic mental state that includes emotions, beliefs, and the ability to behave in other ways, and it consists of cognitive, affective, and behavioral attitudes towards the use of Edtech Apps in ELL.Cognitive attitude refers to students' beliefs or disbeliefs about the use of Edtech Apps in ELL; the affective attitude indicates students' emotional response (likes or dislikes) towards the use of Edtech Apps in ELL; and the behavioral attitude is about students' actions or observable responses towards the use of Edtech Apps in ELL.
The deployment of Edtech Apps (e.g., Quizizz, Nearpod, Padlet, etc.) in English language teaching is seen to be effective in terms of usefulness, ease of use, and motivation (e.g., Buttrey, 2021;Singh et al., 2014;Zainuddin et al., 2020).Edtech Apps are considered useful tools with different features helping to implement differentiated instructions and provide learner-centered activities to encourage collaborative and creative activities in the classroom (Singh et al., 2014).Learners are eager to get engaged in the learning process as their knowledge and language skills can be improved.Moreover, Edtech Apps are easy to use as they are innovative, free, user-friendly, and supportive and they can be compatible with different technological devices (Buttrey, 2021;Wang & Chia, 2020).Edtech Apps can have a positive effect on learners' learning engagement and improve their motivation in ELL.Learners can feel motivated in using Edtech Apps in ELL as it is interesting, enjoyable, and fun for them to use Edtech Apps in ELL (e.g., Rajendran, 2019;Zainuddin et al., 2020).
Previous studies have examined aspects of technology in general and Edtech Apps in specific in relation to ELL.Internationally, Monerah (2010) conducted a study to examine students' attitudes towards the use of technology in the classroom in an ESL context.A group of fifty students were involved in responding to the questionnaires.This study indicated that participants showed positive attitudes towards the use of technology in the classroom, and they reckoned that it was effective to learn with the use of technology as it could help them increase their knowledge and skills in English.In another context, Kalanzadeh et al. (2014) explored the impact of technology use on EFL students' motivation in Iran.The participants were a group of sixty Iranian EFL university students.The instrument for collecting data was the questionnaire.Findings revealed that research participants showed positive attitudes towards the technology use in English classes.A relationship between learning English and technology use in EFL classroom was found.Izadpanah and Alavi (2016) investigated high school students' technology use and attitude towards technology use in ELL.A cohort of 638 EFL students sampled from a high school in Iran answered the questionnaires.It was found that participants had positive attitudes towards the technology use in learning English language skills, vocabulary, and grammar.Hani (2021) carried out a study to determine the effectiveness of Quizizz on reading ability.There were 324 students eleventh-grade students and one English teacher of Muhammadiyah Kramat participating in the research.Data was collected through questionnaires, observations, and interviews.Observations were carried out in 2 meetings.From the results of data processing, it was found that the application of Quizizz gets a positive response from students and it is essential to apply an assessment method that is not boring for students.Not long ago, Srisakonwat (2022) conducted a study to investigate the impact of using Nearpod to enhance students' vocabulary knowledge and their satisfaction with learning vocabulary via the Nearpod application.The participants were 3 students at Sansai Withayakhom School in Chiang Mai province, Thailand.The study was conducted by quantitative research and the researcher used instruments including vocabulary lessons via the Nearpod application, a vocabulary knowledge test, and a Satisfaction questionnaire.The findings suggested that vocabulary lessons via the Nearpod application bring many effects and students have good perceptions of Nearpod when using it to learn vocabulary.In the Vietnam context, Tran and Duong (2021) studied non-English majors' attitudes towards autonomous technology-based language learning at a University in Da Lat city.For the purpose of data collection, 450 non-English majors answered the closed-ended questionnaire, and joined in the semi-structured interview.The results revealed that the participants showed positive attitudes towards autonomous technology-based language learning.Nguyen and Nguyen (2021) conducted a study to discover the influence of Mobile-Assisted Language Learning (MALL) on freshmen's vocabulary learning and their perception of the use of this method.Twenty-six students at Thanh Dong University, Hai Duong province, Vietnam attended the eight-week course.Participants partook in pre-tests, post-test, questionnaires, and interviews.The finding showed that MALL could impact students' learning mood positively.Students were keen on taking part in activities 
for vocabulary learning. In short, many studies have examined aspects of technological tools in ELL, and positive results have been obtained. Nonetheless, there is a scarcity of studies on the use of Edtech Apps in ELL in the context of Vietnam.
As such, this study aims at exploring the EFL students' perspectives on the use of Edtech Apps in ELL in the context of a high school in Vietnam.
A. Research Setting and Participants
This study, which adopted the mixed methods sequential explanatory design model (Creswell, 2014;Creswell & Creswell, 2018) to collect data, was conducted at a high school in Vung Tau City, Vietnam.This school is a state-run school famous for its students' English achievements.The school is equipped with modern teaching and learning facilities (e.g., language lab, internet system, interactive whiteboard).This school has both non-native English teachers (Vietnamese) and native English teachers teaching English to students.Teachers are required to use Edtech Apps (e.g., Nearpod, Quizizz, Padlet, etc.) in their teaching in class.Moreover, this study was conducted during the Covid 19 pandemic outbreak, so the teaching and learning took place online.
A cohort of 122 high school students was chosen based on the convenience sampling method. Among them, there were 71 (58.2%) students from grade 10, 11 (9.8%) students from grade 11, and 39 (32%) students from grade 12. Regarding their English proficiency, the majority of participants (72.1%) were Intermediate, 20.5% were Elementary, beginners accounted for 4.1%, and 3.3% were Advanced. With respect to the use of Edtech Apps in ELL, students often employed Quizizz, Padlet, Nearpod, and Azota for English language learning. Fifteen of the 122 students were invited for interviews based on their willingness.
B. Research Instruments
Two research instruments, viz. a closed-ended questionnaire and a semi-structured interview, were utilized for data collection. The questionnaire, which was adapted from Tran and Duong's (2021) study, consists of two main parts: Part A contains the general background questions; Part B features the main questionnaire content. There are two main sections in the content. Section I is composed of 18 items which seek EFL students' attitudes towards the use of Edtech Apps in ELL, and Section II includes 20 items asking about EFL students' perceptions of the use of Edtech Apps in ELL. All the items were designed with a five-point Likert scale (from strongly disagree to strongly agree). The Cronbach's alpha was .92 and .90 for Section I and Section II, respectively, which means the questionnaire was very reliable. Regarding the semi-structured interview, five main interview questions were designed based on the purpose of the study and preliminary results from the questionnaire. The content of the questionnaire and interview was translated into the students' mother tongue to make sure that they did not encounter any language barrier in understanding and answering the questions.
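For readers unfamiliar with how such a reliability coefficient is obtained, a minimal sketch is shown below; the respondent-by-item matrix is toy data, not the study's actual responses.

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Toy data: 6 respondents x 4 items (made up, not the study's responses).
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
])
print(round(cronbach_alpha(scores), 2))
```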
C. Data Collection and Analysis Procedures
Prior to data collection, the two research instruments were piloted with ten students sharing similar characteristics with those in the main study. After being modified, the questionnaire, in Google Forms, was administered to students via email and social networks, and it took students around 20-30 minutes to answer. The returned questionnaire results were checked for content validity. After two weeks of preliminary analysis of the questionnaire data, semi-structured interviews were conducted with fifteen students via Google Meet. Each one-to-one interview was carried out in the student's mother tongue and lasted around 25-30 minutes. All the interviews were recorded with the students' consent for later analysis.
As for data analysis, this study, which adopted the direct approach (Nykiel, 2007), garnered two types of data: quantitative data from the questionnaires and qualitative data from the interviews. The former was processed by the software SPSS (version 22) in terms of descriptive statistics (Mean: M; Standard deviation: SD). The interval scale for the five-point Likert scale was interpreted as 1.00-1.80: Strongly disagree; 1.81-2.60: Disagree; 2.61-3.40: Neutral; 3.41-4.20: Agree; 4.21-5.00: Strongly agree (Kan, 2009). The latter was analysed thematically. The codes S1, S2, ..., S15 were assigned to the interviewees. All the interviews were transcribed and translated into English. Based on the purpose of the study, key concepts and themes were generated from reading and re-reading the transcripts. The findings were sent back to the interviewees for a content check, and an intra-rating approach was carried out to double-check both the quantitative and qualitative data analysis.
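A small sketch of how a mean score is mapped onto this interval scale follows; the mapping simply encodes the cut-offs above, and the example value matches the overall attitude mean reported in the results below.

```python
def interpret_likert_mean(mean: float) -> str:
    """Map a five-point Likert mean onto the interval scale used in the study
    (1.00-1.80 Strongly disagree ... 4.21-5.00 Strongly agree, after Kan, 2009)."""
    bands = [(1.80, "Strongly disagree"), (2.60, "Disagree"),
             (3.40, "Neutral"), (4.20, "Agree"), (5.00, "Strongly agree")]
    for upper, label in bands:
        if mean <= upper:
            return label
    raise ValueError("mean must be between 1.00 and 5.00")

print(interpret_likert_mean(3.66))  # "Agree", e.g. the overall attitude mean
```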
(a) EFL Students' Attitudes Towards the Use of Edtech Apps in ELL
The results in Table 1 show that the total mean score of EFL students' attitudes towards the use of Edtech Apps in ELL was rather high (M=3.66; SD=.64). That is, students had positive attitudes towards the use of Edtech Apps in ELL. In detail, students' cognitive, affective, and behavioral attitudes were all comparatively high.
EFL Students' Affective Attitudes Towards the Use of Edtech Apps in ELL
Table 3 shows that EFL students agreed that they enjoyed learning English with Edtech Apps because the apps were "convenient" (item A2: M=3.91, SD=.81) and "easy to use" (item A1: M=3.84; SD=.84).Additionally, students felt "more relaxed to engage in classroom activities when teachers [used] Edtech Apps" (item A3: M=3.87; SD=.80), "more confident doing tests with Edtech Apps" (item A6: M=3.68; SD=.96), and "confident in learning English with Edtech Apps" (item A4: M=3.61; SD=.94).They also reported that "using Edtech Apps to test [their] English language [was] less stressful" (item A5: M=3.84; SD=.91).It was clear from the qualitative data derived from the interviews that students had favorable affective responses regarding Edtech Apps.They acknowledged the appeal, fun, and interactivity of Edtech Apps.The following are some comments: …Edtech Apps will create a game to help learn English more conveniently, thereby creating excitement for learners to make learners remember for a long time…(S13) …Edtech Apps make my English learning interesting, enjoyable, and less stressful… (S1) …When using Edtech Apps online, I feel more comfortable, excited and active…(S8) However, there are also some difficulties which students commented: ….Bad network connectivity sometimes makes my study interrupt…(S12) ….There are a few physical interactions between students and peers as well as teachers.I can be easily distracted by games online or other social networks when learning online…(S7)
EFL Students' Behavioral Attitudes Towards the Use of Edtech Apps in ELL
As seen in Table 4, EFL students agree that they would like to "take part in games which teachers create with Edtech Apps" (item B5: M=3.80; SD=.89), "continue learning English with Edtech Apps" (item B1: M=3.71; SD=.84), and "interact with [their] classmates more via Edtech Apps" (item B2: M=3.55; SD=.91), and "introduce Edtech Apps to [their] friends" (item B3: M=3.57; SD= .89).Furthermore, they would like "[their] teacher to use more Edtech Apps in the class" (item B4: M=3.70; SD=.82).The quantitative findings supported qualitative ones.The interviewed students shared their behavioral attitudes towards the use of Edtech Apps in ELL.They said: …I would like to continue learning English with Edtech Apps in the future …(S11) …I want my teacher to use more Edtech Apps in the class and I can take part in games to improve my English skills… (S6)
(b) EFL Students' Use of Edtech Apps in ELL
Table 5 indicates that the total mean score of high school students' use of Edtech Apps is 3.62 (SD=.67) out of five. Specifically, the mean scores of the three components are 3.74 (SD=.82) for Ease of Use, 3.58 (SD=.74) for Motivation, and 3.58 (SD=.69) for Usefulness. This can be interpreted to mean that high school students believed that Edtech Apps played an important role in ELL since they were easy to use, motivating, and useful.
EFL Students' Use of Edtech Apps in ELL in Terms of Usefulness
As can be seen from Table 6, participants reckoned that they could practice their English "freely" (item PU9: M=3.89; SD=.95) and "autonomously" (item PU10: M=3.75; SD=.94) by using Edtech Apps, and that "[their] English learning outcomes [were] improved after [they used] Edtech Apps" (item PU1: M=3.42; SD=.78). Moreover, they concurred that Edtech Apps helped them to "finish [their] assignments quickly" (item PU4: M=3.70; SD=.98), made their learning "meaningful" (item PU7: M=3.64; SD=.91) and "more flexible" (item PU8: M=3.68; SD=.88), and enhanced their "English knowledge" (item PU2: M=3.58; SD=.88) and "English skills" (item PU3: M=3.70; SD=.81). Nevertheless, they were unsure whether Edtech Apps helped to expand social interactions with their "classmates" (item PU6: M=3.31; SD=.82) and "teachers" (item PU5: M=3.34; SD=.71). Regarding the qualitative findings, all interviewees mentioned that Edtech Apps were really useful in learning English. They shared as follows:
…Nearpod helps me practice all the skills, and when I submit the assignment the teacher can cover it… (S1)
…There is nothing better than being able to learn English with Edtech Apps in a contemporary, efficient, and correct manner. They are quicker and more affordable, and I can learn at my own pace without having to pay for English storybooks that are already available on the app… (S5)
…Quizizz benefits a lot of things, from helping teachers to engage students to compete with each other, to helping students understand more before or after they finish a lesson through a friendly quiz game… (S3)
…They are practical apps that support students in their online learning and can simply raise the standard of upcoming courses. On Quizizz, multiple choice questions and flashcards assist students in familiarizing themselves with material, retaining it, and testing their factual knowledge. With Padlet I can create an online post-it board of ideas that I can share with any student or teacher I want. Furthermore, all my reports and essays are saved immediately by Padlet so that I can easily open and review them whenever I want… (S9)
EFL Students' Use of Edtech Apps in ELL in Terms of Ease of Use
With regard to ease of use, the results in Table 7 indicate that the participants agreed that they found Edtech Apps easy to use (item PE1: M=3.84; SD=.99), as Edtech Apps were easy to "download" (item PE2: M=3.86; SD=.94) and "install on many technological devices" (item PE3: M=3.80; SD=.94). Additionally, participants shared that they "[could] use Edtech Apps to test [their] English language level easily" (item PE5: M=3.73; SD=.87) and "[did not] get any difficulty in using Edtech Apps" (item PE4: M=3.43; SD=.86). Furthermore, most interviewees shared that Edtech Apps were easy to use in many ways. They stated:
…I think Edtech Apps are easy to join, have an easy-to-use interface, and have no fees. I can get my own results… (S2)
…As online examinations are frequently assessed and recorded automatically, Edtech Apps can also significantly speed up grading and data collection. I can review my responses immediately rather than having to wait for a teacher to grade each one… (S9)
EFL Students' Use of Edtech Apps in ELL in Terms of Motivation
The results in Table 8 reveal that EFL students believed that Edtech Apps "[enabled them] to practice better by playing games" (item PM3: M=3.88; SD=.88) and "[motivated them] to learn English because they [were] enjoyable" (item PM1: M=3.48; SD=.92). They also concurred that they "[felt] great after using Edtech Apps because they [provided] many forms of non-judgemental feedback" (item PM5: M=3.66; SD=.90), "[were] interested in learning English" (item PM2: M=3.49; SD=.85), and "[could] work harder whenever [they used] Edtech Apps to study" (item PM4: M=3.39; SD=.87). With respect to the qualitative findings, most interviewees expressed that Edtech Apps motivated them to learn English very much. They mentioned:
…Quizizz, Padlet, and Nearpod look very eye-catching with their color schemes, and the funk or EDM music in the background creates a comfortable atmosphere and makes the lessons more intriguing for us students… (S3)
…My teachers can inspire children to be creative by using Padlet. By posting them on the Padlet wall, we also have the ability to share essays or reports with other classmates. Additionally, Quizizz enables me to learn from nearly anywhere, at any time. A good number of the materials on the site come from thousands of teachers around the globe and can be creatively applied to any subject or grade level… (S9)
…When using Edtech Apps, I am more active in learning and acquiring additional knowledge from outside. Edtech Apps help me increase my motivation… (S7)
…I feel that using Edtech Apps is effective because they motivate me to learn English; they entertain me and help me to broaden my English knowledge… (S10)
B. Discussion
The findings of the study showed that, in general, students had positive attitudes towards the use of Edtech Apps (M=3.66; SD=.64). Students gained an understanding of the benefits of Edtech Apps such as Quizizz, Padlet, and Nearpod, which resulted in an increase in their preference for using Edtech Apps in English language learning. Among the three components of attitudes, students expressed their most positive affective attitudes towards Edtech Apps (M=3.79; SD=.74). Students in the study reported that learning English lessons through Edtech Apps made the lessons more interesting and attractive. With respect to cognitive attitudes, students were also found to have positive cognitive attitudes towards Edtech Apps (M=3.53; SD=.68). This finding may imply that students were aware of the importance of Edtech Apps and believed that technology could support them in enhancing their English language skills. Students shared that learning English with Edtech Apps helped them to enrich their vocabulary and improve their grammar and reading skills. Similarly, their behavioral attitudes towards Edtech Apps were positive. These findings are aligned with the studies conducted by Monerah (2014), Kalanzadeh et al. (2014), Izadpanah and Alavi (2016), and Tran and Duong (2021). In terms of behavioral attitudes, students felt Edtech Apps were useful for their learning, so they intended to continue using them in the future (M=3.67; SD=.72). This finding is supported by Kara (2009), who stated that positive attitudes can lead to positive behaviors, which can make students more eager to engage in the learning process.
Another major finding is that the participants strongly believed that using Edtech Apps in ELL was useful, easy, and motivating. There are several possible reasons for this finding. Firstly, the participants possessed technology-based devices (e.g., smartphones, tablets, iPads) for different purposes, so they found using technology in ELL convenient. Secondly, students had been using Edtech Apps for some years because of the lockdown periods during the Covid-19 pandemic, so they had clearly become accustomed to using such apps in ELL. Thirdly, Nearpod, Quizizz, and Padlet were available and free for students; therefore, students could use them anytime and anywhere to support their learning process. Regarding perceptions of the usefulness of Edtech Apps, participants believed that Edtech Apps could be used to improve all four language skills (speaking, writing, listening, and reading) as well as pronunciation and grammar. This finding is partially supported by previous research carried out by Hani (2021), Nguyen and Nguyen (2021), and Srisakonwat (2022). As for motivation in using Edtech Apps, students positively stated that Edtech Apps motivated them and that they felt confident using Edtech Apps in ELL. This finding may result from the participants' perceptions of Edtech Apps in terms of usefulness and ease of use, which could link to the perception of motivation in using Edtech Apps in ELL.
V. CONCLUSION
The results of the study have provided a better understanding of how EFL students think about the use of Edtech Apps in ELL. It was found that EFL students in this study had positive attitudes towards the use of Edtech Apps as they realized the benefits of using Edtech Apps in ELL. Additionally, EFL students believed that Edtech Apps were useful, easy to use, and motivating, as they realized that Edtech Apps helped them to improve their English language skills and sub-skills (pronunciation, grammar, and vocabulary) and helped them to feel engaged and motivated in ELL. Based on these results, several pedagogical implications are recommended. Firstly, as Edtech Apps are seen to be effective and motivating, EFL teachers should be trained in how to use Edtech Apps in English language teaching appropriately and effectively. Teachers should help their students and parents to fully understand the usefulness and effectiveness of English learning through Edtech Apps so that parents can support their children in using Edtech Apps in ELL. Besides, teachers should instruct students on how to use Edtech Apps in ELL effectively, and they should check students' use of Edtech Apps regularly so that they can give further instruction and feedback. Secondly, EFL students should take responsibility for their use of Edtech Apps under teachers' and parents' supervision, as they can be easily distracted by social media and online games. Moreover, students should be introduced to useful and reliable websites and internet resources for Edtech Apps so that they can select suitable resources by themselves. Finally, administrators should consider equipping schools with an internet system as well as technology-supported devices (e.g., LCD TVs, laptops, iPads) so that teachers and students can embed the use of Edtech Apps in their English language teaching and learning. Apart from that, administrators should adopt appropriate incentive policies to encourage teachers to apply Edtech Apps in their teaching.
This study has some limitations. The first derives from the research design, a survey using a questionnaire and interviews, so the findings may not reflect the full phenomenon of EFL students' use of Edtech Apps. The second is the small sample size, so the findings may not be generalizable to other contexts. Therefore, future studies should consider employing a transformative design in collecting data to examine the effectiveness of Edtech Apps in ELL. Another study should be conducted with a bigger sample size so that the findings can be generalized to other contexts. In addition, further research on learner autonomy in the use of Edtech Apps should be conducted.
TABLE 2 EFL STUDENTS' COGNITIVE ATTITUDES TOWARDS THE USE OF EDTECH APPS IN ELL
Regarding the qualitative data, all interviewees expressed agreement on the positive impact of Edtech Apps on ELL. Some notable examples are:
…I think using Edtech Apps is effective for my learning because I can learn more vocabulary and review a lot of knowledge. Also, they create enjoyment and encourage learning… (S2)
…Edtech Apps are really effective for improving my English skills. My English is getting better day by day thanks to Edtech Apps… (S9)
…I know Edtech Apps are useful for English grammar and vocabulary… (S13)
…I can correct my grammar mistakes easily by doing English grammar exercises and tests online designed by my teacher… (S10)
TABLE 4 EFL STUDENTS' BEHAVIORAL ATTITUDES TOWARDS THE USE OF EDTECH APPS IN ELL
TABLE 5 EFL STUDENTS' USE OF EDTECH APPS IN ELL
TABLE 6 EFL STUDENTS' USE OF EDTECH APPS IN ELL IN TERMS OF USEFULNESS
TABLE 7 EFL STUDENTS' USE OF EDTECH APPS IN TERMS OF EASE OF USE
TABLE 8 EFL STUDENTS' USE OF EDTECH APPS IN TERMS OF MOTIVATION
Research on Bertrand Dual-Oligopoly Dynamic Game Model
In this article, we study the equilibrium solution and the dynamic adjustment process of the Bertrand price competition model under the condition of two manufacturers. We give the model of the dynamic adjustment, prove that whatever initial price the manufacturers choose the equilibrium solution exists and is unique, prove that each manufacturer's price adjustment process must be monotonic, and thereby develop the corresponding conclusions of the Bertrand model.
If, for the strategy profile $\sigma^{*}=(\sigma_1^{*},\ldots,\sigma_n^{*})$, the condition $u_i(\sigma_i^{*},\sigma_{-i}^{*}) \ge u_i(\sigma_i,\sigma_{-i}^{*})$ holds for every strategy $\sigma_i$ of each player $i$ ($i=1,2,\ldots,n$), then the mixed strategy profile $\sigma^{*}$ of the complete-information static game is called a Nash equilibrium of the game. The introduction of the Nash equilibrium offers a basis for determining the equilibrium of an oligopoly market: at the Nash equilibrium, each manufacturer takes its competitors into account and supposes that its competitors do the same. In 1883, the French economist Joseph Bertrand established the Bertrand model to study price competition, and its solution is a Nash equilibrium in prices; afterwards, the model was widely used to explain and analyze phenomena and problems of price competition, and analyses and studies of dynamic price-competition games are numerous. Reference 1 gives the different oligopoly optimal pure strategies and their respective optimal profits under conditions of different cost advantages. Reference 2 introduces the oligopoly dynamic game problem under information asymmetry. Many other works, from Reference 3 to Reference 6, analyze different problems from different perspectives, but these analyses do not display the equilibrium solution and the dynamic adjustment process of oligopoly competition. In Reference 7, Professor Tang Xiaowo utilizes the Cournot model to study the dynamic competition problem with two oligopoly manufacturers, but that model is based on output competition, whereas in practice many dynamic competition problems concern price.
The introduction of the Bertrand dual-oligopoly manufacturer model
We consider the Bertrand dual-oligopoly model and suppose that two oligopoly manufacturers produce the same kind of product with different brands, qualities, and packaging. Manufacturer A and manufacturer B respectively select prices $p_A$ and $p_B$, and the demand faced by manufacturer $i$ is $q_i(p_i, p_j) = a - p_i + b\,p_j$ ($i, j \in \{A, B\},\ i \neq j$), where $b$ ($0 < b < 2$) reflects the degree to which demand for manufacturer $i$'s product responds to manufacturer $j$'s price (here we suppose that the cross-price coefficients of the two competing products are the same). We take the marginal cost to be a constant $c$ ($c < a$) and do not consider fixed production costs. The two manufacturers select their prices at the same time, so manufacturer $i$'s profit function is $\pi_i(p_i, p_j) = (p_i - c)(a - p_i + b\,p_j)$, and from the first-order condition $\partial \pi_i / \partial p_i = 0$ we obtain the best-response function $p_i = \frac{a + c + b\,p_j}{2}$ (Jean Tirole, 1997).
When manufacturer A first enters the market, he sets the product price $p_A(1)$, and manufacturer B sets his optimal price $p_B(1) = \frac{a + c + b\,p_A(1)}{2}$ according to the demand function and the condition of profit maximization. In the same way, if manufacturer A adjusts his price to $p_A(n)$, manufacturer B will correspondingly adjust his price to $p_B(n) = \frac{a + c + b\,p_A(n)}{2}$. Conversely, if manufacturer B adjusts his price to $p_B(n)$, manufacturer A will correspondingly adjust his price to $p_A(n+1) = \frac{a + c + b\,p_B(n)}{2}$. Therefore, for a one-shot game, according to the hypotheses of the model, manufacturer $j$'s optimal price response to manufacturer $i$'s price $p_i$ is $p_j(p_i) = \frac{a + c + b\,p_i}{2}$. But in fact the game process of the two oligopoly enterprises is dynamic and multistage, and we need to examine the equilibrium solution and the dynamic adjustment process when the first manufacturer's initial price is chosen arbitrarily.
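For completeness, and under the linear demand specification assumed in the reconstruction above, the one-shot (static) Nash equilibrium follows from solving the two best-response conditions simultaneously:
$$p_A = \frac{a + c + b\,p_B}{2}, \qquad p_B = \frac{a + c + b\,p_A}{2} \;\;\Longrightarrow\;\; p_A^{*} = p_B^{*} = \frac{a+c}{2-b} \quad (0 < b < 2).$$
The dynamic analysis below shows that the price adjustment process converges to exactly this value.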
The dynamic process and equilibrium of Bertrand price competition
Suppose that the price at which the first manufacturer (manufacturer A) enters the market is $p_A(1)$, and the price at which the second manufacturer (manufacturer B) enters the market is $p_B(1) = \frac{a + c + b\,p_A(1)}{2}$. After that, according to manufacturer B's price, manufacturer A adjusts his optimal price to $p_A(2) = \frac{a + c + b\,p_B(1)}{2}$. After $n$ rounds of adjustment, the two manufacturers' prices are, respectively,
$$p_B(n) = \frac{a + c + b\,p_A(n)}{2}, \qquad p_A(n+1) = \frac{a + c + b\,p_B(n)}{2}.$$
Writing the price pair in vector form $P(n) = \big(p_A(n),\, p_B(n)\big)^{\mathrm T}$ and eliminating $p_B(n)$, it is easy to prove that
$$p_A(n+1) = \frac{(a+c)(2+b)}{4} + \frac{b^2}{4}\,p_A(n).$$
If the dual-oligopoly game is repeated infinitely many times, i.e., $n \to \infty$, then, since $0 < b^2/4 < 1$, the limits of $p_A(n)$ and $p_B(n)$ determined by the above equations exist; hence the equilibrium solution exists, and the limit
$$p^{*} = \lim_{n \to \infty} p_A(n) = \lim_{n \to \infty} p_B(n) = \frac{a+c}{2-b}$$
is the equilibrium price.
This indicates that whatever the price at which the first manufacturer enters the market, the equilibrium solution of the price model exists and is unique, and the final adjusted price is the equilibrium price.
We now continue to analyze the dynamic change of the two manufacturers' prices.
When $p_A(1) < p^{*}$, we can get from the above two equations that $p_A(n+1) - p^{*} = \frac{b^2}{4}\big(p_A(n) - p^{*}\big)$, so $p_A(n) < p^{*}$ for every $n$ and $p_A(n+1) > p_A(n)$. Therefore, when the price at which the first manufacturer enters the market is smaller than the equilibrium price, the two manufacturers' prices strictly monotonically increase towards the equilibrium price.
In the same way, when $p_A(1) > p^{*}$, i.e., when the price at which the first manufacturer enters the market is larger than the equilibrium price, the two manufacturers' prices strictly monotonically decrease towards the equilibrium price.
When $p_A(1) = p^{*}$, the price at which the first manufacturer enters the market is exactly the equilibrium price; the two manufacturers' prices remain unchanged and are kept at the level of the equilibrium price. In other words, the two manufacturers' prices show no dynamic change, and the equilibrium status is achieved at the very start.
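To make the adjustment dynamics concrete, the following short Python sketch iterates the best-response recursion for initial prices below, above, and equal to the equilibrium price; the parameter values a, b, c are illustrative choices, not taken from the article.

def simulate(p_a1, a=10.0, b=1.0, c=2.0, rounds=20):
    # Iterate p_B(n) = (a + c + b*p_A(n))/2 and p_A(n+1) = (a + c + b*p_B(n))/2.
    p_a, path = p_a1, [p_a1]
    for _ in range(rounds):
        p_b = (a + c + b * p_a) / 2.0   # B's best response to A's current price
        p_a = (a + c + b * p_b) / 2.0   # A's best response to B's new price
        path.append(p_a)
    return path

a, b, c = 10.0, 1.0, 2.0
p_star = (a + c) / (2.0 - b)            # equilibrium price (a + c)/(2 - b) = 12
for p0 in (p_star - 6, p_star + 6, p_star):
    path = simulate(p0, a, b, c)
    diffs = [path[i + 1] - path[i] for i in range(len(path) - 1)]
    # The sequence is monotonic and converges to p_star, as proved above.
    assert all(d >= 0 for d in diffs) or all(d <= 0 for d in diffs)
    print(f"start {p0:5.1f} -> after {len(path) - 1} rounds {path[-1]:.6f} (p* = {p_star})")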
Conclusions
Through the analysis of the equilibrium solution and the dynamic changes of the price sequences in the Bertrand price competition model, we can draw the following conclusions.
(1) Under the dual-manufacturer condition, however the first manufacturer initially selects its price, the equilibrium solution of the Bertrand model exists and is unique.
(2) When the first manufacturer's initial price deviates from the equilibrium price, the equilibration process of the two manufacturers' prices is an infinite process; the price adjustment sequence is a strictly monotonic sequence, and the two manufacturers' price adjustments move in the same direction.
(3) When the first manufacturer's initial price equals the equilibrium price, the prices form the equilibrium of the game from the outset: the two manufacturers obtain their profits from the profit equations, and the dynamic adjustment model of dual-manufacturer price competition achieves the equilibrium at the very start.
Finally, in this article we give the dynamic price-adjustment model for two manufacturers under a fixed demand function; many problems, such as changes in manufacturers' profits and price dynamics under the condition of multiple manufacturers, remain to be studied further.
Multilocus genotyping of Giardia duodenalis in captive non-human primates in Sichuan and Guizhou provinces, Southwestern China
Giardia duodenalis is a common human and animal pathogen. It has been increasingly reported in wild and captive non-human primates (NHPs) in recent years. However, multilocus genotyping information for G. duodenalis infecting NHPs in southwestern China is limited. In the present study, the prevalence and multilocus genotypes (MLGs) of G. duodenalis in captive NHPs in southwestern China were determined. We examined 207 fecal samples from NHPs in Sichuan and Guizhou provinces, and 16 specimens were positive for G. duodenalis. The overall infection rate was 7.7%, and only assemblage B was identified. G. duodenalis was detected in northern white-cheeked gibbons (14/36, 38.9%), crab-eating macaques (1/60, 1.7%) and rhesus macaques (1/101, 0.9%). Multilocus sequence typing based on beta-giardin (bg), triose phosphate isomerase (tpi) and glutamate dehydrogenase (gdh) revealed nine different assemblage B MLGs (five known genotypes and four novel genotypes). Based on a phylogenetic analysis, one potentially zoonotic genotype, MLG SW7, was identified in a northern white-cheeked gibbon. A high degree of genetic diversity within assemblage B was observed in captive northern white-cheeked gibbons in southwestern China, including the potentially zoonotic genotype MLG SW7. To the best of our knowledge, this is the first report using an MLG approach to identify G. duodenalis in captive NHPs in southwestern China.
Introduction
Giardia duodenalis is the etiological agent of giardiasis, a gastrointestinal infection that is typically asymptomatic but may be severe in some individuals [1-3]. At present, there are eight distinct assemblages of G. duodenalis (A-H): assemblages A and B frequently infect humans and animals; assemblages C and D have been described in domestic and wild canines; assemblage E has been widely reported in ruminants but sporadically detected in NHPs and humans; assemblage F occurs in cats, assemblage G in rodents, and assemblage H in seals and gulls [4]. Assemblages A and B are considered zoonotic genotypes. In addition to humans, they are widely reported in non-human primates (NHPs) [4-6].
NHPs are valuable wildlife resources. Owing to their high genetic homology with humans, NHPs are important experimental models for clinical and public health research. G. duodenalis has a monoxenous life cycle and can spread rapidly among captive NHPs [7]. The genetic polymorphism of G. duodenalis has been widely investigated in NHPs. Assemblages A, B and E are found in NHPs, and assemblage B is dominant [5,6]. Molecular analyses have revealed that assemblage A can be further classified into three major subtypes (AI-AIII), whereas assemblage B includes many subtypes that have not been systematically categorized [4,5].
However, little is known about genetic variation in G. duodenalis infecting NHPs based on multi-locus genotyping. Molecular analyses to date have typically focused on a single genetic locus [4,8,9]. Inconsistent genotyping results have sometimes been observed among different individual loci [4,10]. To better understand the genetic heterogeneity and zoonotic potential of G. duodenalis, multi-locus genotyping (MLG) employing beta-giardin (bg), triose phosphate isomerase (tpi) and glutamate dehydrogenase (gdh) has been used for genotyping and subtyping G. duodenalis in humans and animals [11][12][13]. The aim of the present study was to characterize G. duodenalis in captive NHPs in Southwestern China. These findings improve our understanding of the genetic diversity and the transmission routes of G. duodenalis in NHPs.
Ethics statement
This study was reviewed and approved by the Institutional Animal Care and Use Committee of Sichuan Agricultural University under permit number DYY-S20156703. Prior to the collection of fecal specimens from NHPs, permission was obtained from owners.
Specimen collection
From March to May 2016 and September to November 2016, 207 fecal specimens from NHPs were collected from Sichuan and Guizhou provinces. Fresh fecal specimens were collected immediately after defecation on the ground and separately stored in 50-mL centrifuge tubes. The specimens were kept cool during transport and arrival at the Sichuan Agricultural University. Specifically, 101 samples were obtained from rhesus macaques from the National Experimental Macaque Reproduce Laboratory in Southwest China (n = 31), Chengdu Gaoxin rhesus macaque farm (n = 30), Chengdu zoo (n = 20) and Bifengxia zoo (n = 20). Thirty-six samples were from northern white-cheeked gibbons from zoos in Guiyang (n = 30), Chengdu (n = 2) and Bifengxia (n = 4). Nine samples were from Golden snub-nosed monkeys in the Chengdu zoo. Sixty samples were from crab-eating macaques in the National Experimental Macaque Reproduce Laboratory in Southwest China and the Chengdu Gaoxin rhesus macaque farm (Table 1). Samples were preserved in 2.5% potassium dichromate at 4˚C in a refrigerator. All samples were processed within 24 h of collection.
DNA extraction and PCR amplification
Before DNA extraction, the fecal samples were washed with distilled water until the potassium dichromate was removed. Genomic DNA was extracted using the PowerSoil DNA Isolation Kit (MoBio, Carlsbad, CA, USA) following the manufacturer's instructions. DNA samples were stored in 100 μL of the kit's Solution Buffer at -20 °C until use.
Each specimen was examined for G. duodenalis by nested PCR amplification of the beta-giardin (bg) gene [14]. The bg-positive specimens were further characterized by PCR amplification of the tpi and gdh genes [11]. Secondary PCR products were visualized by staining with Golden View following 1% agarose gel electrophoresis.
Sequencing and phylogenetic analysis
The amplified products of the expected size were sequenced by Invitrogen (Shanghai, China). To determine the G. duodenalis assemblage, the sequences were aligned using ClustalX with sequences downloaded from the GenBank database based on a BLAST analysis (http://blast.ncbi.nlm.nih.gov). For the phylogenetic analysis, the sequences obtained in this study were used to construct a neighbor-joining tree in MEGA 5 (http://www.megasoftware.net/). A total of 1000 replicates were used for the bootstrap analysis.
Statistical analysis
Differences in infection rates among NHPs and among animals in different areas were assessed using the chi-square test implemented in SPSS version 17.0 (SPSS Inc., Chicago, IL, USA). P < 0.05 was considered significant.
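As a minimal sketch of how this comparison could be reproduced outside SPSS, the following Python snippet runs the same chi-square test on the per-species positive/negative counts reported in the Results section; it is an illustration, not the authors' original analysis script.

from scipy.stats import chi2_contingency

# Positive/negative counts per species, taken from the Results section below:
# rhesus macaque 1/101, crab-eating macaque 1/60, northern white-cheeked gibbon 14/36,
# golden snub-nosed monkey 0/9.
counts = [
    [1, 100],
    [1, 59],
    [14, 22],
    [0, 9],
]
chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4g}")  # p < 0.05 indicates a significant difference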
Results and discussion
In the bg-based PCR analysis of 207 specimens from 4 NHP species, 16 (7.7%) samples from 3 species were positive for G. duodenalis. All the positive specimens were successfully amplified and sequenced for the bg, tpi and gdh genes. Sequences were deposited in the GenBank database under the accession numbers KY696790-KY696821.
The infection rates ranged from 0% to 38.9% in the 4 species (Table 1). Specifically, 1 of 101 (0.9%) rhesus macaques and 1 of 60 (1.7%) crab-eating macaques were positive for G. duodenalis. Northern white-cheeked gibbons showed the highest infection rate (14/36, 38.9%). All golden snub-nosed monkeys (n = 9) were negative for G. duodenalis. The difference in infection rates among 4 species was significant (P<0.05). In China, six studies have examined G. duodenalis infection in NHPs in parks, zoos, farms and laboratories to date, and the overall infection rate was between 1.3% and 18.6% in these studies [7,[15][16][17][18][19]. The overall infection rate in our study (7.7%) was close to the total infection rate in Qianling Park in Guiyang (8.5%) [17], and was much lower than the total prevalence in zoos in China (18.6%) [15]. It was obviously higher than those reported in Guangxi (2.4%) [19], Qinling Mountain (2.0%) [16] and two other additional comprehensive parasite infection studies in China (2.2% and 1.3%) [7,18]. Our results and those of previous studies indicate that G. duodenalis infection is common in wild and captive NHPs and has a wide geographic distribution in China. In other countries, G. duodenalis infection in NHPs showed a similar trend to that observed in China. The overall infection rate of G. duodenalis in NHPs is between 2.2% and 47.0% [7,20], indicating a wide range of infection rates. The prevalence in our present study was close to previous estimates in Italy (6.0%) [20] and Thailand (7.0%) [6], and it was lower than the infection rates reported in Uganda (11.1%) [21] and Croatia (50%) [22]. This result may be explained by differences among regions in climate, environmental management, NHPs species and animal exchange programs [5,8,20].
In this study, the infection rate for captive NHPs in Sichuan province was 0.6% (1/106), which is almost identical to that in a comprehensive parasite study performed in 2009-2015 in Sichuan (0.5%, 3/581) [18]. The infection rate in captive NHPs in Guizhou province was 30% (15/50), much higher than that of free-range NHPs in Guiyang (8.5%, 35/411) [17]. Additionally, 38.9% (14/36) of northern white-cheeked gibbons were positive for G. duodenalis, which was also higher than the infection rate in a previous study (14.3%, 2/14) [15]. These results suggest that captive northern white-cheeked gibbons are more prone to infection by G. duodenalis than wild animals. This might be explained by the single-host life cycle and the resilient infectious cysts of G. duodenalis [23]. Captive northern white-cheeked gibbons are closer to each other than free-range NHPs, and confined spaces result in the transmission of infectious cysts between NHPs. The high transmission between captive NHPs is consistent with those of previous studies in China [7,15].
To date, assemblages A, B and E have been detected in NHP species in China. Assemblages A and B have both been found in captive and free-range NHPs, but assemblage E has only been found in captive NHPs [16]. In this study, only assemblage B was detected in the 3 captive NHP species, consistent with a recent study in zoos in China [7]. In these previous studies, all specimens were obtained from captive NHPs housed in zoos, farms or bases. The resilient infectious cysts of G. duodenalis may explain the low diversity of assemblages detected [4], and the results suggest that assemblage B is predominant in Sichuan province. Assemblage B was identified in rhesus macaques, northern white-cheeked gibbons and crab-eating macaques. According to a previous study in 2009-2015 [15], assemblage B was only identified in rhesus macaques and northern white-cheeked gibbons, whereas assemblages A and B were both identified in crab-eating macaques. This result may suggest that northern white-cheeked gibbons are more susceptible to assemblage B than to other assemblages, and that assemblage A might be host-specific, infecting only a few NHP species.
A total of 16 NHPs specimens (one from a rhesus macaque, one from a crab-eating macaque, and fourteen from northern white-cheeked gibbons) were classified as assemblage B and nine MLGs were identified among the 16 positive specimens. The subtype identities and geographical and host distributions of the nine MLGs are listed in Table 2. A phylogenetic analysis of the concatenated sequences of assemblage B revealed that nine MLGs in this study formed five clusters. MLG SW7, SW9 and SW1 were distributed in three separate clusters. SW 2, 3 and 6 isolated from northern white-cheeked gibbons formed cluster 3. Cluster 5 included SW4, 5 and 8, all of which were isolated from northern white-cheeked gibbons (Fig 1).
Phylogenetic analyses showed that MLG SW7 belonged to a zoonotic group. Given the zoonotic potential of this subtype, epidemiological and source tracking investigations as well as strict surveillance in captive NHPs in southwestern China are needed. MLG SW9 was closely related to the sequences obtained from NHPs in other studies [11]. MLG SW1 was similar to the sequences isolated from chinchillas, suggesting the potential for transmission of G. duodenalis between animals [16]. Other MLGs formed two separate clusters. In this study, most MLGs (7 MLGs) were found in northern white-cheeked gibbons suggesting greater genetic heterogeneity in G. duodenalis from this species [15].
Conclusion
The results of the present study confirm previous findings that assemblage B is dominant in northern white-cheeked gibbons. We used an MLG approach for the first time to identify G. duodenalis in captive NHPs in southwestern China. One genotype of the potentially zoonotic assemblage B, the MLG SW7 strain, was identified in a northern white-cheeked gibbon, suggesting that zoonotic transmission of Giardia might occur between northern white-cheeked gibbons and humans. Additionally, a high degree of genetic diversity of assemblage B MLGs (7 MLGs) was detected in captive northern white-cheeked gibbons in southwestern China. Additional MLG studies of captive NHPs are needed to better characterize the genetic diversity and the routes of transmission of G. duodenalis between NHPs and humans or other animals.
|
2018-04-03T04:09:58.220Z
|
2017-09-14T00:00:00.000
|
{
"year": 2017,
"sha1": "697b604146a6b759ce33f1253fb67cc178557599",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0184913&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "697b604146a6b759ce33f1253fb67cc178557599",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
233786641
|
pes2o/s2orc
|
v3-fos-license
|
Energy management techniques and topologies suitable for hybrid energy storage system powered electric vehicles: An overview
An energy management system (EMS) in an electric vehicle (EV) is the system responsible for smooth energy transfer from the power drive to the wheels of the vehicle. During acceleration and deceleration periods, the batteries in an EV undergo high peak power consumption. As a result, the battery life cycle degrades, which subsequently reduces the drive range of the EV. Thus, hybridization of different energy resources becomes essential and is seen as one of the alternative solutions for the aforesaid issues. Further, hybridization along with efficient EM strategies helps to: (i) optimally utilize the energy storage systems during discharging and charging, (ii) improve performance, which in turn improves efficiency, (iii) extend the drive range, and (iv) reduce the battery size. Though many articles related to EM techniques for hybrid energy storage systems (HESS) have been reported in the literature so far, a comprehensive review covering the configurations of HESS, the various EM strategies used in EVs, and the performance evaluation of EM strategies for HESS configurations has not yet been published. Therefore, this paper intends to provide a comparative assessment of different types of HESS topologies and types of EM techniques. Performance indices based on battery peak current reduction and the amount of power stored back during regenerative braking are compared and discussed. Further, a comparative analysis has been made covering system design and voltage variations. Hence, this review may serve as a one-stop solution for researchers and engineers working in the field of EVs.
Reported results in the literature illustrate the gains from such strategies. In one study, NEDC drive-cycle models are used for reference and validation purposes, and the simulation results show that the proposed strategy provides a longer battery life and outperforms an existing pure battery-based EV in terms of RMS current reduction; in a related study, the output membership functions of a Takagi-Sugeno (T-S)-based FLC are optimized. Another FLC-based EM scheme is presented in Reference 64 for three-wheeled EVs using a fully active hybrid topology; in that article, an inner-loop control is proposed for intelligent power sharing between the UC and the battery, and the results suggest that UCs are able to smoothen and reduce the RMS current of the battery, with significant improvement in lifetime and a reduction in battery cost. An adaptive fuzzy logic controller for EM in a parallel active topology is proposed in Reference 67 and tested for various drive cycles through simulation studies. A non-isolated multi-input bidirectional DC-DC converter is used for the HESS EM problem solved by an FLC scheme in EVs, 65 and the results suggest a 55% improvement in battery lifetime. To outperform the rule-based EM strategies, a supervisory EM strategy is proposed in Reference 70 by formulating the power-sharing problem as a multiobjective optimization; this is solved by applying the DP method to various data sets of drive cycles, and the results obtained are used to train neural networks. This intelligent EM controller at the supervisory level improves the battery life by 60% and achieves higher efficiency. A SA method coupled with a dynamically restricted search space uses global optimization to solve the EM problem in a fully active HESS-based EV. 72
A supervisory architecture based on short-term and long-term management is detailed to accomplish the overall EM. A comparative study has also been done using PSO and SA for similar optimization problems. 46 Both the SA and PSO EM approaches obtain consistently good results, with a slight advantage for PSO as it carries a lower computational burden than SA. The outcome shows that PSO provides a good solution for online energy distribution between the two energy sources by reducing the power losses occurring in the battery and avoiding rapid battery usage. Reference 45 proposes a mathematical representation of an EM strategy for HESS in EVs using gamma functions: a gamma-based strategy (GBS) is first used to solve the EM problem, and a genetic algorithm is then used to optimally choose the value of gamma (γ) to improve the performance of the GBS. The parameters considered are the RMS current and peak current of the battery and the life-cycle cost of a HESS-powered EV, and the obtained results are compared with a rule-based strategy. The scope of this review thus covers (1) HESS topologies, (2) EM strategies such as rule-based and optimization-based techniques, and (3) the application of EM strategies to these HESS topologies. A detailed comparative study is presented to analyze the performance of EM strategies applied to these HESS topologies, and the different ways of hybridizing the battery and UC are also studied to understand their respective advantages and disadvantages.
List of Symbols and Abbreviations: Pmin, minimum battery power; Pdem, power demand; Pbatt, battery power; PUC, UC power; Jk(x(n)), optimized objective function; gk(x(n)), cost function; k, iteration or step index; Vuc,min, minimum UC voltage; Vuc,max, maximum UC voltage; Vuc,ref, UC reference voltage; Vs,max, maximum vehicle speed; Vuc, voltage across UC; Vbatt, battery voltage; Ibatt,rms, battery RMS current; Iuc,conv, UC converter current; Iuc,convmax, maximum UC converter current; Iuc,convmin, minimum UC converter current; Iuc,convref, UC converter current reference; H, Hamiltonian function; λ(t), costate variable; Ruc, effective resistance of UC; ω1, ω2, ω3, ω4, cost function weights; Cbatt, battery capacity; CUC, UC capacity; xi, particle position; w, weighting factor; vi, velocity of the particle; C1 & C2, constant variables; r1 & r2, random numbers; Pbest, local best value; Gbest, global best; E1 & E2, energy levels; Tc, control parameter; KB, Boltzmann constant; SoCmin,uc, minimum state of charge of UC; SoCmax,uc, maximum state of charge of UC; SoCmin,batt, minimum state of charge of battery; SoCmax,batt, maximum state of charge of battery; SoCref,uc, reference state of charge of UC; SoCref,batt, reference state of charge of battery; F, input set matrix; Gw, disturbance matrix; BEV, battery-powered electric vehicle; CBDC, central business drive cycle; DCI, driving cycle identification; DESS, dual energy storage systems; DP, dynamic programming; DPR, driving pattern recognition; EM, energy management; EMS, energy management system; ESS, energy storage system; ESR, equivalent series resistance; EV, electric vehicle; FC, fuel cell; FCEV, fuel cell electric vehicle; FTP, federal test procedure; FLC, fuzzy logic controller; GBS, gamma-based strategy; HEV, hybrid electric vehicle; HESS, hybrid energy storage system; HIL, hardware in loop; HWFET, highway fuel economy test; ICE, internal combustion engine; LVQ, learning vector quantization; MPC, model predictive control; NEDC, New European Driving Cycle; NN, neural network; NYCC, New York City Cycle; PEI, power electronic interface; PHEV, plug-in electric vehicle; PSO, particle swarm optimization; PSS, power split strategy; SoC, state of charge; SA, simulated annealing; SOH, state of health; UC, ultracapacitor; UDDS, urban dynamometer driving schedule; WLTP, worldwide harmonised light vehicle test procedure; WT, wavelet transform.
KEYWORDS
electric vehicles, energy management, hybrid energy storage systems
1 | INTRODUCTION
Engines driven by fossil fuels such as gasoline, petrol, diesel, etc., contribute 25% of the world's CO2 emissions. [1-4] Besides being hazardous, the fossil-fuel-fed internal combustion engine (ICE) exhibits a poor energy conversion efficiency of only about 20%. Against this background, research on EVs driven partly or fully by electric power has received considerable interest in recent years. With a formidable energy conversion efficiency of up to 60%, it is predicted that all passenger vehicles will become 100% zero-emission vehicles by the year 2050. 5 With favourable economic conditions prevailing, electric vehicle (EV) manufacturers roll out many variants that include plug-in hybrid electric vehicles (PHEVs), hybrid electric vehicles (HEVs), battery-powered electric vehicles (BEVs), fuel cell electric vehicles (FCEVs), and photovoltaic EVs. In recent times, the BEV appears to be a promising technology leading the way towards decarbonization of the environment, and despite a dawdling social acceptance, the BEV is expected to achieve large market penetration in the near future. 5 Having bright prospects to become the future mode of transport, its performance strongly relies on the energy storage systems (ESS) preferred, the control strategies adopted, and the energy management (EM) techniques applied.
Hence, the performance of different energy storage technologies, together with their formations, characteristics, and features concerning EV applications, has been assessed in References 6 and 7. It has been reported that, to improve fuel economy, dynamic stability, reliability, and efficient energy storage, the power converter in an EV plays a major role. Thus, a review of various power-electronics-based converters and their corresponding control strategies for BEVs is presented in Reference 8. Besides, a brief overview of various power converter topologies with different configurations adopted for FCEVs is detailed in Reference 9, where both single-stage and multistage power conversion arrangements for EVs are discussed. Recently, a synoptic review of various energy sources, energy generation systems, types of existing BEVs, and their EM strategies, together with the challenges faced and the latest available solutions, is presented in Reference 10. At the same time, different architectures and their operational characteristics suitable for HEVs and FCEVs are discussed in Reference 11, and appropriate storage technologies and their characteristics are detailed in References 12 and 13. In order to become the future mode of transport and to accomplish better performance and control, any BEV requires an energy management system (EMS). Its main function is to manage the energy flow from the ESS to the vehicle wheels depending on the requirement. Further, an efficient EMS can help extend the EV drive range and restrain the fast discharge that occurs either during starting or during sudden speed transitions. To sustain such transitions, an ESS coupled with an ultracapacitor (UC) is found to be the best alternative. 14,15 Moreover, such EVs require an efficient EMS to handle the issues related to hybridization of energy sources. Various forms of hybridization include combining a high-energy-density source (battery) with a high-power-density source (UC), or with a fuel cell (FC). 16 In the literature, the UC/battery combination is widely investigated since this combination supports: (a) high peak power consumption, (b) storage of excess energy while braking, and (c) extended battery lifetime. Meanwhile, many EM techniques have been applied to manage the power-split problem in the UC/battery combination. The most prominent methods are grouped into rule-based and optimization-based techniques. 17 These methods try to reduce the peak battery current, optimize the battery state of charge (SoC), and increase battery durability, thereby extending the drive range of an EV. Various architectures of HEVs and their corresponding EM strategies are reported in Reference 18. Studies on FC/ICE-based HEVs in References 19 and 20 highlight the importance of optimization algorithms in identifying the unknown parameters related to other EM strategies. In Reference 21, detailed battery/UC configurations are provided.
Though a few articles have reviewed research works related to EM techniques, they do not provide an in-depth review of the EM strategies applied for the sustainability of BEVs. Moreover, a comprehensive study of the configurations related to battery/UC HESS and a detailed review of the EM methods applied to them are yet to appear, and the limitations of the EM strategies applied to BEVs are not covered in the literature. Therefore, this article attempts to consolidate the works on different HESS configurations, such as passive parallel, semi-active, fully active, and series reconfigurable topologies, for battery/UC-powered EVs. Further, a detailed comparative analysis of the various topologies is provided. Furthermore, the implementation of EM strategies, such as rule-based and optimization-based approaches, applied to HESS at the supervisory level is detailed. In addition, a comprehensive comparison is presented of the EM strategies applied to fully active hybrid and semi-active HESS arrangements based on performance evaluations such as efficiency, lifetime, range extension, and regenerative energy recovered. Based on the performance evaluations, some limitations of the EM strategies have been identified, and a few suggestions and improvements have been recommended for the betterment of the battery system in HESS-powered EVs.
The remainder of this article is organized as follows: Section 2 briefs on the EV and its associated parts; Section 3 elaborates on the necessity of HESS and its classifications; Sections 4 and 5 describe the EM techniques and the performance improvements achieved so far; and Section 6 provides some limitations and modifications suggested to be implemented in the near future.
2 | SYSTEM DESCRIPTION
The schematic layout of an EV is shown in Figure 1; it consists of a battery pack, a power electronics interface (PEI), an electric motor, and an EMS. 22,23 Several characteristics that decide appropriate battery selection are battery capacity, nominal voltage, C-rating, maintenance, reliability, cost, and battery type. These batteries are charged either on-board or off-board depending on the type and availability. Moreover, the battery pack should have the capacity to deliver peak power via the PEI.
The PEI in an EV is responsible for: (a) EV propulsion, (b) battery charging, (c) energy recovery during regenerative braking, and (d) powering the on-board appliances. Depending on the electric motor employed, the PEI configuration varies. In the case of a two-stage AC motor drive, a DC-DC converter converts the battery voltage to the required high-voltage DC for propelling the motor drive, while the inverter-fed AC motor drive operates in two modes: propulsion and regenerative braking. During propulsion mode, power is transferred from the battery to the electric machine, and vice versa during regenerative braking. The reverse power flow is facilitated using a bidirectional DC-DC converter at the front end, whereas a DC motor drive operating in a similar manner is controlled using a front-end two-quadrant DC-DC converter.
The EMS unit, which remains at the top of the hierarchy, performs the following functions: it continuously monitors the battery status, generates control commands to the PEI for suitable control action, and sustains the battery charge for a longer distance. Further, the EMS also manages the power distribution from the battery to various components such as auxiliary power supplies, air conditioning, etc.
3 | NECESSITY OF HESS AND ITS CONFIGURATIONS
In order to enhance the ESS life cycle, limit surge discharge, and improve energy availability and system efficiency, it is customary to combine more than one energy storage device either in parallel or in series; this combination is called a hybrid energy storage system (HESS). Various HESS combinations are possible, such as battery/UC, UC/FC, battery/FC/UC, etc. 19 Among these, the battery-UC combination offers several advantages such as peak power reduction, drive range extension, reduced battery degradation, and improved lifetime and state of health (SoH). Meanwhile, this combination is appropriate for EV applications as it provides high power density as well as high energy density. Besides, batteries coupled with a UC can store energy at the time of braking and assist in smooth energy transfer. In addition, the UC can sustain the dynamic load profile of the vehicle and assist the vehicle over a longer drive range. Therefore, the battery-UC combination supports long-term EM and dynamic power regulation.
Evidently, the coupling between the HESS and the DC bus system is done through bidirectional DC-DC converters. Based on the HESS and converter arrangement, four types of topologies are popular: passive parallel, semi-active, fully active hybrid, and series-parallel reconfigurable HESS configurations. 21 The types of HESS and their possible configurations are depicted in Figures 2 and 3, respectively.
| Passive parallel
In this topology, the UC and battery are connected in parallel to the DC-AC converter, that is, the inverter, as shown in Figure 3A. The absence of an additional DC-DC converter makes this topology a lighter, smaller, and cheaper one. Additionally, fluctuations in the DC-link voltage are comparatively small since the battery clamps the DC bus voltage constant. As the UC voltage is maintained constant by the battery pack, the UC can be sized accordingly to match the characteristics of a low-pass filter. 24 Thus, large current peaks and low-voltage dips are avoided. Since the converter and the ESS are connected in parallel, the terminal voltage follows the discharge characteristics of the battery, limiting the UC voltage. Therefore, a wide output voltage variation cannot be achieved.
Further, as soon as the energy stored in the UC is supplied to the load, its voltage drops drastically. At this instant, the battery has to support both the UC and the load, leading to an additional burden on the battery. This topology resembles a simple RC circuit wherein the discharge and charge currents depend only on the battery/UC parameters. 25,26 Although this topology is light, simple, and cheap, it suffers from poor performance. 27
| Semi-active hybrid
The previous configuration does not exhibit an ideal constant current with reduced ripple, since drive cycles are highly volatile. Further, the high dynamic current drawn causes battery degradation. Therefore, a PEI is included between the energy storage devices to improve reliability. In most cases, a bidirectional converter connected either to the battery or to the UC is utilized to control the regenerative power, making the system robust. Further, it supports energy transfer into the UC during deceleration and acceleration periods. At the same time, controlled charging and discharging improves battery performance. Hence, semi-active hybrid topologies have been widely investigated in the recent literature. This topology is subdivided into four different types: (1) the battery-UC active type and (2) the UC-battery active type, in which the bidirectional DC-DC converter interfaces one of the two sources while the other remains directly connected to the DC link (these are detailed in the following subsections); (3) a cascaded connection, with a bidirectional DC-DC converter between the battery and the UC together with a diode; and (4) the same cascaded connection with a unidirectional DC-DC converter instead of a bidirectional one.
| Battery-UC active
In this type of configuration, a bidirectional DC/DC converter is placed between the UC bank and the battery pack, as depicted in Figure 3B. As a result, the battery pack is relieved from maintaining the DC bus voltage, allowing the battery pack to have a lower terminal voltage; a smaller battery voltage in turn translates into less weight and lower expense. Since the UC is connected directly to the DC link, it behaves like a low-pass filter. 28 Even though the UC acts as a low-pass filter, the whole range of the UC's voltage can be utilized as there is no direct clamping between the UC and the battery. In the regenerative mode, if the UC bank voltage is kept low with respect to the battery voltage, the energy flows back naturally into the UC, improving the system efficiency. 26 Moreover, the UC pack employed has a lower equivalent series resistance (ESR) than the battery pack, 27,29 and thus absorbs the majority of the current spikes occurring at the time of regenerative braking. Besides, the charge and discharge rates of the UC and battery packs can be controlled with the help of the DC-DC converter. 24 Controlling the rates at which the battery and UC are charged and discharged can significantly improve the system efficiency and battery lifetime. 30 However, the main difficulty that arises in this topology is the control of its power converter. 24,31
FIGURE 3 Hybrid energy storage system topologies: A, passive parallel; B, battery-UC active hybrid; C, UC-battery active; D, battery-UC hybrid topology 1 with diode; E, battery-UC hybrid topology 2 with diode; F, parallel active hybrid; G, series reconfigurable
| UC-battery active
In the other semi-active topology, the DC/DC converter interfaces the UC instead of the battery, as shown in Figure 3C. Decoupling the UC from the DC bus via the converter is helpful in applications where a smaller UC pack is preferred. 28 Another benefit of this topology is that controlling the DC-DC converter allows a large variation in the UC voltage. Likewise, directly interfacing the battery to the DC bus keeps the DC link voltage steady. 32 On the downside, there are situations, such as sudden peak power demands, that the battery cannot handle immediately; in such situations it would be desirable to connect the UC directly at the DC link so that it can handle large power fluctuations in a better way. A further important disadvantage is the need for a DC/DC converter with a large current rating, which consequently increases the converter size compared to the battery-UC active topology. 33
| Battery-UC topology with diode
A recent semi-active type that uses the battery/UC configuration along with a bypass diode is shown in Figure 3D. Owing to the presence of the diode, the bidirectional converter can be bypassed during the power transfer from the battery to the DC link. Further, the UC is charged by the battery when its voltage is less than the battery voltage. 31 In this way, the DC-DC converter can be designed for a smaller power rating compared to the active battery/UC configuration. However, the demerits of this topology are that (1) the operation of the UC is limited by the battery voltage, (2) the DC-link voltage varies widely, and (3) the system becomes stable only when the UC voltage goes below the battery voltage during acceleration mode. Hence, to further reduce the size and the control complexity, the bidirectional converter is replaced with a unidirectional converter, as depicted in Figure 3E. As a result, the system efficiency improves and the converter cost also reduces. Moreover, this modification makes the converter operation and control simple. During driving mode, the battery and UC reference voltages are inherently generated; hence, the UC delivers the entire power to the load when the battery voltage is lower. In the previous UC/battery-with-diode topology, by contrast, the battery does not absorb any power since the diode is reverse biased. The disadvantages of this topology are (1) the lack of a degree of freedom to control the system during acceleration and (2) wide variation in the DC bus voltage. In Reference 34, a switch is added in series with the UC in the existing topology to isolate the battery from direct charging; operating the bidirectional DC-DC converter either in buck mode or in boost mode according to a set of predefined rules manages the energy availability in the battery as well as the UC. In Reference 35, adding a switch in series and a diode in parallel to the existing topology leads to four different operating modes, namely (a) battery boost mode, (b) battery buck mode, (c) UC buck mode, and (d) regenerative braking mode, overcoming the abovementioned disadvantages.
| Fully active hybrid
In this hybrid topology, multiple DC-DC converters are employed to overcome the disadvantage of large DC bus voltage variations. 32 Employing multiple bidirectional DC-DC converters achieves complete and individual control over the UC and the battery. 36 Connecting more than one bidirectional DC-DC converter in parallel keeps the DC-bus potential constant at all times. Also, the individual voltages of the battery and the UC are lower than the DC-bus potential, which minimizes voltage balancing issues. Further, the UC is fully utilized, as its voltage can be varied over a wide range. In this topology, the UC bank and the battery pack are each separated from the DC bus by their own DC-DC converter, and the output terminals of the bidirectional DC-DC converters are connected in parallel as depicted in Figure 3F. Since the power flows from the battery and the UC are decoupled from one another, this arrangement offers autonomous control over the ESS. One of the most important advantages of this topology is that it allows lower DC link voltages, which reduces the ESS size and its associated cost. Further, this topology keeps the DC link potential stable, thereby maximizing the utilization of the UC operational voltage range. 37 An important limitation is the requirement of one or more full-sized converters, which entails complex control systems and additional cost.
| Series reconfigurable configuration
In recent literature, a new HESS configuration has been suggested and implemented for EVs. This topology, presented in Figure 3G, uses several bidirectional switches to rearrange the HESS configuration from a series to a parallel connection. Whenever the DC bus voltage falls low, the UC bank is charged by the battery pack; this functionality is suitable when the vehicle is at standstill. Besides, this topology is advantageous during peak power demand, which typically occurs during acceleration and deceleration of the EV. When the switch T2 is closed, the battery pack and UC bank are connected in series, which makes the DC bus potential greater than or equal to the UC voltage and allows the propulsion motors to sustain maximum torque at high operating speeds. An additional DC-DC converter decouples the energy sources from the DC link. Further, this configuration shares the same advantages as the active UC/battery and battery/UC topologies. 38 Its main disadvantage is the complex control strategy required to manage the energy balance between the two sources. Another probable drawback is the risk related to failure of the switches T1, T2 and T3. Moreover, the recent study does not mention the nature of the power electronic switches to be used along the DC link of the system: the use of electromagnetic contactors compromises the system performance owing to the risk of contact failure, whereas solid-state switches for this type of application add complexity but reduce the risk of contact welding. A comparative assessment of the different HESS topologies is provided in Table 1 based on various parameters such as converter type, number of converters required, number of switches used, whether a constant DC bus voltage is maintained, conversion stages, efficiency, and complexity, along with their advantages and disadvantages.
| EMS AND ITS REVIEW ON EM STRATEGIES APPLIED FOR HESS POWERED EV
The Energy Management System (EMS) in an EV is essentially an Electronic Control Unit (ECU) that helps utilize the available energy resources sensibly. Controlled by an advanced microprocessor unit, it receives various sensory inputs, internal system signals and driver commands to calculate the required power demand, upon which appropriate control signals are issued to the PEI for smooth energy transfer from the battery to the wheels and vice versa. The EMS not only interprets and records the data, but also observes the sensor inputs and tries to improve the drive range by applying appropriate control algorithms (Figure 4).
One of the important functions of the EMS in an EV is effective power splitting between the energy storage sources. To optimally divide the power demand between them, a deterministic control equation or strategy is applied in the EMS. However, factors such as the driver command, trip length, electric motor/generator speed, and the SoC of the battery are involved in the EMS selection; without this information, one has only limited control over the power split problem. To summarize, the goals of an EMS in a HESS are to meet the load demand, sustain the battery voltage and UC charge, improve the overall efficiency of the system, and extend the battery lifetime.
The techniques that have been suggested in the recent literature for EM implementation in HEVs, PHEVs, or HESS-based EVs can be grouped into (i) rule-based strategies and (ii) optimization-based techniques. Most of the previously proposed methods are suggested for the battery/UC or FC/battery/UC combinations. This section exclusively elucidates the various EM control strategies discussed in the literature for the battery/UC combination, with the advantages and disadvantages of each method explained. Moreover, a comparison based on the mode of application is derived and presented in Table 2. A classification of the different EM techniques available for HESS is portrayed in Figure 5.
| Rule-based control strategy
Rule-based control techniques are frameworks that create deterministic design rules based on empirical data, heuristics or human expertise for predefined drive cycles of the vehicle. These rules are generally implemented using a lookup table or if-then rule expressions. The rule-based strategies are classified into (a) deterministic rule-based, (b) frequency-based, (c) fuzzy logic-based, and (d) neural network-based control. In this kind of control strategy, a preselected reference battery power (P min) is calculated, and three rules are framed around it: (1) if the load demand power (P dem) is below 0, the UC consumes all the regenerative braking power within its charging limit; (2) if P dem is greater than P min, the battery provides P min while the UC delivers the remainder (P dem − P min) up to its maximum discharging power; (3) otherwise, the battery supplies the power within its discharging limits. Using this rule-based control strategy, the authors have shown that the hybridized battery-UC system can effectively minimize the peak current of the battery and also extend the battery lifespan. The flow chart of the implemented control strategy is presented in Figure 6. With slight modifications, in Reference 52 a voltage control loop for the UC-side converter is incorporated. The control technique depends upon the power demand (P dem), the threshold battery power (P min) and the charging power flowing from the battery to the UC (P UC); therefore, these parameters should be chosen carefully depending upon the type, size, and drive cycle of the vehicle.
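To make the three rules above concrete, the following minimal Python sketch implements the deterministic split. The limit values (P min, the UC charging/discharging limits and the battery discharging limit) are hypothetical placeholders rather than values from the reviewed works, and the sign convention assumes that positive power flows from the storage to the DC bus.

```python
def rule_based_split(p_dem, p_min, p_uc_chg_max, p_uc_dis_max, p_bat_dis_max):
    """Split the load demand p_dem (W) between battery and UC.

    Sign convention: positive power flows from the storage to the DC bus.
    All limit values are hypothetical tuning parameters.
    """
    if p_dem < 0:
        # Rule 1: regenerative braking -> UC absorbs power within its charging limit.
        p_uc = max(p_dem, -p_uc_chg_max)
        p_bat = 0.0
    elif p_dem > p_min:
        # Rule 2: peak demand -> battery supplies p_min, UC covers the rest
        # up to its maximum discharging power.
        p_bat = p_min
        p_uc = min(p_dem - p_min, p_uc_dis_max)
    else:
        # Rule 3: moderate demand -> battery alone, within its discharging limit.
        p_bat = min(p_dem, p_bat_dis_max)
        p_uc = 0.0
    return p_bat, p_uc


if __name__ == "__main__":
    for p in (-15e3, 5e3, 40e3):
        print(p, rule_based_split(p, p_min=10e3, p_uc_chg_max=30e3,
                                  p_uc_dis_max=50e3, p_bat_dis_max=20e3))
```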
| Frequency-based control strategy
Another representative rule-based control strategy is the frequency-based power decomposition strategy employed in References 40,53-60 for the battery-UC HESS power split problem. In this method, the power demand of the HESS is separated into high- and low-frequency signals: the UC delivers the high-frequency component and the battery delivers the low-frequency component of the power consumption. The combination of the UC with the battery filters out the peak current and reduces battery degradation losses. However, this type of filter-based strategy has several disadvantages: (1) it introduces a very large phase shift, 51 and (2) the cutoff frequency of the filter needs to be adjusted in the design in accordance with the load demands.
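The frequency decomposition can be illustrated with a first-order low-pass filter, as sketched below. The time constant and the toy demand profile are arbitrary assumptions used only to show how the slow component of the demand is routed to the battery and the residual peaks to the UC.

```python
import numpy as np

def frequency_split(p_dem, dt, tau):
    """First-order low-pass filter: battery takes the slow component,
    UC takes the residual high-frequency component.
    tau (s) is the hypothetical filter time constant (cutoff f_c = 1/(2*pi*tau))."""
    alpha = dt / (tau + dt)          # discrete-time smoothing factor
    p_bat = np.zeros_like(p_dem)
    for k in range(1, len(p_dem)):
        p_bat[k] = p_bat[k - 1] + alpha * (p_dem[k] - p_bat[k - 1])
    p_uc = p_dem - p_bat             # peaks and regenerative spikes go to the UC
    return p_bat, p_uc

# toy demand profile: slow ramp plus a sharp acceleration spike around t = 30 s
t = np.arange(0.0, 60.0, 0.1)
demand = 5e3 * np.sin(2 * np.pi * t / 60) + 20e3 * (np.abs(t - 30) < 1)
p_bat, p_uc = frequency_split(demand, dt=0.1, tau=5.0)
print(round(float(p_bat.max())), round(float(p_uc.max())))
```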
| Fuzzy logic based control strategy
Fuzzy logic-based control strategies are good at dealing with complex decisions and model uncertainty. The basic idea is to use the available knowledge or expertise about the problem to construct a number of fuzzy rules that mimic human thinking and reasoning, finally represented as a collection of if-then rules. Such strategies have been proposed for EM in References 41,61-67; an adaptive fuzzy controller for EM of a battery/UC HESS is proposed in Reference 67 and depicted in Figure 7. A fuzzy logic controller generally involves three steps: (a) fuzzification, (b) inference, and (c) defuzzification. Based on the input, a membership function is used to convert crisp inputs into fuzzified inputs; an inference mechanism built on the rule base is then used to arrive at conclusions, which finally undergo defuzzification to generate the control input to the system. The main advantages of the fuzzy logic approach are that (1) it is tolerant and robust to component variations and imprecise measurements, and (2) it offers flexibility and adaptation, as the fuzzy logic rules can be framed simply. However, the fuzzy logic control strategy cannot guarantee effective or optimal control under different driving situations, as it still depends on rules and experience. Another disadvantage is that the defuzzification process takes significant time and memory.
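As an illustration of the fuzzification-inference-defuzzification chain, the sketch below implements a small zero-order Sugeno-style controller (a simplification of the Mamdani scheme in Figure 7): triangular membership functions fuzzify the normalized demand and the UC SoC, four hypothetical rules fire, and a weighted average of their crisp consequents plays the role of defuzzification. The membership breakpoints and rule consequents are illustrative assumptions only.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    left = (x - a) / (b - a) if b != a else 1.0
    right = (c - x) / (c - b) if c != b else 1.0
    return max(min(left, right), 0.0)

def fuzzy_battery_share(p_norm, soc_uc):
    """Zero-order Sugeno fuzzy inference.
    p_norm : load demand normalized to [0, 1]
    soc_uc : UC state of charge in [0, 1]
    returns: fraction of the demand assigned to the battery."""
    # fuzzification with triangular sets
    p_low, p_med, p_high = tri(p_norm, -0.5, 0, 0.5), tri(p_norm, 0, 0.5, 1), tri(p_norm, 0.5, 1, 1.5)
    s_low, s_high = tri(soc_uc, -0.5, 0, 0.7), tri(soc_uc, 0.3, 1, 1.5)
    # rule base: (firing strength, crisp consequent = battery share)
    rules = [
        (min(p_high, s_high), 0.2),   # high demand, UC charged  -> UC does most of the work
        (min(p_high, s_low),  0.8),   # high demand, UC depleted -> battery must step in
        (p_med,               0.6),
        (p_low,               1.0),   # low demand -> battery alone, let the UC recover
    ]
    w = sum(r[0] for r in rules)
    # weighted-average "defuzzification" of the rule consequents
    return sum(r[0] * r[1] for r in rules) / w if w > 0 else 1.0

print(fuzzy_battery_share(0.9, 0.9), fuzzy_battery_share(0.9, 0.1))
```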
| Neural network-based control strategy
A Neural Network (NN) resembles the characteristics of the human brain in that it tries to take intelligent decisions by computing and mimicking the neuronal activity of human behavior. The NN trains itself through a repeated learning process over a large number of training data instances and updates the weights of the hidden layers through a weight adaptation function, as depicted in Figure 8. 68 Recently, NN-based control strategies for EV battery/UC HESS have been proposed in References 69-71. However, an NN for a HESS-powered EV requires a large amount of training data from past information to minimize the battery peak currents and obtain the ideal UC current, and thus to ascertain the best power split between battery and UC. The power demand and different drive cycle sets are taken as the input data, and the UC current as the output data of the NN. This method offers the following advantages: (a) a high degree of freedom, (b) strong reasoning and adaptive capability, (c) the ability to solve nonlinear control problems, and (d) fault tolerance. The accuracy of this method depends on the amount of training data used.
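A minimal sketch of such a data-driven mapping is given below: a one-hidden-layer network implemented in NumPy is trained by gradient descent to reproduce a simple heuristic "teacher" that maps (normalized power demand, UC SoC) to a UC power share. The synthetic data, network size and learning rate are assumptions for illustration only; in the cited works the targets would come from drive-cycle data or an offline optimizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: input = (normalized power demand, UC SoC),
# target = UC share produced here by a simple heuristic "teacher".
X = rng.uniform(0.0, 1.0, size=(2000, 2))
y = (np.clip(X[:, 0] - 0.3, 0, None) * X[:, 1]).reshape(-1, 1)

# One-hidden-layer MLP with tanh activation, trained by plain gradient descent.
W1, b1 = rng.normal(0, 0.5, (2, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)
lr = 0.05

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)           # forward pass
    out = h @ W2 + b2
    err = out - y                       # mean-squared-error gradient
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)    # back-propagate through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

test = np.array([[0.9, 0.8], [0.9, 0.1]])
print(np.tanh(test @ W1 + b1) @ W2 + b2)  # predicted UC share for two situations
```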
| Optimization based control strategy
Unlike rule-based EM methods, optimization-based control utilizes either optimal control theory or soft computing algorithms; EM based on optimal control does not require prior information to solve the control problem. 42,45,46,72-77 The following section discusses and analyzes the optimization-based control methods applied for EM in HESS-powered EVs. They are classified as (1) dynamic programming, (2) Pontryagin's principle, (3) instantaneous optimization, (4) meta-heuristic optimization methods, and (5) model predictive control.
| Dynamic programming
Unlike the rule-based methods, dynamic programming (DP), one of the most powerful mathematical tools derived from Bellman's optimality principle, generally depends upon a system model, either numerical or analytical, to compute the best control strategy. Based on the model, the best power split control strategy can be obtained by applying DP, although it requires accurate terrain information beforehand to predict the future power demand of the HESS. A detailed energy optimization in HESS using DP is presented in Reference 73, where DP is applied to optimize the size of the UC, which influences the battery degradation and cost in an EV. The objective (cost) function for DP can be written as 43 the terminal cost for the end step d,
J_d(x(n)) = g_d(x(n)),
together with the cumulative objective function at the kth step (1 ≤ k ≤ d − 1),
J_k(x(n)) = min_{u(m)} [ g_k(x(n), u(m)) + J_{k+1}(f_{k+1}) ],
where g_d(x(n)) represents the cost of the end step, g_k(x(n), u(m)) the cost of step k, f_{k+1} the state variable at step k + 1, and J_{k+1}(f_{k+1}) the optimized objective function with the best result at step k + 1. In the case of multiple-input multiple-output (MIMO) systems, DP uses a large amount of data, leading to a heavy computational burden.
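A toy backward-induction sketch of this recursion is shown below, with the UC stored energy as the discretized state, the battery power as the control, and a quadratic battery-power penalty as the stage cost; the grids, demand profile and cost weights are hypothetical and serve only to illustrate the Bellman sweep.

```python
import numpy as np

# Hypothetical discretization: UC energy levels (Wh) and battery power levels (W).
E_GRID = np.linspace(0.0, 100.0, 41)          # UC stored energy states
U_GRID = np.linspace(0.0, 20e3, 21)           # admissible battery power levels
DT = 1.0 / 3600.0                             # 1 s step expressed in hours
DEMAND = [15e3, 18e3, -10e3, 5e3, 22e3, 0.0]  # known power demand profile (W)

def step(e_uc, p_bat, p_dem):
    """UC energy after one step; the UC covers whatever the battery does not.
    The energy is clipped at the grid bounds for simplicity."""
    p_uc = p_dem - p_bat
    return np.clip(e_uc - p_uc * DT, E_GRID[0], E_GRID[-1])

def stage_cost(p_bat):
    return (p_bat / 1e3) ** 2                 # penalize battery power (proxy for aging)

# Backward sweep of Bellman's recursion: J_k(e) = min_u [ g(u) + J_{k+1}(f(e, u)) ].
J = np.zeros(len(E_GRID))                     # terminal cost g_d = 0
policy = []
for p_dem in reversed(DEMAND):
    J_new = np.empty_like(J)
    best_u = np.empty_like(J)
    for i, e in enumerate(E_GRID):
        costs = [stage_cost(u) + np.interp(step(e, u, p_dem), E_GRID, J)
                 for u in U_GRID]
        k = int(np.argmin(costs))
        J_new[i], best_u[i] = costs[k], U_GRID[k]
    J, policy = J_new, [best_u] + policy

print("optimal first-step battery power at half-full UC:",
      policy[0][len(E_GRID) // 2])
```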
| Instantaneous optimization
For EM in HESS, it is necessary to have accurate information on the drive cycle to predict the future power demand.
On the other hand, it is not simple to obtain accurate power demand data, since the vehicle movement depends on various factors such as driving patterns and road traffic. To solve the EM problem when no future operating information is available, Reference 45 formulates an instantaneous optimization problem for the battery-UC HESS power split. In order to utilize the UC efficiently, the UC should be charged or discharged properly. Since the prediction of a future power demand profile is difficult, a simple strategy based on the vehicle speed (V s) can be used to adjust the UC SoC: whenever V s is low, the UC should be operated in a high SoC range, so that it can deliver enough stored energy to meet the peak power during accelerations; on the contrary, the UC SoC needs to be low when V s is high, to leave room for the regenerative power during decelerations. In particular, the electric machine usually requires a large power whenever V s increases from zero. Thus, a reference voltage of the UC, V ref uc, is adjusted using the following equation.
where V uc,min and V uc,max are the boundaries of the UC voltage and V s,max is the maximum vehicle speed. The UC reference voltage is repeatedly computed and updated according to the real-time vehicle speed. This reference value is then used in the instantaneous optimization method to minimize the battery current fluctuations, the battery current magnitude, and the difference between the actual UC voltage and its reference value. An optimization problem is formulated for this purpose, which is repeatedly solved by general solvers in polynomial time, and the optimal power split between the battery and the UC is computed at each instant. The advantage of this instantaneous optimization-based control strategy is that it does not depend on the future vehicle operating profile. Besides, to ensure that the UC can supply or absorb adequate power at each time interval, the UC reference value is updated deterministically.
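The sketch below illustrates the idea under stated assumptions: a linear speed-to-voltage mapping stands in for the reference equation (which is not reproduced here and may differ in the cited work), and the per-instant split is found by a simple grid search over the battery power with penalties on battery power magnitude, battery power fluctuation and UC voltage tracking. The weights and the crude voltage-update proxy are illustrative only.

```python
def uc_reference_voltage(v_s, v_s_max, v_uc_min, v_uc_max):
    """Speed-dependent UC reference voltage: high at standstill so the UC can
    assist acceleration, low at high speed so it can absorb regenerative energy.
    A linear mapping is assumed here; the paper's exact expression may differ."""
    ratio = min(max(v_s / v_s_max, 0.0), 1.0)
    return v_uc_max - (v_uc_max - v_uc_min) * ratio

def instantaneous_split(p_dem, v_uc, v_ref, p_bat_prev,
                        w_mag=1.0, w_fluct=5.0, w_track=2.0, p_bat_max=20e3, n=201):
    """One-step cost minimization over a gridded battery power:
    penalizes battery power magnitude, battery power fluctuation and the
    deviation of the UC voltage from its reference (all weights hypothetical)."""
    best = None
    for i in range(n):
        p_bat = p_bat_max * i / (n - 1)
        p_uc = p_dem - p_bat
        # crude proxy: discharging the UC lowers its voltage, charging raises it
        v_next = v_uc - 1e-4 * p_uc
        cost = (w_mag * (p_bat / 1e3) ** 2
                + w_fluct * ((p_bat - p_bat_prev) / 1e3) ** 2
                + w_track * (v_next - v_ref) ** 2)
        if best is None or cost < best[0]:
            best = (cost, p_bat, p_uc)
    return best[1], best[2]

v_ref = uc_reference_voltage(v_s=10.0, v_s_max=40.0, v_uc_min=24.0, v_uc_max=48.0)
print(v_ref, instantaneous_split(p_dem=25e3, v_uc=40.0, v_ref=v_ref, p_bat_prev=5e3))
```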
| Pontryagin's principle
Another method to solve optimal control problems, proposed in Reference 78, is based on Pontryagin's principle. This method gives the optimal control of a dynamical system that moves from one state to another in the presence of constraints. By treating the system, with its states and control inputs, as a Hamiltonian system (HS), an EM for the HESS is designed. For EM in a HESS-based EV, it becomes a two-point boundary value problem in which the battery RMS current must be minimized under the conditions of the Hamiltonian. To solve this problem, three things have to be defined: (a) the system's dynamical model, (b) the cost function, and (c) the constraints. In References 44 and 79 the dynamics of the UC are taken as the state variable, the battery RMS current (i bat_rms) is taken as the cost function to be minimized under peak load demands, and the constraints are to regulate the UC voltage V uc and the current I uc_conv flowing out of the converter; the corresponding dynamical model, cost function and constraints are given in Equations (3) to (6). From the system model in Equation (3), this strategy derives an optimal control law that tracks the traction current trajectory by generating a suitable battery reference current. The cost function in Equation (4) is recast as the Hamiltonian function in Equation (7), where λ(t) is a co-state variable given by the product of current and capacitance, and the necessary conditions for the minimization of the Hamiltonian are stated in Equation (8).
Further, the optimal control law can be derived by solving Equations (8) and (9) while satisfying the constraints. Finally, the control law, that is, the battery reference current (i bat_ref), is given in Reference 44 as Equation (9), and its implementation is depicted in Figure 9.
where R uc is the effective resistance of UC.
| Meta-Heuristic-based EM strategies
In a hybrid energy storage-based EV, the foremost EM problems caused by the load demand are an unpredictable drive range and wide variations in the power request. The key goal of the EM is to minimize the absolute difference between the power supplied by the HESS, that is, by the battery and the ultracapacitor, and the power demand. At any instant, the total power supplied by the HESS is a linear combination of the instantaneous powers delivered by these sources. The optimization problem, given in Equation (11), can be solved for each time interval k = 1, 2, …, n, as in Reference 66, subject to a set of constraints with j ∈ {bat, SC}, where C_bat and C_SC are control variables that define the power share between the sources. At each time interval, the model calculates the open-circuit voltage V_oc,j(k) as a function of the source's SoC, SoC_j(k), the minimum open-circuit voltage V_oc,min,j(k) and the no-load voltage drop δ_j. From these constraints, the reference currents and voltages can be generated by solving the objective function using any one of the meta-heuristic algorithms. In this section, a detailed description of the particle swarm optimization (PSO) and simulated annealing (SA) algorithms is given.
Particle swarm optimization (PSO)
PSO is a popular meta-heuristic optimization algorithm based on the phenomenon of bird flocking. 80 A swarm of particles moves along a predefined search space with a velocity v in order to attain the global best position. The PSO algorithm generally has three phases of implementation: the initialization phase, the exploration phase, and the evaluation phase.
• Initialization Phase: Initialize the population size and random search space.
• Exploration Phase: The current position x_i of each swarm particle is updated as it moves along the defined search space with velocity v_i. Throughout the exploration phase, the particle's current best position is stored as P best and the global best position is treated as G best. Each particle's velocity and position are updated using Equations (16) and (17):
v_i^{k+1} = w v_i^k + C_1 r_1 (P_best − x_i^k) + C_2 r_2 (G_best − x_i^k),
x_i^{k+1} = x_i^k + v_i^{k+1},
where x_i^{k+1} denotes the updated position of the ith particle, x_i^k is its current position, v_i^k is the current velocity, and v_i^{k+1} is the updated velocity. w is the inertia factor that influences the velocity of the particle, C_1 and C_2 are constants that control the individual and social behavior of each particle, and r_1 and r_2 are random numbers between 0 and 1 which help the algorithm escape from local optima.
• Evaluation Phase: The fitness value of each particle is evaluated, and the data are recorded as P best and G best.
For better understanding, a flow chart is provided in Figure 10. The PSO method is a continuous process of finding the global best solution in minimal computation time, which makes it suitable for real-time applications such as EM for EVs powered by HESS and the optimal sizing of battery/UC systems.
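The sketch below applies the standard PSO loop to a toy version of the power-split problem, treating the battery power at each step of a short demand window as the decision vector. The objective (a battery RMS proxy plus a charge-sustaining penalty on the UC), the swarm size and the constants w, C1 and C2 are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

DEMAND = np.array([15e3, 18e3, -10e3, 5e3, 22e3])   # short demand window (W)

def cost(p_bat):
    """Objective: battery RMS power proxy plus a penalty if the UC has to
    deliver net energy over the window (so it stays charge-sustaining)."""
    p_uc = DEMAND - p_bat
    return np.sqrt(np.mean((p_bat / 1e3) ** 2)) + 0.1 * abs(p_uc.sum()) / 1e3

# --- standard PSO loop ------------------------------------------------------
n_particles, dim, iters = 30, len(DEMAND), 200
w, c1, c2 = 0.7, 1.5, 1.5                    # inertia and acceleration constants
x = rng.uniform(0, 20e3, (n_particles, dim)) # particle positions (battery powers)
v = np.zeros_like(x)
p_best = x.copy()
p_best_val = np.array([cost(p) for p in x])
g_best = p_best[p_best_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    x = np.clip(x + v, 0, 20e3)              # respect battery power limits
    vals = np.array([cost(p) for p in x])
    improved = vals < p_best_val
    p_best[improved] = x[improved]
    p_best_val[improved] = vals[improved]
    g_best = p_best[p_best_val.argmin()].copy()

print("best battery power profile (kW):", np.round(g_best / 1e3, 1))
```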
Simulated annealing algorithm
Recently, a stochastic optimization method known as simulated annealing (SA) has been used to find global extrema of large optimization problems. 82 It is based on the physical annealing analogy involving the solidification of fluids and the random distribution of particles in the liquid. The SA algorithm has two main stages: one is the transition between states, and the other is the control of the temperature so as to reach the minimal energy state. The method starts with the initialization of a state x 1 with an initial energy level E 1. The next candidate state x 2, with an energy level E 2, is accepted immediately if Equation (20), that is, a non-increasing energy, is satisfied.
If instead the energy increases, that is, E 2 − E 1 > 0, the new state is accepted with the probability written in Equation (21) in terms of the energy difference and the temperature, where T C denotes the control parameter; it is varied over the entire search of the algorithm until the lowest energy state is achieved. The merit of this algorithm is its high probability of finding the best optimal point even after reaching a local optimum: it keeps searching for solutions until the objective function provides a better solution than the current candidate. For better understanding, a flow chart describing the SA process is depicted in Figure 11.
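A compact SA sketch for the same toy power-split objective is given below. It uses the standard Metropolis acceptance rule and a geometric cooling schedule; the initial temperature, cooling factor, neighborhood size and the objective itself are illustrative assumptions rather than values from the cited works.

```python
import math
import random

random.seed(0)

DEMAND = [15e3, 18e3, -10e3, 5e3, 22e3]      # short demand window (W)

def energy(p_bat):
    """Same illustrative objective as before: battery RMS proxy plus a
    charge-sustaining penalty on the UC."""
    p_uc = [d - b for d, b in zip(DEMAND, p_bat)]
    rms = math.sqrt(sum((b / 1e3) ** 2 for b in p_bat) / len(p_bat))
    return rms + 0.1 * abs(sum(p_uc)) / 1e3

state = [10e3] * len(DEMAND)                 # initial candidate solution
e1 = energy(state)
T = 10.0                                     # initial control temperature
while T > 1e-3:
    for _ in range(50):                      # inner loop at fixed temperature
        cand = [min(max(b + random.uniform(-2e3, 2e3), 0.0), 20e3) for b in state]
        e2 = energy(cand)
        # Metropolis acceptance: always accept improvements, accept worse
        # candidates with probability exp(-(e2 - e1) / T).
        if e2 <= e1 or random.random() < math.exp(-(e2 - e1) / T):
            state, e1 = cand, e2
    T *= 0.9                                 # geometric cooling schedule

print("best energy found:", round(e1, 3))
```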
| Model predictive control
Commonly called receding horizon control, model predictive control (MPC) predicts the future inputs from past information by minimizing a cost function with real-time optimization and feedback correction from the predictive models. 47,83 The schematic diagram of MPC is depicted in Figure 12. Owing to this inherent optimization and feedback correction, MPC is used to solve HESS-based EM problems in EVs. In this type of control, a state-space model of the system is used to predict the future values of the output variables. The model outputs and process outputs are compared and processed through the prediction block to predict the future outputs of the process. At each sampling instant, a set-point calculation is carried out to fix the targets for the controller, and the feedback error is corrected based on the optimization problem defined over the finite horizon. The performance of MPC mainly depends on two aspects: (1) the prediction accuracy and (2) the optimization of the control strategy. In the case of HESS-based EM, the objective function is formulated to minimize the power demand from the battery by efficiently utilizing the UC under acceleration and deceleration conditions. Further, depending on the HESS arrangement, the control problem is treated as the optimization of a cost function that reconciles the conflicts between battery and UC, as given in Reference 47, subject to constraints on the operating limits of the battery. By solving this cost function, optimal power references for the battery SoC and UC SoC are obtained for the EM between these two sources.
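A minimal receding-horizon sketch is given below: at each step, a short horizon of battery power sequences is enumerated over a coarse grid, scored with a battery-power penalty plus a UC energy-tracking term, and only the first move is applied before re-planning. The horizon length, grids, capacity and weights are hypothetical; a practical MPC would use a proper state-space model and a numerical solver instead of enumeration.

```python
import itertools
import numpy as np

DT = 1.0                                      # step length (s)
E_MAX = 100.0 * 3600.0                        # UC energy capacity (J), hypothetical
U_GRID = np.linspace(0, 20e3, 11)             # admissible battery power levels (W)

def mpc_step(e_uc, demand_forecast, horizon=3):
    """Receding-horizon control: enumerate battery power sequences over the
    horizon, score them, and return only the first move (then re-plan)."""
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(U_GRID, repeat=horizon):
        e, cost = e_uc, 0.0
        for p_bat, p_dem in zip(seq, demand_forecast[:horizon]):
            p_uc = p_dem - p_bat
            e = e - p_uc * DT
            if not (0.0 <= e <= E_MAX):       # infeasible UC energy trajectory
                cost = float("inf")
                break
            # penalize battery power and deviation of UC energy from mid-level
            cost += (p_bat / 1e3) ** 2 + 0.01 * ((e - 0.5 * E_MAX) / 3600.0) ** 2
        if cost < best_cost:
            best_cost, best_seq = cost, seq
    return best_seq[0]

demand = [15e3, 18e3, -10e3, 5e3, 22e3, 0.0]
e_uc = 0.5 * E_MAX
for k in range(len(demand) - 2):
    p_bat = mpc_step(e_uc, demand[k:])
    e_uc -= (demand[k] - p_bat) * DT
    print(f"step {k}: battery {p_bat/1e3:.0f} kW, UC energy {e_uc/3600:.1f} Wh")
```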
| PERFORMANCE EVALUATION OF EM STRATEGIES APPLIED FOR EM IN HESS CONFIGURATIONS
With various approaches to EM strategies detailed in the previous sections, their adoption in semi-active and fully active hybrid topologies is now reviewed. While many techniques have been derived and tested with different drive cycles such as UDDS, NEDC, ECE-15, ECE-45, ARTEMIS, LA92, US06, FTP75, CBDC, WYC, JC08, and HWFET, their assessment on a common platform is quite difficult. Therefore, the following section critically reviews the work done on EM strategies applied to semi-active and fully active topologies only. In addition, a comparative study is carried out for the above topologies based on battery power, UC power, drive cycle used for testing, peak power demand, and regenerative power recovered during deceleration. Further, a performance evaluation is made with respect to the improvements in battery lifetime, battery RMS current reduction, and drive range extension.
| Performance evaluation of EM strategies applied for semi-active configuration
Most of the works reviewed so far use the semi-active topology with any one of the following: (1) rule-based approach, (2) fuzzy logic, (3) frequency separation, (4) dynamic programming, (5) instantaneous optimization, (6) SA, (7) PSO optimization, and (8) model predictive control (MPC) strategy. The detailed block diagram of the semi-active HESS-based EMS is shown in Figure 13. Commonly, a two-quadrant bidirectional converter is used at the power stage on the battery side, while the UC is connected directly to the DC bus. One of the abovementioned EM strategies is employed at the top (supervisory) level, providing suitable inputs to the power controller at the bottom level, which in turn controls the current flow from the battery depending on the power demand.
A simple deterministic rule-based control strategy is proposed in Reference 84 to meet the power demand as well as to utilize the energy available during regenerative braking. The proposed strategy has three different modes of operation, acceleration mode, constant speed mode, and braking mode, and the mode selection is automatic based on the drive cycle. From the results, it is seen that the energy consumed by the battery is reduced by up to 47.99% for the City II drive cycle, and the drive range is extended by 27.17% and 13.47% for the ECE-15 and NEDC drive cycles, respectively.
A new power management strategy to improve accuracy and overcome uncertainties in terrain information is studied in References 66 and 67. With the aim of improving the battery life, the overall system performance is estimated. Further, the EM strategy has the advantage of adapting the choice of the membership function to previous drive cycle patterns. Through this adaptability, the strategy is able to suppress the battery charging/discharging current variation to less than 4%, achieving an efficiency of 86.20%.
As an alternative, a power split strategy has been proposed in Reference 40 where the power demand of the vehicle is divided into low- and high-frequency components: the low-frequency component is supplied by the battery while the high-frequency component is bypassed through the UC pack. In Reference 63 a wavelet transform-based (WT) frequency decoupling EM scheme is proposed at the bottom level to split the power demand between battery and UC, supervised by a fuzzy logic controller (FLC) at the top level that generates suitable power references for the parallel active HESS arrangement. This modification enables the control strategy to optimally utilize the energy from the UC and makes the system robust to unpredictable driving conditions; furthermore, it extends the battery lifetime and reduces the stress on the battery. In addition to WT-fuzzy at the supervisory level, an adaptive WT-fuzzy logic-based EM has been proposed in Reference 58 by integrating driving pattern recognition (DPR) for the UC in EV applications. Cluster analysis is utilized by the DPR, and the adaptive wavelet transform extracts and allocates the respective frequency components to the UC and the battery under power demand conditions. The proposed technique reduces the battery charging and discharging current to 58.2% and enhances the battery life and the vehicle endurance limit by 6.16% and 11.06%, respectively.
Recently, Pontryagin's minimum principle has been applied for EM in HESS for EV applications in References 44 and 79. The derived control law acts as a feedback control, eliminating the need for additional controllers. The Tazzari Zero and NEDC models are used for reference and validation purposes. The simulation results show that the proposed strategy provides a longer battery life and outperforms the existing pure battery-based EV with a reduction of the battery RMS current by 50%. In another approach, a new EM is formulated as an instantaneous optimization problem to minimize the battery current variations and the power loss. 73 This EM approach contains two parts: one part calculates the UC voltage reference depending on the load dynamics, and the other optimizes the current sharing between the energy storage devices. Furthermore, a technique to compute the UC voltage reference under real-time operating conditions, taking regenerative braking into account, is also proposed. As a result, a reduction in dynamic battery power of up to 45% is achieved compared with the rule-based approach. Another method adaptively switches between the buck/boost and battery/UC modes, optimizing the energy distribution by SA for a semi-active arrangement and achieving an improvement of 2.2% for UDDS and 0.7% for NEDC in comparison with a rule-based strategy. 34 Alternatively, a convex optimization-based power sharing strategy for two propulsion machines fed by a semi-active HESS is proposed in Reference 75. The results show that (1) a 10% improvement in powertrain efficiency, (2) an 82% reduction in battery lifetime cost, and (3) a 27% driving range extension are achieved. In addition, an optimization problem is framed to minimize the error between the UC reference and actual currents and the battery current. 52 The problem is solved applying the Kuhn-Tucker conditions, and the battery performance is evaluated based on the battery's State of Health (SoH). To derive the best HESS configuration, a dynamic programming approach is considered in Reference 74 for an electric bus; after obtaining the best configuration and size of the HESS, a supervisory rule-based control strategy is implemented, reducing the battery lifetime cost significantly.
A nonlinear MPC method with a nonuniform sample time at the supervisory level is studied in detail in Reference 47. For predicting the future inputs, the control problem is defined over short and long prediction horizons with two different sample times. At the start, a smaller sample time is used for the prediction horizon to reduce the battery peak current; a longer sample time is then used at the high-level prediction to fully utilize the UCs and to decrease rapid power changes by maintaining a constant voltage at the battery. This also reduces the peak power consumption, which in turn improves the driving range, the lifespan of the battery, and the vehicle performance. Additionally, since the references for the UCs and the battery are updated more often, further reduction of the battery power variations is possible. A detailed comparative study in Reference 52 considers deterministic rule-based, frequency-based, fuzzy logic and MPC methods for a HESS-based EV; from the results, the life cycle cost of the HESS is reduced by 23% compared to a battery-only EV. A comparative study between optimization-based and rule-based EM strategies is also performed in Reference 50. As far as practical application is concerned, deterministic rule-based and fuzzy logic strategies are regarded as the best choices because of their notable performance and easy implementation.
Further, for the semi-active topology, a critical performance evaluation based on voltage variation and system design is presented in Table 3.
| Performance evaluation of EM strategies applied for fully active configuration
Compared to the semi-active topology, the fully active hybrid topology is adopted because the battery power flow is regulated via an additional bidirectional converter. The EMS of the fully active HESS configuration is shown in Figure 14.
As mentioned in earlier sections, this topology has full control over the battery current as well as the UC voltage. Since the control techniques play a major role in enhancing the durability of the battery, extending the drive range, and maintaining a stiff DC-bus voltage, a comparative analysis of voltage variations and system design is performed for the EM techniques applied to the fully active topology (Table 4).
Using a rule-based control strategy, the authors in Reference 39 have shown that the hybridized battery-UC system is capable of reducing the peak current of the battery along with improving its lifespan. A novel rule-based EM strategy for light EVs with multiple energy sources for next-generation transportation is explained in Reference 37. The proposed system focuses on efficiency enhancement by combining multiple renewable source models and control algorithms with multiple switches for maximum energy harvest; its performance is benchmarked with the standard ECE-47 drive cycle under standard and abnormal constraints. The results show that, with a proper control algorithm, the energy between multiple sources can be managed effectively. An EM strategy incorporating a rule-based meta-heuristic technique is presented in Reference 48 for EVs fed by multiple sources. First, a long-term management layer with a dynamical solution space and a set of possible rules is evolved to monitor the energy level of the battery; second, a simulated annealing technique is utilized to share the power optimally. The references generated for the lower-level controllers of the DC-DC converters meet the power demand irrespective of future power requirements. This EM method achieves good control and provides a feasible way to share the available energy online among various sources with better range and reliability.
A hybrid active parallel topology with two separate DC-DC converters, one for a NiMH battery and another for the UCs, both operating in bidirectional mode, is proposed in Reference 49. A cascaded control scheme with current and voltage controllers is derived for battery voltage regulation. Subsequently, an EM approach incorporating a power follower with an efficiency map is developed to generate the reference current for the UC-fed DC-DC converter depending on the power demand. The advantages of this EM strategy are (1) an improvement in the DC link voltage stability and (2) the recharging of the UC with the energy recovered during braking. Real-time Hardware-in-the-Loop (HIL) testing of a similar strategy using particle swarm optimization to share the power optimally is explained in Reference 49. A comparative study of optimization-based and rule-based EM strategies is also performed in Reference 50: by controlling the required UC voltage range, lambda control can be employed for wide UC voltage ranges, whereas rule-based or filtering techniques can be used for narrow voltage ranges. Only a 2% improvement is obtained in the RMS current of the battery, and the efficiency aspects of the entire model are not taken into account in this study. In Reference 51 a control strategy incorporating deterministic rules in a power follower approach for the power management of a dual energy storage system (DESS) has been proposed. Here, the EM is decomposed into two layers, strategy and control: a cascaded or decoupled EM method is used to control the battery-side converter, and a current control strategy based on the power follower approach is used for the UC-side converter.
An adaptive frequency splitter is utilized in Reference 55 to channel the low-frequency and high-frequency content to the battery and the UC, with the UC treated as the peak power unit. The outcome of the work suggests that the approach is suitable mainly for off-loading applications owing to its simplicity, flexibility, and robustness. A novel frequency-adaptive Power Split Strategy (PSS) for HESS is detailed in References 56 and 57. The developed PSS produces uninterrupted outputs; hence, variations in the driving conditions can be identified in real time, even for small transitions, by observing the working conditions of every source. 61 Further, References 59 and 85 outline a Driving Cycle Identification (DCI) scheme incorporating a Learning Vector Quantization (LVQ) NN to forecast the power demand during the power sharing between the battery and the UC. A multilevel Haar wavelet transform (Haar-WT) is incorporated to assign the frequency components required from the UC under power demand circumstances. The proposed technique reduces the overcharging of the UC, thereby increasing the system efficiency and the lifetime of the battery.
An artificial potential field is developed in Reference 60 to allocate power within the HESS by estimating the power allocation ratio and the cutoff frequency with the aid of a virtual attractive force. The artificial potential field allows the battery to operate within its normal operating limits with reduced stress; it is found that the proposed system reduces the battery capacity loss by 15%, thereby enhancing the lifetime of the battery.
In Reference 62 detailed modeling, analysis, and sizing of a multiple ESS with fuel cell, battery, and UC are investigated, and a fuzzy-rule-based supervisory EM algorithm is proposed for monitoring and control. The energy flow in the UC is tested for the simple ECE-15 urban drive cycle. The results suggest that the voltage fluctuations caused by the load characteristics are smoothened by the UC, and the battery voltage is maintained almost constant most of the time with the help of the fuel cell. A position-based power management using T-S fuzzy logic for HESS is proposed in Reference 41 for the practical implementation of an electric tram. Here, global positioning systems (GPS) are used to predict the future power flow and energy demands in preparation for the UC control, and a heuristic optimization algorithm called differential evolution (DE) is also employed. 65 The results suggest a 55% improvement in battery lifetime.
To outperform the rule-based EM strategies, a supervisory EM strategy has been proposed in Reference 70 by formulating the power sharing problem as a multiobjective optimization. It is solved by applying the DP method to various sets of drive cycles, and the results obtained are used to train neural networks. This intelligent EM controller at the supervisory level improves the battery life by 60% and achieves higher efficiency. An SA coupled with a dynamically restricted search space uses global optimization to solve the EM problem in a fully active HESS-based EV. 72 A supervisory architecture based on short-term and long-term management is detailed to accomplish the overall EM. A comparative study has also been carried out using PSO and SA for similar optimization problems. 46 Both the SA and PSO EM approaches obtain consistently good results; there is a slight advantage in using PSO, as it imposes a lower computational burden than SA. The outcome shows that PSO provides a good solution for online energy distribution between the two energy sources, reducing the power losses in the battery by limiting rapid battery usage. Reference 45 proposes a mathematical representation of an EM for HESS in EVs using gamma functions: first, a gamma-based strategy (GBS) is used to solve the EM problem, and then a genetic algorithm is used to optimally choose the value of gamma (ɤ) to improve the performance of the GBS strategy. The parameters considered are the RMS and peak currents of the battery and the life cycle cost of the HESS-powered EV, and the results are compared with the rule-based strategy. These methods are capable of reducing the battery RMS current by 40% in NEDC. It is observed that the EV with HESS consumes somewhat more energy, the difference being about 0.5% with respect to the RBS and GBS strategies; the HESS also increases the weight and introduces losses in the DC-DC converters.
The study in Reference 70 deals with a novel predictive algorithm, which utilizes a state-based technique to forecast the load demands. Decisions on the power distribution are made in real time based on forecasts and the probabilities of state trajectories related to the system losses. A fully active HESS is implemented and validated experimentally with a programmable drive cycle containing a strong regenerative power component. It is practically proven that the HESS is more capable and efficient, and captures the excess regenerative energy that would otherwise be wasted under braking because of the battery's limited ability to accept charging current.
For the fully active hybrid topology, polynomial correctors have been used for the current and voltage control of the UC and the battery. 86 The EM is achieved by controlling the current sharing between the loads and the DC-bus voltage. Two different methods have been proposed: in the first, the UC current and the DC-bus voltage are controlled by taking the battery current as the inner loop; in the second, the battery voltage is kept fixed and the currents of both the UC and the battery are controlled via the polynomial corrector method. A control scheme proposed in Reference 87 for the parallel hybrid topology indicates that any supervisory EM method can be applied to obtain efficient operation of a HESS-based EV.
| SUGGESTIONS FOR IMPROVEMENTS IN EM FOR HESS BASED EV
The EM strategies in this review focus on the efficient and flexible use of the UC together with the battery in an EV over various drive cycles. The challenges that still persist in EM stem from improper sizing of the UC and from the influence of the EM strategy on battery aging. Therefore, a suitability study of the different EM techniques applied to semi-active and fully active HESS is conducted based on the following parameters: (a) reduction of the battery peak current, (b) drive range extension, (c) lifetime maximization, and (d) capacity loss minimization; the results are tabulated in Table 5. To express the suitability of a method, the indices low, medium, and high are used: high indicates an acceptance percentage greater than 70%, medium between 30% and 70%, and low less than 30%. From Table 5, the following limitations are observed for the EM techniques applied to both semi-active and fully active configurations.
• The deterministic rule-based and fuzzy logic-based strategies can be implemented in real time with little computational burden. Although robust and insensitive to parameter variations, these methods fail to tackle fast variations in the power demanded from the battery.
• Filtration or frequency-based EM techniques can decouple the load power into different frequency components effectively, but require an accurate estimate of the frequencies to be separated.
• DP, instantaneous and convex optimization techniques can minimize the power demand from the battery, but these methods are sensitive to model uncertainties and changes in drive pattern, and incur an excessive computational burden in obtaining the optimal solution.
• Meta-heuristic techniques such as GA, PSO, and SA applied for EM in battery/UC-powered EVs pose challenges for real-time implementation in the form of control parameter selection and tuning.
Based on the abovementioned limitations and the literature review performed so far, the following suggestions could be investigated in the near future to enhance the performance of HESS-fed EVs.
• Rule-based strategies for semi-active and fully active hybrid topologies achieve reductions of up to 81% and of less than 30% in battery peak current. 48,84 To clearly investigate the performance improvements, advantages and disadvantages, a comparative study between the two should be performed under similar power ratings.
• The MPC-based EM strategy shows better performance among the methods and can therefore be implemented at the supervisory level for the parallel active hybrid topology under different sample times to achieve a battery peak current reduction of more than 40%, as in Reference 45.
• A comparative study between the different EM strategies applied to the fully active topology can be carried out to evaluate the effectiveness of each strategy under the NEDC, UDDS, NYCC, US06, ECE-15, ECE-45, ARTEMIS, WYC, and WVU drive cycles.
• An extended suitability study can be performed for various DC-DC converters, such as Cuk, SEPIC, Luo and full-bridge converters, to study the system efficiency under dynamic braking conditions, the stability, and the cost for EV applications.
• For the battery/UC-with-diode topology, a supervisory-level controller based on MPC can be applied to evaluate performance indices such as efficiency, lifetime, degradation, SoH, and drive range extension.
• The impact of EM strategies on battery aging can be explored using optimization-based techniques in order to reduce the battery aging and capacity loss by more than 10%.
• Multiple objective functions can be formulated based on the battery SoH, configuration, cost and weight, and solved using various optimization methods for optimal UC sizing of the HESS in order to lessen the size of the battery.
| CONCLUSIONS AND RECOMMENDATIONS
Due to the varied drive cycles of an EV, the battery combined with a UC has huge potential to improve the performance and efficiency of the EV system. This article presents a brief overview of (1) the classification of HESS topologies such as passive parallel, semi-active, fully active, and series reconfigurable topologies, (2) EM strategies such as rule-based and optimization-based techniques, and (3) the application of EM strategies to these HESS topologies. A detailed comparative study is presented to analyze the performance of the EM strategies applied to these topologies, and the different ways of hybridizing the battery and UC are also studied to understand their respective advantages and disadvantages. Based on the study performed in the previous sections, the following conclusions are derived:
• Many available EM methods are applied only to semi-active and parallel active hybrid topologies.
• Choosing an appropriate EM strategy at the supervisory level can significantly reduce the battery peak current, lifetime cost and degradation, and extends the driving range.
• A predefined rule-based EM strategy for the semi-active HESS arrangement provides significant improvements in drive range extension for the City II, ECE-15, and NEDC drive cycles.
• Selection of the optimal UC size and the application of DP improve the life cycle cost and reduce the size of the battery.
• Based on the analysis, an MPC-based EM strategy at the supervisory level functioning at a nonuniform sample time for the semi-active arrangement shows a significant reduction of the stress on the battery, thus increasing its lifetime.
• For a predefined set of drive cycles, optimization-based EM strategies provide better results, although these techniques work better offline than online.
• For the fully active HESS arrangement, the stability of the entire system is improved by using efficiency maps along with a rule-based strategy.
• A fuzzy logic controller at the supervisory level for the fully active HESS arrangement with frequency-based control gives noteworthy improvements in reducing the capacity loss and extends the life of the battery by up to 55%.
• The application of metaheuristic algorithms such as PSO and SA at the short-term level, supported by a well-tuned rule-based strategy, exhibits better results with less computation time for various drive cycles.
• Dynamically restricting the search space of the SA technique at the supervisory level maximizes the usage of the UC in the fully active HESS arrangement under regenerative braking conditions.
From the above conclusions, the following recommendations are suggested for future researchers in the field of energy management for EVs powered by battery/UC:
• The performance parameters such as battery peak current reduction, system efficiency, lifetime improvement and drive range extension can be investigated for a single EM strategy.
• Meta-heuristic algorithms such as firefly, dragonfly and whale optimization methods can be used for obtaining better battery energy efficiency and optimal sizing of the UC.
• For fully active and semi-active hybrid configurations, a nonlinear controller such as the backstepping approach can be applied to obtain better performance and efficiency under dynamic operating conditions.
• Hybridizing nonlinear controllers such as backstepping with MPC can also be accomplished for HESS-powered EV systems to enhance the performance and efficacy.
ACKNOWLEDGMENTS
This work was carried out at the Solar Energy Research Cell (SERC), School of Electrical Engineering (SELECT), Vellore Institute of Technology (VIT), Vellore. The authors would like to thank the VIT management for their support and for providing the laboratory facility to carry out this work.
CONFLICTS OF INTEREST
Authors declare that there are no conflicts of interest.
DATA AVAILABILITY STATEMENT
Data sharing not applicable -no new data generated.
Optimal Scheduling in General Multi-Queue System by Combining Simulation and Neural Network Techniques
The problem of optimal scheduling in a system with parallel queues and a single server has been extensively studied in queueing theory. However, such systems have mostly been analysed by assuming homogeneous attributes of arrival and service processes, or Markov queueing models were usually assumed in heterogeneous cases. The calculation of the optimal scheduling policy in such a queueing system with switching costs and arbitrary inter-arrival and service time distributions is not a trivial task. In this paper, we propose to combine simulation and neural network techniques to solve this problem. The scheduling in this system is performed by means of a neural network informing the controller at a service completion epoch on a queue index which has to be serviced next. We adapt the simulated annealing algorithm to optimize the weights and the biases of the multi-layer neural network initially trained on some arbitrary heuristic control policy with the aim to minimize the average cost function which in turn can be calculated only via simulation. To verify the quality of the obtained optimal solutions, the optimal scheduling policy was calculated by solving a Markov decision problem formulated for the corresponding Markovian counterpart. The results of numerical analysis show the effectiveness of this approach to find the optimal deterministic control policy for the routing, scheduling or resource allocation in general queueing systems. Moreover, a comparison of the results obtained for different distributions illustrates statistical insensitivity of the optimal scheduling policy to the shape of inter-arrival and service time distributions for the same first moments.
Introduction
Machine learning algorithms have been used over the last ten years in almost all fields where problems associated with data classification, pattern recognition, non-linear regression, etc., have to be solved. The application of such algorithms has also intensified in the field of queueing theory. While the first steps in the successful application of machine learning to evaluate the performance characteristics of simple and complex queueing systems have already been taken, the total number of works on this topic still remains modest. As for reviews, we can only refer to a recent paper by Vishnevsky and Gorbunova [1] which proposes a systematic introduction to the use of machine learning in the study of queueing systems and networks. Before we formulate our specific problem we would like also to make a small contribution to the popularisation of machine learning in the queueing theory by describing briefly the latest works. In Stintzing and Norrman [2], an artificial neural network was used for predicting the number of busy servers in the M/M/s queueing system. The papers of Nii et al. [3] and Sherzer et al. [4] have answered positively the question regarding whether the machines could be useful for solving the problems in general queueing systems. They employed a neural network approach to estimate the mean performance measures of the multi-server queues GI/G/s based on the first two moments of the inter-arrival and service time distributions. A machine learning approach was used in the work of Kyritsis and Deriaz [5] to predict the waiting time in queueing scenarios. The combination of a simulation and machine learning techniques for assessing the performance characteristics was illustrated in Vishnevsky et al. [6] on a queueing system MMAP/PH/M/N with K priority classes. Markovian queues were simulated using artificial neural networks in Sivakami et al. [7]. Neural networks were used also in research by Efrosinin and Stepanova [8] to estimate the optimal threshold policy in a heterogeneous M/M/K queueing system. The combination of the Markov decision problem and neural networks for the heterogeneous queueing model with process sharing was studied by Efrosinin et al. [9]. The performance parameters of the closed queueing network by means of a neural network were evaluated in Gorbunova and Vishnevsky [10]. In addition to the presented results of using neural networks in hypothetical queueing theory models, academic studies in this area with real-world applications have gradually been proposed. For example, the problem regarding the choice of an optimum charging-discharging schedule for electric vehicles with the usage of a neural network is proposed by Aljafari et al. [11]. The main conclusion to be drawn from the previous results obtained via the application of machine learning to models of the queueing theory is that the neural networks cannot be treated as a replacement for classical methods in system performance analysis, but rather as a complement to the capabilities of such an analysis.
The systems with parallel queues and one server are known also as polling systems which have found wide application in various fields such as computer networks, telecommunications systems, control in manufacturing and road traffic. For analytic and numerical results in various types of polling systems with applications to broadband wireless Wi-Fi and Wi-MAX networks, we refer interested readers to the textbook by Vishnevsky and Semenova [12] and the references therein. The same authors in [13] developed their research on polling systems to systems with correlated arrival flows such as MAP, BMAP, and the group Poisson arrivals. In Vishnevskiy et al. [14], it was shown that the results obtained by a neural network are close enough to the results of analytical or simulation calculations for the M/M/1 and MAP/M/1-type polling systems with cyclic polling. Markovian versions of a single-server model with parallel queues have been investigated by a number of authors. The two-queue homogeneous model with equal service rates and holding costs was studied in Horfi and Ross [15], where it was shown that the queues must be serviced exhaustively according to the optimal policy. In research by Liu et al. [16], it was shown that the scheduling policy that routes the server with respect to the LQF (Longest Queue First) policy is optimal when all queue lengths are known and that the cyclic scheduling policy is optimal in cases where the only information available is the previous decisions. The systems with multiple heterogeneous queues in different settings, also known as asymmetric polling systems, have been studied intensively in cases where there are no switching costs by Buyukkoc et al. [17], Cox and Smith [18], where the optimality of the static cµ-rule was proved. This policy schedules a server first to the queue i with a maximum weight c i µ i consisting of the holding cost and service rate. In Koole [19], the problem of optimal control in a two-queue system was analysed by means of the continuous-time Markov decision process and dynamic programming approach. The author found numerically that the optimal policy which minimizes the average cost per unit of time can be quite complex if there are both holding and switching costs. The threshold-based policy for such a queueing system was applied by Avram and Gómez-Corral [20], where the expressions for the long-run expected average cost of holding units and switching actions of the server were given. The queueing system with general service times and set-up costs which have an effect on the instantaneous switch from one queue to another was studied in Duenyas and Van Oyen [21]. The authors proposed a simple heuristic scheduling policy for the system with multiple queues. A rather similar model is described in Matsumoto [22], where the optimal scheduling problem is solved in a system with arbitrary time distributions. Here, instead of switching costs, the corresponding set-up time intervals required for switching are used. The system is controlled by the Learning Vector Quantization (LVQ) network, see Kohonen [23] for details, which classifies the system state by the closest codebook vector of a certain class in terms of the Euclidean metric. The problem with this approach is the large number of parameters associated with the codebook vectors, where it is normally required that several vectors per class must be estimated for a given control policy using computationally expensive recurrent algorithms.
This paper proposes a fairly universal method for solving the problem of optimal dynamic scheduling or allocation in queueing systems of the general type, i.e., where the times between events are arbitrarily distributed, and in queueing systems with correlated inter-arrival and service times. Furthermore, it can provide a performance analysis of complex controlled systems described by multidimensional random processes, for which finding analytical, approximate or heuristic solutions is a difficult task. The main idea of the paper is to use a multi-layer neural network for server scheduling. The parameters of this neural network trained first on some arbitrary control policy are optimized then with the aim to minimize a specified average cost function. Moreover, such a cost function for systems with arbitrary inter-arrival and service time distributions can only be computed via simulation. We consider this approach, which combines neural networks with simulation technique, to be quite universal to obtain an optimal deterministic control policy in complicated queueing systems. The method is exemplified by some version of a single-server system with parallel queues equipped with a controller for scheduling a server. The system under study is assumed to have heterogeneous arrival and service attributes, i.e., unequal arrival and service rates, as well as holding and switching costs. Systems with arbitrary distributions and switching costs have not yet been considered by other authors. It is assumed in our model that the queue currently being served by the server is serviced exhaustively. The next queue to be served by the server is selected according to a dynamic scheduling policy based on the queue state information, i.e., on the number of customers waiting in each of parallel queues. It is expected that the changing of the serviced queue involves the switching costs. The holding of a customer in the system is also linked to the corresponding cost. Clearly, even with some fixed scheduling control policy, calculating any characteristics of the proposed queueing system with arbitrary inter-arrival and service time distributions in explicit form is not a trivial task. It is also difficult to fix the dynamic control policy defining the scheduling in large systems in a standard way, e.g., through a control matrix that would contain the corresponding control action for all possible states of the system. Therefore, in such a case we consider it justified to solve the problem of finding the optimal scheduling policy with the aim to minimize the average cost per unit of time by combining the simulation as a tool to calculate the performance characteristics of the system with a machine learning technique, where the neural network will be responsible for dynamic control. By training a neural network for some initial control policy, we obtain characteristics of the network in the form of a matrix of weights and a vector of biases. The process of solving the optimal scheduling problem is then reduced to a discrete parametric optimization. The parameters of the neural network must be optimized in such a way that this network can guarantee the minimal values of the average cost functional by generating control actions at decision epochs. For this purpose, we have chosen one of the random search methods, such as simulated annealing, see, e.g., in Aarts and Korst [24], Ahmed [25]. 
It is a heuristic method based on a concept of heating and controlled cooling in metallurgy and is normally used for global optimization problems in a large search space without any assumption on the form of the objective function. This algorithm was implemented by Gallo and Capozzi [26] specifically for the probabilistic scheduling problem. The algorithm will be adapted for a non-explicitly defined parametric function with a large number of variables defined on a discrete domain.
To verify the quality of the calculated optimal parameters of the neural network, the values of the average cost functional for the markovian version of the queueing system are compared with the results obtained by solving the Markov decision problem (MDP). The general theory on MDP models is discussed in Puterman [27] and Tijms [28]. The details on application of MDP to controlled queueing systems with heterogeneous servers can be found in Efrosinin [29]. The optimal control policy and the corresponding objective function are calculated in the paper via a policy-iteration algorithm proposed in Howard [30] for an arbitrary finite-state Markov decision process. According to the MDP, the router in our system has to find an optimal control action in the state visited at a decision epoch with the aim to minimize the long-run average cost. Note that for our queueing model under general assumptions the semi-Markov decision problem (SMDP) can be formulated. The SMDP is a more powerful model than the MDP since the time spent by the system in each state before a transition is taken into account by calculating the objective function. The objective function must be calculated here also by means of a simulation. In this case, the reinforcement learning algorithm, e.g., Q-P-Learning, can be applied. The main problem of this approach consists of the fact that many pairs of state and action can remain non-observable for deterministic control policy and as a result the control actions in such states can not be optimized. However, in our opinion, neural networks can also be used to solve this problem which presents a potential task for further research. The SMDP topic is outside the scope of this article but we refer readers to work by Gosavi [31], where one can find a very interesting overview on reinforcement learning and a well-designed classification of simulated-based optimization algorithms.
Summarising our research in this paper, we can highlight the following main contributions: (a) We propose a new controlled single-server system with parallel queues where the router uses a trained multi-layer neural network to perform scheduling control; (b) A simulated annealing method is adapted to optimize the weights and biases of the neural network with the aim of minimizing the average cost function, which can be calculated only via simulation; (c) The quality of the resulting optimal scheduling policy is verified by solving a Markov decision problem for the Markovian analog of the queueing system; (d) We provide a detailed numerical analysis of the optimal scheduling policy and discuss its sensitivity to the shape of the inter-arrival and service time distributions; (e) A distinctive feature of our paper is the presence of algorithms given as pseudocode with detailed descriptions of the relevant steps.
The rest of the paper is organized as follows. Section 2 presents a formal description of the queueing system and the optimization problem. Section 3 describes the Markov decision problem and the policy-iteration algorithm used to calculate the optimal scheduling policy. In Section 4, the event-based simulation procedure for the proposed queueing system is discussed. The neural network architecture, parametrization and training algorithm are summarized in Section 5. Section 6 presents the simulated annealing optimization algorithm. The numerical analysis is shown in Section 7 and concluding remarks are provided in Section 8.
Single-Server System with Parallel Queues
Consider a single-server system with N parallel heterogeneous queues of the type GI/G/1 and a router for scheduling the server across the queues. Heterogeneity here refers to unequal distributions of the inter-arrival and service times of customers in different queues, as well as to unequal holding and switching costs. The queue that is currently attended by the server is served exhaustively. Denote by I = {1, 2, ..., N} the queue index set. The proposed queueing system is shown schematically in Figure 1. Denote by τ_{n,i}, n ≥ 1, the time instants of arrivals to queue i and by ν_i := ν_{n,i} = τ_{n,i} − τ_{n−1,i}, n ≥ 1, the sequence of mutually independent and identically distributed inter-arrival times with a CDF A_i(t), i ∈ I. Further denote by ζ_i := ζ_{n,i}, n ≥ 1, the service time of the nth customer in the ith queue. These random variables are also assumed to be mutually independent and generally distributed with CDF B_i(t), i ∈ I. We assume that the random variables ν_i and ζ_i have at least the first two moments finite, with E[ν_i] = 1/λ_i and E[ζ_i] = 1/μ_i. The squared coefficients of variation are then defined, respectively, as

CV²_{ν_i} = Var[ν_i] / E[ν_i]²,   CV²_{ζ_i} = Var[ζ_i] / E[ζ_i]².

This characteristic will be required for the comparison analysis of the optimal scheduling policy for different types of inter-arrival and service time distributions. From now on it is assumed that the ergodicity condition is fulfilled, i.e., the traffic load satisfies

ρ = Σ_{i=1}^{N} ρ_i < 1,   where ρ_i = λ_i / μ_i.

Let D(t) indicate the index of the queue currently attended by the server at time t, and let Q_i(t) denote the number of customers in the ith queue at time t, where i ∈ I. The states of the system at time t are then given by the multidimensional random process

X(t) = {D(t), Q_1(t), ..., Q_N(t)},  t ≥ 0,     (1)

with a state space E = {x = (d, q_1, ..., q_N) : d ∈ I, q_i ∈ N_0}. Further in this section, the notations d(x) and q_i(x) will be used to identify the corresponding components of the vector state x ∈ E. The cost structure consists of the holding cost c_i per unit of time a customer spends in queue i and the switching cost c_{i,j} incurred when the server is switched from queue i to queue j. It is assumed that the system states X(t) are constantly monitored by the router, which defines the index of the queue to be serviced next after the current queue becomes empty. In the initial state, when the whole system is empty, the server is randomly scheduled to some queue. If the ith queue being served becomes empty — we call such a moment a decision epoch — the router decides, by means of the trained neural network, whether to leave the server at the current queue or to dispatch it to another queue. Routing to an idle queue is also possible. We remind the reader that the server allocated by the router to a certain queue serves it exhaustively, i.e., the served queue can only be changed when it becomes empty. Denote by A = I the action space with elements a ∈ A, where a indicates the index of the queue to be served next after the current queue has been emptied. The subsets A(x) of admissible control actions in the decision states x ∈ Ê = {x ∈ E : q_{d(x)}(x) = 0} coincide with the action space A. In all other states x ∈ E \ Ê the subset A(x) = {0} contains only a fictitious control action 0 which has no influence on the system's behaviour. The router can operate according to some heuristic control policy. One example is the Longest Queue First (LQF) policy, which is dynamic and prescribes at decision epochs to serve next the queue with the largest number of customers. If more than one queue has the same maximal number of customers, the queue index is selected at random. Alternatively, the static cµ-rule, which needs only the information on whether a certain queue is non-empty, can be used for scheduling.
According to this control policy, the queue i with the highest factor c_i μ_i, the product of the holding cost and the service intensity, must be serviced next. In a system with totally symmetric queues the former policy is optimal according to [16]. The latter control policy is optimal due to [17] if there are no switching costs, i.e., c_{i,j} = 0. Otherwise, in the case of positive switching costs and asymmetric or heterogeneous queues, such policies are not optimal with respect to minimization of the average cost per unit of time.
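As an illustration of the two heuristics, the sketch below implements the LQF tie-breaking rule and the static cµ-rule as plain decision functions; the function names, the 0-based queue indices and the NumPy usage are our own illustrative choices and are not taken from the paper.

```python
import numpy as np

def lqf_action(q, rng=np.random.default_rng()):
    """Longest Queue First: serve the queue with the most customers next.

    q : array of current queue lengths.
    Ties are broken uniformly at random, as described in the text.
    """
    longest = np.flatnonzero(q == q.max())
    return int(rng.choice(longest))            # 0-based index of the next queue

def c_mu_action(q, c, mu):
    """Static c-mu rule: among the non-empty queues, serve the one with the
    largest product c_i * mu_i of holding cost and service rate."""
    if np.all(q == 0):
        return 0                               # arbitrary choice when the system is empty
    weights = np.where(q > 0, c * mu, -np.inf) # ignore empty queues
    return int(np.argmax(weights))
```

For instance, with q = (3, 0, 1, 5) the LQF rule returns queue 3 (0-based), whereas the cµ-rule depends only on which queues are non-empty and on the products c_i μ_i.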
The main idea of optimal scheduling in our general model is as follows. We equip the router with a trained neural network which informs it of the index of the next queue to which the server should be routed in order to reach the formulated optimization aims. Obviously, we can only train the neural network on available data sets, i.e., on some heuristic control policy, and then we need to optimize the network parameters, the weights and the biases, to solve the problem of finding the optimal scheduling policy. Under the average cost criterion, the limit of the expected average cost over finite time intervals is minimized over a set of admissible policies. The control policy f : Ê → A(x) is a stationary policy which prescribes the use of a control action f(x) ∈ A(x) whenever the system state at a decision epoch is x ∈ Ê. Decision epochs arise whenever the queue being served becomes empty. For the studied controllable queueing system operating under a control policy f, the average cost per unit of time for the ergodic system is of the form

g^f = lim_{t→∞} (1/t) E^f[ Σ_{i∈I} c_i ∫_0^t Q_i(s) ds + Σ_{i,j∈I} c_{i,j} S_{i,j}(t) ],

where S_{i,j}(t) is the random number of switches from queue i to queue j in the time interval [0, t]. The expectation E^f must be calculated with respect to the control policy f. The policy f* is said to be optimal when, for any admissible policy f,

g^{f*} ≤ g^f.     (4)

Our purpose focuses on a combination of simulation and neural network techniques. To verify the quality of the results obtained by solving the optimization problem (4), we formulate an appropriate Markov decision problem. Then we compute the optimal control policy together with the corresponding average cost g* using a policy iteration algorithm, see, e.g., Howard [30], Puterman [27], Tijms [28], which will be discussed in detail in a subsequent section.
Markov Decision Problem Formulation
Assume that the inter-arrival and service times are exponentially distributed, i.e., ν_i ∼ E(λ_i) and ζ_i ∼ E(μ_i), i ∈ I. Under the Markovian assumption the process (1) is a continuous-time Markov chain with state space E. The MDP associated with this Markov process is represented by the five-tuple {E, A, {A(x) : x ∈ E}, λ_xy(a), c(x, a)}, where the state space E and the action spaces A and A(x) have already been defined in the previous section.
λ_xy(a) denotes the transition rate from state x to state y under control action a, determined by the arrival rates λ_i and the service rate of the queue currently in service, and c(x, a) is the immediate cost incurred in state x ∈ E when action a is selected,

c(x, a) = Σ_{i=1}^{N} c_i q_i(x) + c_{d(x),a}.

Here the first summand denotes the total holding cost of the customers in all parallel queues in state x, which is independent of the control action. Let c(x) = Σ_{i=1}^{N} c_i q_i(x); if c_i = 1, i ∈ I, this is just the number of customers in state x. The second summand is the fixed cost c_{j,a} for switching the server from the current queue j = d(x) to the next queue with index a.
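A minimal sketch of the immediate cost described above is given below; the function signature and the representation of a state as a pair (d, q) are illustrative assumptions, not notation from the paper.

```python
def immediate_cost(x, a, c_hold, c_switch):
    """Immediate cost c(x, a) of choosing action a (next queue, 0-based) in state x.

    x        : tuple (d, q) with d the queue currently served (0-based) and
               q the vector of queue lengths.
    c_hold   : vector of holding costs c_i per customer and unit of time.
    c_switch : matrix of switching costs c_{i,j} (with zero diagonal).
    """
    d, q = x
    holding = sum(ci * qi for ci, qi in zip(c_hold, q))  # sum_i c_i q_i(x)
    switching = c_switch[d][a]                           # cost of moving the server from d to a
    return holding + switching
```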
The optimal control policy f* and the corresponding average cost g^{f*} are obtained as the solution of the system of Bellman optimality equations (7), where B is a dynamic programming operator acting on a value function v : E → R.
Proposition 1. The dynamic programming operator B is defined as
Proof. From Markov decision theory, e.g., [27,28], it is known that for a continuous-time Markov chain the operator B can be defined as Bv(x) = min_a { c(x, a) + Σ_{y ≠ x} λ_xy(a) v(y) }. For the proposed system this equality can obviously be rewritten in the form (8). In this equation, the first term c(x) represents the immediate holding cost of the customers in state x. The second term, with factor λ_i, describes the change in the value function due to new arrivals to the system. The third term, with factor μ_j for q_j(x) > 1, stands for the value function after a service completion in queue j while customers are still waiting there. The last term, with factor μ_j for q_j(x) = 1, also describes a service completion, but one that leads to a state with an empty queue in which a control action must be performed. Hence only the last term appears under the min operator.
Note that the state space of the Markov decision model is countably infinite and the immediate costs c(x, a) are unbounded. The existence of an optimal stationary policy and the convergence of the policy iteration algorithm can be verified for the system under study similarly to Özkan and Kharoufeh [32], where first the convergence of the value iteration algorithm for the equivalent discounted model is proved and then, using the criteria proposed in Sennott [33], this result is extended to the policy iteration algorithm for the average cost criterion.
To solve Equation (8) within the policy iteration algorithm used to calculate the optimal control policy, we convert the multidimensional state space into a one-dimensional one by a mapping Δ : E → N_0. The buffer sizes of the queues obviously must be truncated, i.e., B_i < ∞. The state x = (d, q_1, ..., q_N) is then encoded by a single index s = Δ(x) built from the mixed-radix representation of the queue lengths, where β_{i,j} = ∏_{k=i}^{j} (B_k + 1) with β_{N,N−1} = 1. The notation Δ^{−1}(s) will be used for the inverse mapping. In the one-dimensional case the state transitions can be expressed directly in terms of the index s. The set of states E in the truncated model is finite with cardinality |E| = N β_{1,N}. The policy iteration Algorithm 1 consists of two main steps: policy evaluation and policy improvement. In the first step, for a given initial control policy (for example the LQF policy), a system of linear equations with constant coefficients must be solved. To make the system solvable, the value function v(s) for one of the states can be set to an arbitrary constant, e.g., v(0) = 0 for the first state with d = 1 and q_i = 0; the corresponding relation then follows from the optimality Equation (7), and the remaining equations can be solved numerically. As a solution we obtain the |E| values v(s) and the current value of the average cost g. In the policy improvement step, a control action a that minimizes the test value on the right-hand side of Equation (7) must be evaluated. The algorithm generates a sequence of control policies that converges to the optimal one. Convergence of the algorithm requires that the control actions in two adjacent iterations coincide in each state. To avoid the policy improvement bouncing between equally good control actions in a given state, one can simply keep the previous control action unchanged, when determining the new policy, if the corresponding test function is at least as large as for any other action. As an alternative to this convergence criterion, one can use the values of the average costs, whose variation should be, for example, less than some given small value. Example 1. Consider the queueing system with N = 4 queues. The buffer sizes are equal to B_i = 10, i ∈ I. With these settings the number of states already reaches large values, |E| = 58,564, which illustrates one of the significant restrictions on the application of dynamic programming to this type of control problem. The switching costs can be defined, for example, as c_{i,j} = (j − i + 4) mod 4. The holding costs c_i are for simplicity assumed to be equal. The values of the system parameters λ_i, μ_i, c_i and c_{i,j} are summarized in Table 1 and reflect the heterogeneity of the system, i.e., λ_i = 0.05 i and μ_i = 3.750/i.
(Algorithm 1, policy iteration: starting from an initial policy, the policy evaluation and policy improvement steps are repeated; if the policies of two adjacent iterations coincide the procedure stops, otherwise n ← n + 1 and the evaluation step is repeated.)
These values correspond to the system load ρ = Σ_{i=1}^{N} ρ_i = 0.4, that is, the system is stable. This value is small enough to ensure, on the one hand, that the system is sufficiently loaded so that states appear in which all queues are non-empty and, on the other hand, that the probability of losing an arriving customer is minimized for the given, rather small, buffer sizes. The solution of the large system of optimality equations is carried out numerically. The optimized average cost is g* = 2.5632.
Using Algorithm 1, we calculate the optimal scheduling policy. For some states with a fixed number of customers in the third and fourth queues and a varied number of customers in the first two queues, the control actions are listed in Table 2. The first row of the table contains the number of customers q_2 or q_1 in the second or first queue at the moment a decision is made, i.e., when the first or the second queue is emptied, respectively. The first column contains selected states of the system for fixed levels q_3 and q_4 of the third and fourth queues. As we can see, the optimal scheduling policy has a complex structure with a large number of thresholds, making it difficult to obtain an acceptable heuristic solution explicitly. To better visualise the complexity in the structure of the optimal control policy, the background of the table cells changes from darker to lighter shades of grey as the queue index decreases. The cµ-rule, as expected, is not optimal here: g_cµ = 6.7237, which is almost two and a half times larger than the average cost under the optimal policy. When the values q_1 and q_2 are small, the router schedules the server to the queues with low service rates; in this case the switching costs are low as well. According to the optimal scheduling policy, the tendency to route the server to a queue with a higher service rate and higher switching costs increases as the lengths of the first two queues increase. (A fragment of Table 2: in the states (2, q_1, 0, 5, 5) and (2, q_1, 0, 8, 8) the optimal action is a = 3 for all listed values of q_1, whereas in the states (2, q_1, 0, 9, 9) it changes from a = 3 to a = 1 for large q_1.) Example 2. In this example we increase the arrival rates λ_i as given in Table 3. The other parameters are fixed at the same values as in the previous example. The load factor is now ρ = 0.64, and the corresponding optimized average costs are g* = 3.8201 and g_cµ = 7.0420. The scheduling policy in Table 4 shows that, as the system load increases, the router switches the server to queue 2 or to queue 1, which have higher service rates, at almost all queue lengths q_1 and q_2, respectively.
Event-Based Simulation for General Model
We use an event-based simulation to simulate the proposed queueing system. This technique is suitable for random process evaluation where it is sufficient to have the information about the time instants when changes in states occur. Such changes will be referred to as events. Note that although simulation modelling is extensively used in queueing theory, many papers lack explicitly described algorithms that readers can use for independent research. For more information on simulation methods with applications to single-and multi-server queueing systems, we can recommend Ebert et al. [34] and Franzl [35]. In this regard, it will certainly not be superfluous if we present and discuss here an algorithm for the system simulation which is not difficult to adapt for other similar systems.
In our case, the events are the arrivals to one of N parallel queues and the departures of customers from the queue d currently being served by the server. The present time is selected as a global time reference.
In Figure 2, on the time axis we mark the moments of arrival of new customers and the moments of their service completion in a fixed queue with index d by arrows above and below the axis, respectively. Dotted arrows indicate arrivals of new customers to the other queues. Successive events are denoted by ε_i and the corresponding time moments by t(ε_i). In the proposed queue simulation Algorithm 2 all times are referred to the present time. Suppose that at the present moment there is a new arrival to the queue with index d, which is being serviced by the server, i.e., t(ε_i) = 0. Denote by T_x(ε_i) the holding time of the system in state x up to the occurrence of the event ε_i. According to the time schema, the holding time in the previous state equals the minimum over the remaining inter-arrival time to queue d, the remaining service time T_b(d) generated after the event ε_{i−2} of the previously occurring departure, and the time intervals associated with arrivals of customers to the other queues. The next event is then determined by subtracting the holding time t_i from all pending event time intervals; in this case the current event is a new arrival. In the same way, the holding time t_{i+1} in the next state, up to the event ε_{i+1} of an arrival to some queue other than d, is calculated as the minimum over the updated remaining times, where the sum Σ_{j=i+1} T_x(ε_j) gives the remaining inter-arrival time of the next arrival to queue d. Continuing the process in a similar manner, all holding times of the system in the corresponding states are evaluated. By summing up the times t_i we obtain the total simulation running time simT. The average cost per unit of time is then obtained by dividing the accumulated cost by simT.
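For readers who prefer code to the verbal description, the following sketch implements a simplified event-based simulation in the spirit of Algorithm 2. The function name `qsim`, its signature, the warm-up rule and the treatment of a server idling at an empty queue are our own simplifying assumptions; the paper's Algorithm 2 may differ in these details.

```python
import numpy as np

def qsim(sample_iat, sample_st, schedule, c_hold, c_switch, n_customers=5000, warmup=1000):
    """Event-based simulation sketch of the polling system (hypothetical signature).

    sample_iat[i]() / sample_st[i]() draw an inter-arrival / service time for queue i;
    schedule(d, q) returns the queue to serve next at a decision epoch.
    Returns the average cost per unit of time after the warm-up period.
    """
    N = len(sample_iat)
    q = np.zeros(N, dtype=int)                          # queue lengths
    next_arrival = np.array([sample_iat[i]() for i in range(N)])
    d, next_departure = 0, np.inf                       # served queue; no service in progress yet
    t, cost, arrived = 0.0, 0.0, 0
    collecting, t_start = False, 0.0

    while arrived < n_customers:
        t_next = min(next_arrival.min(), next_departure)   # earliest pending event
        if collecting:
            cost += (t_next - t) * np.dot(c_hold, q)        # holding cost in the current state
        t = t_next
        if next_departure <= next_arrival.min():            # service completion in queue d
            q[d] -= 1
            if q[d] > 0:
                next_departure = t + sample_st[d]()
            else:                                            # decision epoch: queue d emptied
                a = schedule(d, q)
                if collecting and a != d:
                    cost += c_switch[d][a]
                next_departure = t + sample_st[a]() if q[a] > 0 else np.inf
                d = a                                        # server may also wait at an idle queue
        else:                                                # arrival to some queue i
            i = int(np.argmin(next_arrival))
            q[i] += 1
            next_arrival[i] = t + sample_iat[i]()
            arrived += 1
            if arrived == warmup:
                collecting, t_start = True, t                # start collecting statistics
            if i == d and q[d] == 1 and next_departure == np.inf:
                next_departure = t + sample_st[d]()          # server was idling at queue d

    return cost / (t - t_start)
```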
Neural Network Architecture
In our model, we propose to equip the router with a trained neural network. This network determines the index of the queue that the server will serve next, based on the information about the system state at a decision epoch, i.e., when the server finishes service of the current queue. We have chosen a simple architecture consisting of only two layers such that, on the one hand, it has a small number of parameters for the subsequent optimization and, on the other hand, the rate of correct classification of some fixed initial control policy is at least 95%. The proposed neural network has one linear layer, which represents an affine transformation, and a softmax normalization layer, as illustrated in Figure 3. The input consists of N + 1 neurons corresponding to the system state x = (d, q_1, ..., q_N), where q_d(x) = 0. Neuron 0 receives the information on d(x), and the ith neuron for i ∈ I receives the state of the ith queue. When the server finishes service at queue d, the neural network classifies this state into one of N classes, which defines the current control action a ∈ A in state x. The hidden linear layer consists of N neurons y = (y_1, ..., y_N) connected to the input neurons via the affine transformation y_i = w_i x + b_i, i ∈ I, where the rows w_i = (w_{i,0}, w_{i,1}, ..., w_{i,N}) of the weight matrix W and the bias vector B = (b_1, ..., b_N) must be estimated by means of the training set. The softmax layer z = softmax(y) is the final layer of the multiclass classification. It outputs the vector of N estimated probabilities, where the ith entry is the likelihood that x belongs to class i. The vector y is normalized by the transformation z_i = exp(y_i) / Σ_{j=1}^{N} exp(y_j). The class number is then defined as â = arg max_i z_i. Hence, the output z is a mapping of the form z = ϕ(x, θ), where θ ∈ R^{N(N+2)} is the parameter vector of the neural network, which includes all entries of the weight matrix W ∈ R^{N×(N+1)} and of the bias vector B ∈ R^N, i.e., θ = (w_1, w_2, ..., w_N, B).
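The forward pass of this two-layer classifier can be written in a few lines; the sketch below (with an illustrative random initialization) follows the affine-plus-softmax structure described above, while the numerical-stability shift is a standard implementation detail not mentioned in the paper.

```python
import numpy as np

def forward(x, W, B):
    """Forward pass of the two-layer scheduling network.

    x : input vector (d(x), q_1, ..., q_N) of length N + 1,
    W : weight matrix of shape (N, N + 1), rows w_i,
    B : bias vector of length N.
    Returns the class probabilities z and the selected action (queue index).
    """
    y = W @ x + B                        # affine (linear) layer: y_i = w_i . x + b_i
    y = y - y.max()                      # shift for numerical stability (softmax unchanged)
    z = np.exp(y) / np.exp(y).sum()      # softmax normalization
    return z, int(np.argmax(z))          # a_hat = arg max_i z_i

# Example: N = 4 queues, server has just emptied queue 2, queue lengths (3, 0, 1, 5)
rng = np.random.default_rng(0)
W, B = rng.normal(size=(4, 5)), rng.normal(size=4)
z, action = forward(np.array([2.0, 3.0, 0.0, 1.0, 5.0]), W, B)
```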
The values of the parameter vector θ of the initial control policy, which will be used in the next section as a starting solution for the optimization procedure, are obtained by training the neural network on some known heuristic control policy. In our case this policy is the LQF.
In the training phase, the optimization problem (12) must be solved given the training sample of state-action pairs {(x^(k), a^(k))}: the accumulated non-negative loss is minimized over θ, where the loss of the kth element of the sample is 0 only if it is classified correctly, i.e., â = a^(k). The problem (12) can be solved in the usual way by the stochastic gradient descent method, where a single learning rate η is maintained to update all parameters; the corresponding iterative expression is θ^(n) = θ^(n−1) − η ∇_θ L(θ^(n−1)), where ∇_θ is the nabla operator defining the gradient of the loss function with respect to the parameter vector θ. In our calculations we use the adaptive moment estimation algorithm (ADAM) to solve problem (12). It iteratively updates the parameters of the neural network based on the training data. ADAM calculates individual adaptive learning rates for the elements of θ by evaluating first- and second-moment estimates of the gradient. The method is simple to implement, computationally efficient, requires little memory and is invariant to diagonal rescaling of the gradients. Further details regarding the ADAM algorithm can be found in Kingma and Ba [36]. Although the ADAM algorithm can be found in various sources, we also summarize it in this article: the main steps required for iteratively updating the parameter vector θ are given in Algorithm 3. The parameters of Algorithm 3 are fixed to η = 0.001, β_1 = 0.9, β_2 = 0.999, ε = 10^{−8} and δ = 0.001. The classification accuracy of the proposed neural network trained on the LQF policy is over 97%. The test phase of the trained network was conducted on system states with queue lengths of up to 100 customers per queue. Thus, this starting network can be used to generate the control actions of the initial control policy for the subsequent optimization of the network parameters.
(Algorithm 3, ADAM: at step n, calculate the gradient; update the biased first and second moment estimates; compute the bias-corrected first and second moments; update the parameter vector; if |θ^(n) − θ^(n−1)| < δ, return θ^(n).)
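A compact rendering of one ADAM iteration, using the hyper-parameter values quoted above, might look as follows; the function decomposition is an illustrative choice rather than the paper's exact Algorithm 3.

```python
import numpy as np

def adam_update(theta, grad, m, v, n, eta=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One ADAM iteration (step n >= 1).

    m, v are the running (biased) first- and second-moment estimates of the gradient.
    Returns the updated parameter vector and moment estimates.
    """
    m = beta1 * m + (1 - beta1) * grad             # biased first moment
    v = beta2 * v + (1 - beta2) * grad ** 2        # biased second moment
    m_hat = m / (1 - beta1 ** n)                   # bias-corrected first moment
    v_hat = v / (1 - beta2 ** n)                   # bias-corrected second moment
    theta = theta - eta * m_hat / (np.sqrt(v_hat) + eps)   # parameter update
    return theta, m, v
```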
Optimization of the Neural-Network-Based Scheduling Policy
Denote by θ the known parameter vector of the trained neural network as defined in (11). The function g(θ) denotes the average cost of the queueing system in which the router chooses actions generated by the trained neural network with parameter vector θ. We further adapt the simulated annealing method described in Algorithm 4 for the discrete stochastic optimization of the average cost function with the multidimensional parameter vector θ. This algorithm is quite straightforward. It needs some starting solution, and in each iteration it evaluates the objective function for a randomly selected neighbour of the current parameter values. If the neighbour turns out to be better than the current solution with respect to the objective function, the algorithm replaces the current solution with the new one. If the neighbour value is worse, the algorithm keeps the current solution with high probability and accepts the new value with a specified low probability. Simulated annealing requires a finite discrete space for the parameters of the optimized function. It is assumed that all weights and biases of the neural network summarized in the vector θ take values in the interval [θ_min, θ_max] with lower bound θ_min and upper bound θ_max. Moreover, this interval is quantized in such a way that θ_i, i = 1, ..., N(N + 2), takes only the discrete values θ_min + kΔ, k = 0, 1, ..., Q, where Q = (θ_max − θ_min)/Δ is the quantization level. Note that the domains for the elements of the parameter vector θ can be specified separately, and the values obtained by training the neural network on the optimal policy of the Markov model are suitable for determining the possible minimum and maximum bounds. In this case it is possible to achieve faster convergence of Algorithm 4 to the optimal value.
(Algorithm 4, final steps: if the perturbed parameter vector is accepted, set g* ← ḡ(θ^(n)) and θ* ← θ^(n); otherwise set θ^(n) ← θ^(n−1) and increase the sample size m ← m + 1; repeat until the stopping criterion is met; end procedure.) Since the average cost function g cannot be calculated analytically, a simulation technique is used for this purpose. As shown in Algorithm 4, at each iteration, at the step where the current solution can be accepted with a given probability, we need to calculate the difference between the objective functions. Because this function can only be calculated numerically, it is necessary to check at each iteration of the algorithm whether this difference is statistically significant. The algorithm is therefore modified so that a two-sample t-test is used to compare the expected values of two normally distributed samples with unknown but equal variances. Denote by θ_1 and θ_2, respectively, the current and the modified parameter vector, and by ḡ(θ_1) and ḡ(θ_2) the two corresponding first empirical moments of the objective function. According to the t-test, the null hypothesis, which states that for the modified vector the average cost is smaller than for the previous solution, is rejected if the statistic S_{g(θ_1),g(θ_2)} exceeds the q-quantile t_{m;q} of the t-distribution, where the statistic (16) is the pooled two-sample t-statistic formed with the empirical variances V^(m)_{g(θ_1)} and V^(m)_{g(θ_2)}. Below, we briefly describe the main steps of Algorithm 4. At the initialization step of the algorithm, the neural network is trained on the LQF control policy. The parameter vector is then equal to the initial vector θ^(0) to be optimized. The simulation Algorithm 2 is then used to calculate the initial sample {g^(k)(θ^(0))}_{k=1}^{m}, with g^(k)(θ^(0)) = QSIM(...), of the average cost function for the given initial parameter vector θ^(0), together with the corresponding first empirical moment ḡ(θ^(0)). These values are set as the current solution g* and θ* of the optimization problem (13). At the perturbation step, a randomly chosen element of the previous parameter vector θ^(n−1) is randomly perturbed on the specified discrete set, and the sample of average costs is calculated together with the first empirical moment ḡ(θ^(n)). At the acceptance step, the new policy θ^(n) can be accepted as the current solution with a probability p defined through the temperature T(n) at the nth iteration: a worse solution is accepted with a Metropolis-type probability that decreases with the cost difference and with decreasing temperature. If the new policy θ^(n) is accepted, it is stored together with the corresponding average cost ḡ(θ^(n)) as the current solution. Otherwise, the last change in the parameter vector is reversed, i.e., θ^(n) = θ^(n−1), the sample size m used for calculating the first moments is updated, and the perturbation step is repeated. For termination of the algorithm the stopping criterion T(n) < τ or n ≥ ν is used. We note that the classical simulated annealing method generates for some function g(θ) a sample θ^(n) which, for constant temperature T(n) = T, can be interpreted as a realization of a homogeneous Markov chain {Θ_n, n ∈ N_0} with transition probabilities (17), where U_n is a uniformly distributed random variable on the interval [0, 1]. It is easy to show that the modified transition probabilities, where the objective function is calculated numerically, converge to the transition probabilities (17), which in turn guarantees convergence to an optimal solution. Proposition 2. The acceptance probability p(n) converges, as n → ∞, to the acceptance probability of the classical simulated annealing algorithm. Proof. The probability P[U_n ≤ X] of accepting a move with a [0, 1]-valued acceptance threshold X independent of U_n can obviously be rewritten as the expectation E[X]. The required relation then holds by the strong law of large numbers, using the fact that for n → ∞ the sample size m → ∞ and hence the empirical average costs converge to their expected values.
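To make the interplay of perturbation, simulation-based cost estimation and statistical acceptance concrete, the sketch below outlines one possible implementation of the annealing loop. The cooling schedule, the q-quantile, the fixed sample size, the use of SciPy's pooled two-sample t-test and the Metropolis acceptance form are assumptions made for illustration and only approximate Algorithm 4.

```python
import numpy as np
from scipy import stats

def anneal(theta0, simulate_cost, n_max=200, m=20, T=lambda n: 0.2 / np.log(n + 1),
           theta_min=-6.0, theta_max=6.0, delta=0.1, q=0.95, rng=None):
    """Simulated annealing over the quantized network parameters (sketch).

    simulate_cost(theta, m) must return a sample of m average-cost estimates obtained
    from the queue simulation for the policy generated by parameter vector theta.
    """
    rng = rng or np.random.default_rng()
    theta, g = theta0.copy(), simulate_cost(theta0, m)
    best_theta, best_g = theta.copy(), g.mean()
    for n in range(1, n_max + 1):
        cand = theta.copy()
        k = rng.integers(len(cand))                              # perturb one random component
        cand[k] = np.clip(cand[k] + rng.choice([-delta, delta]), theta_min, theta_max)
        g_cand = simulate_cost(cand, m)
        diff = g_cand.mean() - g.mean()
        # pooled two-sample t-test: is the candidate significantly better (smaller cost)?
        t_stat, _ = stats.ttest_ind(g, g_cand, equal_var=True)
        significantly_better = t_stat > stats.t.ppf(q, df=2 * m - 2)
        if significantly_better or rng.random() < np.exp(-max(diff, 0.0) / T(n)):
            theta, g = cand, g_cand                              # accept the perturbed parameters
            if g.mean() < best_g:
                best_theta, best_g = theta.copy(), g.mean()
    return best_theta, best_g
```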
Numerical Analysis
Consider the queueing system with N = 4. We first analyse a Markov model where the parallel queues are of the type M/M/1 with ν_i ∼ E(λ_i) and ζ_i ∼ E(μ_i), i ∈ I, and the squared coefficients of variation CV²_{ν_i} = CV²_{ζ_i} = 1. The values of the system parameters λ_i and μ_i are fixed as in Examples 1 and 2, which we will refer to as Cases 1 and 2. We compare the optimization results obtained by combining the simulation, the neural network and the simulated annealing algorithm with the results evaluated by the policy iteration algorithm. In Cases 1 and 2, the weights and biases of the neural network trained on the optimal scheduling policy calculated by the PIA are denoted by W_PIA and B_PIA. On the basis of these values, we can specify in the simulated annealing Algorithm 4 the domain, or solution space, for each element of the vector θ. For simplicity, in our experiments we set common boundaries for all elements, θ_min = −6 and θ_max = 6. The length of the increment Δ = 0.1 implies the quantization level Q = 120. Next, we set η = 6, ν = 200 and T(n) = 0.2/log(n). As the initial vector θ^(0) we take the parameter vector obtained by training the neural network on the LQF policy. For the initial control policy one could also choose the policy (W_PIA, B_PIA) obtained by Algorithm 1. However, we would like to check the convergence of the algorithm when the initial solution is not the best one, since in the general case one usually chooses either some heuristic policy or an arbitrary one. The empirical average cost ḡ(θ^(n)) at each iteration step is calculated from a sample of size m ≥ 20. The accumulation of sample data in the QSIM Algorithm 2 starts after 1000 customers have entered the system and is completed after 5000 customers have entered the system. We see that the elements of the matrices W_PIA and W_SA are different, but they are markedly similar in terms of the elements with dominant values. The optimization process of the scheduling policy is illustrated in Figure 4. In addition to the values of the average cost function obtained at each iteration step of the simulated annealing algorithm, the figures show horizontal dotted and dash-dotted lines at the levels of the average costs g_LQF = 9.7093 and g_cµ = 4.1984 in the panel labelled (a) and g_LQF = 11.1740 and g_cµ = 5.2546 in the panel labelled (b) for the LQF and cµ heuristic policies. As expected, the non-optimal LQF control policy implies a far too high average cost. The results look much better for the cµ policy, but the presence of switching costs still significantly worsens its performance. The red horizontal line indicates the average cost g_PIA = 2.5632 and g_PIA = 3.5500 obtained by solving the Markov decision problem using the policy iteration Algorithm 1. We can observe that these values are quite close to those obtained by the random search. Some small difference may be due, firstly, to the fact that simulation is used for the calculations and the results exhibit a certain scattering and, secondly, to the influence of boundary states in the Markov model, where a buffer size truncation has been used. Testing the hypothesis for the difference between the optimal average costs g* and g_PIA, at least for our model, showed the values to be statistically equivalent. In the figures, we have also marked with triangles those iteration steps with an accepted policy (AP), where the perturbed parameter vector has been accepted. The numbers of accepted points in Cases 1 and 2 are equal to 98 and 110, respectively.
From the above results in the case of exponential time distributions we can make the following observations. If the parameter vector θ^(0) with elements W_PIA and B_PIA is used for the initial scheduling policy, one can expect faster convergence of the simulated annealing algorithm to the optimal solution, which was confirmed numerically. If an optimal policy for the controlled Markov process is not available, e.g., when the number of queues is too large, it is reasonable to use the static cµ-rule as the initial policy.
The SA algorithm converges to the values g* = 1.6500 and g* = 2.0326 for Cases 1 and 2, respectively, with the corresponding optimal policy parameters for Case 1 and Case 2. The average costs for the heuristic policies take the values g_LQF = 3.7333, g_cµ = 2.8000, g_PIA = 1.6500 and g_LQF = 5.0133, g_cµ = 3.9866, g_PIA = 2.7373. It is observed that the optimal policy obtained by the SA algorithm is quite close to the one obtained by the PIA. Nevertheless, from experiment to experiment certain deviations in the value of the average cost may appear. Therefore, it is of interest to check whether such differences are statistically significant.
Next, we analyse how sensitive the optimal policy, obtained in the exponential case by the SA algorithm, is to the shape of the arrival and service time distributions. The following distributions will be used to calculate the optimal control policy in the non-exponential case: the gamma G(α, β), log-normal LN(µ, σ) and Pareto PR(α, k) distributions, where the last two belong to the class of heavy-tailed distributions. The parameters of these distributions are chosen so that their first and second moments coincide; moreover, the first moments are the same as for the exponential distributions. The parameters are expressed as functions of the corresponding sample moments, as in the method of moments used for parameter estimation. In the following experiments, the first moments of the inter-arrival and service times are fixed at the values of Case 2, and the squared coefficient of variation is varied as CV²_{ν_i} = CV²_{ζ_i} = 0.5 and CV²_{ν_i} = CV²_{ζ_i} = 20. Denote by {Z^(k)}_{k=1}^{m} a sample of a random variable Z distributed according to one of the proposed distributions, with the first two sample moments Z̄ and Z̄₂ and the squared empirical coefficient of variation CV²_Z = Z̄₂/Z̄² − 1. For the gamma distribution Z ∼ G(α, β), the parameters α > 0 and β > 0 are obtained from Z̄ and CV²_Z; for the log-normal distribution Z ∼ LN(µ, σ), the parameters µ ∈ R and σ > 0 are calculated analogously, and the same holds for the parameters k > 0 and α > 0 of the Pareto distribution Z ∼ PR(k, α) (a code sketch of this moment matching is given below). The parameters of the proposed probability distributions are listed in Tables 5 and 6 for the inter-arrival and the service time distributions, respectively. The sensitivity of the optimal control policy to the shape of the distributions is tested by means of a two-sided t-test for samples with unknown but equal variances. Let g_exp and g_opt be the samples of the average cost values obtained for the optimal control policy in the case of exponentially distributed times and for the system with the proposed inter-arrival and service time distributions, respectively. These samples of size m are associated with the normally distributed random variables Z_exp ∼ N(µ_{g_exp}, σ_{g_exp}) and Z_opt ∼ N(µ_{g_opt}, σ_{g_opt}), where µ_{g_exp}, µ_{g_opt} ∈ R and σ_{g_exp} = σ_{g_opt} > 0. The null hypothesis of equal means is rejected if the statistic S_{g_opt,g_exp}, calculated by (16), exceeds the corresponding quantile of the t-distribution. The results of the tests in the form of p-values, together with the values of the average costs ḡ_exp and ḡ_opt and their 95% confidence intervals, are summarized in Tables 7 and 8 for the systems with different inter-arrival and service time distributions with smaller and larger levels of dispersion around the mean, i.e., for CV²_{ν_i} = CV²_{ζ_i} = 0.5 in Table 7 and CV²_{ν_i} = CV²_{ζ_i} = 20 in Table 8. Each table cell contains two rows with the values of the average costs ḡ_exp and ḡ_opt together with their confidence bounds, and a third row with the p-value. From the numerical examples it is observed that the shape of the distributions, expressed through the coefficient of variation, has a strong influence on the values of the average costs ḡ_exp and ḡ_opt. In almost all cases the average cost increases significantly when the coefficient of variation increases; only in the case of the Pareto distribution for the inter-arrival and service times is the change in values not significant. However, an examination of the entries in the last two tables reveals that in all experiments the p-value exceeds the significance level of α = 0.05.
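Since the explicit parameter formulas are not reproduced above, the following sketch shows one standard way to match the first two moments for the three distributions; the chosen parametrizations (shape-scale gamma, log-scale lognormal, scale-index Pareto) are assumptions that may differ from the paper's notation.

```python
import numpy as np

def gamma_from_moments(mean, cv2):
    """Gamma with shape k and scale theta: k = 1 / CV^2, theta = mean * CV^2."""
    return 1.0 / cv2, mean * cv2

def lognormal_from_moments(mean, cv2):
    """Lognormal(mu, sigma): sigma^2 = ln(1 + CV^2), mu = ln(mean) - sigma^2 / 2."""
    sigma2 = np.log(1.0 + cv2)
    return np.log(mean) - sigma2 / 2.0, np.sqrt(sigma2)

def pareto_from_moments(mean, cv2):
    """Pareto with scale k and tail index alpha (PDF alpha k^alpha / x^(alpha+1), x >= k):
    alpha = 1 + sqrt(1 + 1/CV^2), k = mean (alpha - 1) / alpha."""
    alpha = 1.0 + np.sqrt(1.0 + 1.0 / cv2)
    return mean * (alpha - 1.0) / alpha, alpha
```

For example, gamma_from_moments(1.0, 0.5) gives shape 2 and scale 0.5, a distribution with mean 1 and squared coefficient of variation 0.5.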
Furthermore, it is worth noting that in most cases this excess is quite large. In this regard, the statistical test fails to reject the null hypothesis at the given significance level; in other words, the average cost values are statistically equal and the corresponding optimal control policies are equivalent. Therefore, at least within the framework of the experiments conducted, we can state that the optimal scheduling policy is insensitive to the shape of the inter-arrival and service time distributions given that the first moments are equal. For practical purposes, in general queueing systems one can either apply the proposed optimization method or use the control policy optimized for the equivalent exponential model as a suboptimal scheduling policy.
Conclusions
In this paper, we combined the queue simulation technique, a neural network and simulated annealing optimization to calculate the optimal scheduling policy and the optimized average cost function in a general single-server queueing system with multiple parallel queues. The proposed combination of tools is sufficiently versatile to solve discrete optimization problems that occur during resource allocation in complex queueing systems and networks. The numerical results demonstrate the effectiveness of the proposed approach: the obtained optimal scheduling policy outperforms the best available heuristic policy, the cµ-rule, by more than 45% on average. Nevertheless, a couple of important points must be stressed when using the proposed method. In simulated annealing, the choice of the initial control policy affects the speed of convergence to the optimal solution. Furthermore, a finite domain must be defined for the solution. If the dimensionality of the state space allows, the initial control policy and the corresponding finite solution space can be obtained by the policy iteration algorithm implemented for the Markov model. The obtained optimal solution appears to be statistically insensitive to the form of the inter-arrival and service time distributions when the first two moments are the same. Moreover, the optimal policy in the exponential case can be treated as a suboptimal policy, and the corresponding trained neural network can be used by routers in queueing systems with arbitrary distributions. In terms of future research, we see potential in developing and applying this method to other complex controlled queueing systems where the search for optimal routing, scheduling and resource allocation policies is required. The possibility of combining reinforcement learning algorithms and neural networks to solve optimization problems in general controlled queueing models could also be considered as a further line of research.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The authors can be contacted to obtain data used in the study.
Jumping to Conclusion? A Lévy Flight Model of Decision Making
The diffusion model is one of the most prominent response time models in cognitive psychology. The model describes evidence accumulation as a stochastic process that runs between two boundaries until a threshold is hit, and a decision is made. The model assumes that information accumulation follows a Wiener diffusion process with normally distributed noise. However, the model’s assumption of Gaussian noise might not be the optimal description of decision making. We argue that Lévy flights, incorporating more heavy-tailed, non-Gaussian noise, might provide a more accurate description of actual decision processes. In contrast to diffusion processes, Lévy flights are characterized by larger jumps in the decision process. To further examine this proposal, we compare the fit of the basic diffusion model and the full diffusion model (including inter-trial variability of starting-point, drift rate and non-decisional processes) to the fit of a simple and a complex version of a Lévy flight model. In the latter model, the heavy-tailedness of noise distributions was estimated by an additional free stability parameter alpha. Participants completed 500 trials of a color discrimination task and 400 trials of a lexical decision task. Results indicate that a complex Lévy flight model – including inter-trial variability parameters and alpha – shows the best fit in both tasks. Importantly, alpha-values correlated across tasks, indicating a trait-like nature of this parameter.
Introduction
Speeded response time tasks are one of the most frequently used tasks in cognitive psychology. The diffusion model provides a valuable method to analyze response time data and its advantages have become increasingly clear during the last decades (Wagenmakers, 2009). The model describes evidence accumulation as a Wiener diffusion process that runs in a corridor between two thresholds, representing two response alternatives. In addition to a systematic drift in the decision process, normally distributed noise is assumed (Ratcliff, 1978). Although the diffusion model shows a reasonably good fit to data from a wide variety of cognitive tasks and its validity has been demonstrated across different paradigms (Ratcliff, 2002; Ratcliff & Rouder, 1998; Ratcliff, Thapar, & McKoon, 2001, 2003), recent studies suggest that models with more heavy-tailed noise distributions might be superior to classical diffusion models (Voss, Lerche, Mertens, & Voss, 2019). Stochastic processes assuming heavy-tailed noise are called Lévy flights. In these Lévy flights, the probability of extreme events is strongly increased, which results in jumps in evidence accumulation. Lévy flight models have been applied to a variety of contexts in different fields of science, including animal foraging (e.g., Reynolds, 2012), economic processes (e.g., Mantegna, 1991), as well as human perception and cognition (Liberati et al., 2017; Montez, Thompson, & Kello, 2015; Rhodes & Turvey, 2007).
The present study aims at providing further evidence for the applicability of Lévy flights to human decision making. Similar to the approach of Voss et al. (2019), we compare different evidence accumulation models with different noise distributions for the decision process. However, in contrast to the study of Voss et al. (2019), the present study is based on data with notably larger trial numbers per participant, and, importantly, we now employ experimental paradigms that have often been used in the field of diffusion modeling. Furthermore, we now apply a fully Bayesian parameter estimation method. The Bayesian information criterion (BIC), which penalizes model complexity, is computed to allow for a rigorous model comparison. Additionally, we address the question of whether there is psychologically meaningful inter-individual variance in the stability parameter alpha, which maps the heavy-tailedness of the noise distribution.
The Diffusion Model
Speeded response time tasks are a common type of paradigm in cognitive psychology. Typically, in two-alternative forced choice (2AFC) tasks, either mean response time or accuracy of responses is used as a measure of performance. However, such separate analyses entail the problem that a common metric for performance is missing, which is especially problematic when it comes to speed-accuracy trade-offs between experimental conditions. In addition, these traditional analyses utilize only a small amount of the available information, since information from many trials of an experimental condition is summarized by one single number, such as the mean response time.
The diffusion model (Laming, 1968;Ratcliff, 1978) addresses these problems. Due to its advantages over traditional analytic strategies for RT data, the model has remarkably grown in popularity over the last decades (Voss, Nagler, & Lerche, 2013). The model provides a theoretical account for the composition of response time distributions in binary decision making, considering location and shape of response time distributions for correct responses and errors and accuracy of responses. According to the model, during a binary decision, information is accumulated continuously, and this evidence accumulation is described by a Wiener diffusion process. This process comprises a systematic component, the so-called drift rate, and Gaussian noise. Whereas the drift rate determines average speed and direction of information accumulation, the random noise is responsible for variance of response times and response outcomes when the same stimulus is processed repeatedly by the same person. As soon as the diffusion process hits an upper or lower threshold, a decision for one or the other response is made.
Parameters of the Diffusion Model
In a diffusion model analysis, several parameters are estimated from the empirical response time distributions, and these parameters are associated with different cognitive processes. Distinct psychological interpretations have been assigned to all model parameters. The basic diffusion model, as described by Ratcliff (1978), comprises four parameters: drift rate v, threshold separation a, starting point z and duration of non-decisional processes t 0 .
The drift rate v reflects the average slope of the diffusion process. It depends both on the speed of a participant's information processing and on the task difficulty. Accordingly, drift rates closer to zero indicate slower processing of information or more difficult tasks (Schmiedek, Oberauer, Wilhelm, Süß, & Wittmann, 2007; Voss, Rothermund, & Voss, 2004). Threshold separation a describes the amount of information that is needed to draw a conclusion. High values of this parameter indicate a conservative decisional style with slow responses and high accuracy, whereas low values represent a liberal style with fast responses and higher error rates (Ratcliff, Thapar, Gomez, & McKoon, 2004). The starting point z of the information accumulation is located between the two thresholds. In case of an a priori bias, it can be located closer to the threshold corresponding to the preferred response (Voss, Rothermund, & Brandtstadter, 2008; Voss et al., 2004). Finally, the duration of extra-decisional processes t0, such as encoding and motoric response processes, is added to the decision times determined by the diffusion process (Ratcliff, Spieler, & McKoon, 2000). Note that some researchers question the validity of the psychological interpretation of parameters in evidence accumulation modeling (Jones & Dzhafarov, 2014), while others argue in favor of it (e.g., Heathcote, Wagenmakers, & Brown, 2014).
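To make the role of the four parameters concrete, here is a minimal Euler-Maruyama simulation of a single diffusion-model trial; the step size, noise scaling and response coding are illustrative choices, not specifications from the original papers.

```python
import numpy as np

def diffusion_trial(v, a, z, t0, dt=0.001, sigma=1.0, rng=np.random.default_rng()):
    """Simulate one trial of the basic diffusion model.

    v  : drift rate, a : threshold separation, z : starting point (0 < z < a),
    t0 : non-decision time added to the decision time.
    Returns (response, reaction time); response is 1 for the upper, 0 for the lower threshold.
    """
    x, t = z, 0.0
    while 0.0 < x < a:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()  # Gaussian increment
        t += dt
    return (1 if x >= a else 0), t + t0
```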
In the full version of the diffusion model (Ratcliff & Rouder, 1998;Ratcliff & Tuerlinckx, 2002) that is often used in psychological applications, additional inter-trial variabilities are estimated for drift, starting point, and nondecision time. Note, however, that parameter estimation is often more accurate when more parsimonious models are used, suggesting that these inter-trial variability parameters might lead to overfitting (Lerche & Voss, 2016).
The Lévy Flight Model
Recent studies suggest that the noise in evidence accumulation might be better described by heavy-tailed distributions. Heavy-tailed distributions like the Cauchy distribution or the Lévy distribution are characterized by a high likelihood of extreme events compared to a normal distribution. Lévy flights have been applied to a variety of contexts. For example, they have proven useful to model animal foraging behavior. The Lévy flight foraging hypothesis states that in certain environments (truncated) Lévy flights optimize random searches. According to this hypothesis, biological organisms have evolved to exploit Lévy flights for their wandering movements during foraging (Viswanathan, Raposo, & da Luz, 2008). For example, Reynolds (2012) reports evidence for Lévy flights in the fishy-scented, olfactory-mediated prey detection of the wandering albatross regarding the length of flights between landings. The author concludes that natural selection led to Lévy flight searching as this is the optimal pattern when prey is sparsely and randomly distributed, whereas Brownian motion (i.e., a process with Gaussian noise) is only effective when prey is abundant. Similar search patterns have been observed in other kinds of species, including several open-ocean predatory fish (Humphries et al., 2010). Lévy flights are also used in other scientific domains, for example to describe economic processes. Here, it has been shown that price indices in a stock exchange have statistical properties that are compatible with a Lévy random walk (Mantegna, 1991).
Figure 1. Basic version of the Lévy flight model. An information accumulation process starts at a starting point z and runs over time with the mean slope v until it hits an upper (a) or lower (0) threshold. Because of random noise, the process durations and outcomes vary from trial to trial. Outside the thresholds, decision-time distributions are shown. Due to a heavy-tailed noise distribution, sudden large jumps can be observed in the information accumulation process.
In the field of cognitive research, there are only few studies that provide evidence of Lévy flight processes in human perception and cognition. Montez et al. (2015) applied Lévy flights to searching and clustering in semantic memory. In a first experiment, Rhodes and Turvey (2007) had participants recall as many animal names as possible within 20 minutes. Inter-response intervals were recorded. In a second experiment by Montez et al. (2015), other participants had to arrange magnets with animal names, taken from the previous experiment, on a whiteboard with spatial distances representing the similarity of the species. Inter-response intervals from the first experiment correlated with spatial distances from the second experiment and distributions of both variables approximately followed predictions from Lévy flights (Montez et al., 2015). Lévy flights have also been observed in the field of human perception. Liberati et al. (2017) conducted an eye-tracking study in which they showed typically developed children and children with autism spectrum disorder images of an adult gazing toward one of two objects. Scan-paths of gaze position of both groups were characterized by a probability distribution geometrically equivalent to Lévy flights. Voss et al. (2019) applied a Lévy flight information accumulation model to analyze data from decision making based on a number-letter classification task. In a simple single-stimulus task, participants had to classify presented stimuli as numbers vs. letters, whereas in a more difficult multiple-stimulus task, participants estimated whether there were more letters or more numbers in a set of simultaneously presented stimuli. Both tasks were administered under speed and accuracy instructions. A model with an additional parameter that indicated the heaviness of the tails of the noise distribution (corresponding to Model 2 in the present study) fit the data better than a model with Gaussian noise. Moreover, larger jumps in the decision process were observed in the single-stimulus condition compared to the more complex multi-stimulus condition and under speed instructions compared to accuracy instructions.
Thus, Voss et al. (2019) provided first evidence that a Lévy flight model might be applicable to human decision making. In contrast to the diffusion model, the Lévy flight model allows for jumps in the decision process (see Figure 1). More heavy-tailed random noise in an evidence accumulation model might mirror cognitive processes in binary decision tasks even better than normally distributed noise.
Distributions such as the normal distribution, the Cauchy distribution or the Lévy distribution are special cases of a class of distributions called Lévy alpha-stable distributions. The heaviness of the tails of the distribution is described by the stability parameter α ∈ (0, 2]. The normal distribution is indicated by a value of α = 2 and the more heavy-tailed Cauchy distribution by α = 1. In the present paper, we compare four different models (see Footnote 2): Model 1 (standard diffusion model; α = 2) assumes the noise to be normally distributed, whereas in Model 2 alpha is not fixed but estimated as an additional free parameter. Besides the fixed or free stability parameter α, the two models comprise the free parameters v0 and v1 (drift rates for the two stimulus types), a (threshold separation), z (starting point) and t0 (non-decision time), as described above. Inter-trial variabilities were fixed to zero. Models 3 and 4 are full versions of the described models that additionally comprise sv (inter-trial variability of drift), sz (inter-trial variability of starting point) and st (inter-trial variability of non-decision time).
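The earlier diffusion sketch generalizes to the Lévy flight models by replacing the Gaussian increments with draws from a symmetric alpha-stable distribution, as sketched below; the dt**(1/alpha) scaling and the use of scipy.stats.levy_stable are our own illustrative assumptions rather than the exact implementation used by the authors.

```python
from scipy.stats import levy_stable

def levy_flight_trial(v, a, z, t0, alpha=1.5, dt=0.001, scale=1.0):
    """Single trial of an accumulator with symmetric alpha-stable noise (sketch).

    alpha in (0, 2] controls the heaviness of the tails; alpha = 2 corresponds to
    Gaussian noise (the diffusion model), alpha = 1 to Cauchy noise. The dt**(1/alpha)
    scaling of the increments is the stable-law analogue of sqrt(dt) for Brownian motion.
    """
    x, t = z, 0.0
    while 0.0 < x < a:
        noise = levy_stable.rvs(alpha, 0.0)          # symmetric alpha-stable increment
        x += v * dt + scale * dt ** (1.0 / alpha) * noise
        t += dt
    return (1 if x >= a else 0), t + t0
```

With small alpha, occasional very large increments produce the jump-like trajectories illustrated in Figure 1.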
Research Questions and Hypotheses
Following the results of Voss et al. (2019), we expect a model with stability parameter alpha as an additional free parameter to fit the data better than a model with Gaussian noise, both for the color discrimination task and for the lexical decision task. Thus, we assume alpha to take average values between 1.0 (Cauchy noise) and 2.0 (Gaussian noise). Additionally, we expect a positive correlation between individual alpha-values across the two tasks, indicating meaningful inter-individual variance in this parameter.
Participants
The sample consisted of 40 participants (33 female, 6 male and 1 non-binary; mean age = 21; range: 18-38) who were recruited with flyers at the Institute of Psychology at Heidelberg University. Consequently, the majority of participants were undergraduate students majoring in Psychology. All participants were German native speakers and none of them had impaired color vision. They gave informed consent prior to the experiment and were granted partial course credit or 6 Euros as compensation.
Design
The design comprised the within-participant factors "task" (color discrimination vs. lexical decision) and "stimulus type". For both tasks, there were two different stimulus types (orange vs. blue or word vs. non-word for color discrimination and lexical decision, respectively). The order of tasks was counterbalanced across participants.
Apparatus and Stimuli
Stimuli were presented on a 17-inch computer screen. For color discrimination (Voss et al., 2004), we used colored squares (approximately 40 × 50 mm) consisting of 150 × 200 pixels. Each pixel was either orange or light blue. The proportion of colors within each stimulus was 47:53. Pixels were randomly intermixed. For half of the stimuli, orange was the dominant color; for the other half, blue was dominant.
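As an aside, a pixel pattern with these properties can be generated in a few lines; the sketch below only produces the Boolean color mask (the mapping to actual orange and light blue pixel values is left out, and all names are illustrative).

```python
import numpy as np

def make_color_stimulus(height=150, width=200, p_dominant=0.53, rng=None):
    """Random pixel mask: True = dominant color, False = other color (47:53 ratio)."""
    rng = np.random.default_rng() if rng is None else rng
    n = height * width
    n_dominant = int(round(p_dominant * n))
    mask = np.zeros(n, dtype=bool)
    mask[:n_dominant] = True
    rng.shuffle(mask)               # randomly intermix the two colors
    return mask.reshape(height, width)

stim = make_color_stimulus()        # e.g., map True -> orange, False -> light blue when drawing
print(stim.mean())                  # approximately 0.53
```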
For the lexical decision task, we re-used stimuli from Lerche and Voss (2017). Two hundred German nouns with one or two syllables and four to six letters served as word stimuli. All words had a low frequency in the German language. For each word stimulus, a non-word was created by random replacement of all vowels. Stimuli were presented centered on the screen in Arial 20 pt font.
After the experimental tasks, a German adaptation of the UPPS Impulsive Behavior Scale was used to assess participants' impulsivity (Schmidt, Gay, d'Acremont, & Van der Linden, 2008). By assessing self-rated impulsivity, we intended to exploratively analyze correlations between the parameter alpha and UPPS scores; as these analyses did not yield any statistically significant correlations, the results are not reported in detail here.
Procedure
The experiment was conducted in five computer-based group sessions with 6 to 10 participants. Participants were instructed to respond as quickly as possible, even if this would lead to some mistakes. Thereby, we intended to generate higher error rates that were sufficiently large for a robust parameter estimation. After every 100 trials, the opportunity for a short break was given.
For each of the two tasks, participants started with a training block of 12 trials. Only in this training block, accuracy feedback was given after each response. Thereafter, participants completed five experimental blocks of the color discrimination task and four blocks of the lexical decision task (order of tasks was counterbalanced) with 100 trials per block. Each trial had the following sequence: First, a white screen was presented for 500 ms. Afterwards, a fixation cross appeared for 250 ms at the center of the screen before it was replaced by a colored square in the color discrimination task or by a letter sequence in the lexical decision task. The stimulus remained on the screen until a decision was made. Responses were given by pressing the A-key for "blue" or "non-word" and the L-key for "orange" or "word" on a standard keyboard. Labels for the assignment of keys were presented at the bottom corners of the screen throughout the experiment. The total duration of the experiment was approximately 45 minutes.
Fitting the Models to the Data
The models' parameters were estimated for each task and each participant separately. This requires a multidimensional search for a set of parameter estimates that provides an optimal fit between predicted and empirical response time distributions. Whereas for the diffusion model, probability density functions (PDF) for response times are known (e.g., Navarro & Fuss, 2009), this is not the case for the model with alpha as an additional free parameter. Therefore, a direct calculation of the likelihood is not possible. Voss et al. (2019) applied a somewhat cumbersome simulation-based approach to approximate the likelihood. Recent work from our lab showed that a deep learning approach for likelihood-free parameter estimation is much more efficient (Radev, Mertens, Voss, Ardizzone, & Köthe, 2019). In the following section, we briefly describe the rationale of this new approach. Our estimation method draws on recent advances in deep probabilistic modeling (Ardizzone, Lüth, Kruse, Rother, & Köthe, 2019; Grover, Dhar, & Ermon, 2017; Kingma & Dhariwal, 2018; Radev et al., 2019). It involves two neural networks (a summary network that learns to extract the most informative summary statistics from raw data, and an invertible network that learns the relation of these summary statistics to the true parameter values) which jointly learn a probabilistic mapping from data to parameters without assumptions on the parametric form of the posterior distributions of the parameters. The networks are trained from simulated data by implicitly minimizing the Kullback-Leibler (KL) divergence between the approximate posterior deduced by the networks and the true posterior of the model parameters. Moreover, the method recovers the true posterior exactly under optimal performance of the networks, as proved by Radev et al. (2019). Once trained with a sufficient amount of simulated data, the converged networks can be used to perform rapid likelihood-free Bayesian inference, thus essentially amortizing the costs of training. Specifically, the method involves the following steps:
1. A broad enough prior on the model's parameters is specified, such that the prior captures a realistic range of all parameter values.
2. Data are simulated on the fly by repeatedly drawing from the prior and generating artificial response time data.
3. The simulated data are fed into the networks, which iteratively minimize the KL divergence between the approximate posterior (deduced by the networks) and the true posterior over parameters.
4. The trained networks are applied to the observed data in order to approximate the parameter posteriors.
By splitting the process of parameter estimation into a training and an inference phase, the computational load is "outsourced" into the training phase. Subsequent inference involves only passing observed data sets through the trained networks, which is computationally cheap and very efficient. Moreover, the trained networks can be stored and re-used for estimating the parameters of the model they have been trained on.
The method described above provides posterior distributions for all parameters. To allow for model comparisons, we then employed the simulation approach by Voss et al. (2019) to approximate the likelihood at the mean parameter values for each model and each person. This allowed us to calculate the BIC. This information criterion is defined as BIC = -2 LL + P ln(M), with smaller values indicating a better fit (Voss, Voss, & Lerche, 2015), where LL is the log-likelihood, P is the number of free parameters and M is the number of observations (i.e., trials).
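For illustration, the following sketch computes the BIC from a model's log-likelihood and converts the BIC values of competing models into Schwarz weights (used for the model comparison reported below); the numerical values in the example are hypothetical.

```python
import numpy as np

def bic(log_likelihood, n_params, n_trials):
    """BIC = -2 * LL + P * ln(M); smaller values indicate a better fit."""
    return -2.0 * log_likelihood + n_params * np.log(n_trials)

def schwarz_weights(bic_values):
    """Transform BIC values of competing models into weights that sum to 1."""
    bic_values = np.asarray(bic_values, dtype=float)
    delta = bic_values - bic_values.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical log-likelihoods of Models 1-4 for one participant and 400 trials
ll = np.array([-210.0, -195.0, -200.0, -188.0])
p = np.array([5, 6, 8, 9])          # free parameters per model
bics = bic(ll, p, 400)
print(bics, schwarz_weights(bics))
```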
Data Pre-Treatment
Data from trials with logarithmized response times that fell more than 1.5 interquartile ranges below a participant's first quartile or more than 1.5 interquartile ranges above a participant's third quartile were removed prior to all analyses. This criterion led to an exclusion of 3.59% of trials in the color discrimination task and of 2.64% of trials in the lexical decision task.
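A minimal sketch of this trimming rule, assuming the trial data are stored in a data frame with one row per trial (column names are illustrative), could look as follows.

```python
import numpy as np
import pandas as pd

def trim_rts(df, rt_col="rt", id_col="participant"):
    """Remove trials whose log RT lies more than 1.5 IQRs below Q1 or above Q3,
    computed separately for each participant."""
    def keep(group):
        log_rt = np.log(group[rt_col])
        q1, q3 = log_rt.quantile([0.25, 0.75])
        iqr = q3 - q1
        return group[(log_rt >= q1 - 1.5 * iqr) & (log_rt <= q3 + 1.5 * iqr)]
    return df.groupby(id_col, group_keys=False).apply(keep)

# Example with fake data
data = pd.DataFrame({"participant": np.repeat([1, 2], 500),
                     "rt": np.random.lognormal(mean=-0.5, sigma=0.3, size=1000)})
trimmed = trim_rts(data)
print(1 - len(trimmed) / len(data))   # proportion of excluded trials
```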
Response Times and Accuracy
Mean correct and error response times (RT) and accuracy values for the different types of stimuli in the two tasks are presented in Table 1. Additionally, response times and accuracy values are visualized in Figures 2 and 3, respectively.
As can be seen, for the color discrimination task, response times for errors were slightly slower than for correct responses. In the lexical decision task, we observed fast errors in response to non-word stimuli, but slower errors in response to word stimuli. Accuracy values suggest that the lexical decision task was overall somewhat easier than the color discrimination task. This difference is largely based on high accuracy rates for non-words. While response times do not differ largely between the different conditions in the two tasks, the variance in performance between participants regarding response time and accuracy is larger in the color discrimination task in comparison to the lexical decision task. Lastly, a larger variance in response times can also be observed for error responses compared to correct responses.
Table 1. Reaction times in milliseconds for correct responses and error responses to non-word and word stimuli in the lexical decision task and to predominantly blue and predominantly orange stimuli in the color discrimination task for the 40 participants. Note. N = 40, except for non-word stimuli, where one participant had to be excluded as she did not make any mistakes.
Figure 3. Accuracy values for word stimuli and non-word stimuli in the lexical decision task and for predominantly blue and predominantly orange stimuli in the color discrimination task for the 40 participants.
Model Fit
To assess model fit, BIC was calculated separately for all participants, for the four models and the two tasks. We decided to report BIC values instead of AIC values (Akaike Information Criterion) because BIC is consistent, whereas AIC might tend to select complex models that overfit the data (Vandekerckhove, Matzke, & Wagenmakers, 2015). Summarized BIC values are presented in Table 2, with smaller values indicating a better fit. Additionally, the numbers of participants for whom the respective simple and full models had the best fit are shown. These analyses reveal that overall Model 4 (i.e., the complex Lévy flight with freely estimated stability of noise and inter-trial variability) shows the best fit. When comparing the simple models (without inter-trial-variability parameters) to each other, Model 2 (i.e., a model with alpha as an additional free parameter) performs better than the first model for both tasks.
When comparing the complex models, which allow for inter-trial variability in starting point, drift rate and non-decisional processes, Model 4 with alpha as an additional free parameter has a better fit than the full diffusion model (Model 3) across tasks. Note that the differences in BIC are huge, as indicated by a transformation of BIC values to Schwarz weights (Vandekerckhove et al., 2015). As a result of this analysis, weights of nearly 1 were observed for the superior model. Results also suggest notable inter-individual differences in the type of information sampling, as different models are superior for different participants. As Model 1 (the standard diffusion model with a fixed parameter α = 2) is nested in Model 2 (the Lévy model with alpha as an additional free parameter), we additionally performed a likelihood ratio test to compare the fit of these models. Highly significant advantages were observed for the Lévy flight model for data from the lexical decision task, χ²(40) = 4,058, p < .001, and from the color discrimination task, χ²(40) = 1,950, p < .001.
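Because alpha is estimated separately for each participant, the restricted and the full model differ by one free parameter per participant, so the likelihood ratio statistic has as many degrees of freedom as there are participants. A sketch of this test, with hypothetical log-likelihood values, is shown below.

```python
import numpy as np
from scipy.stats import chi2

def lr_test(ll_restricted, ll_full):
    """Likelihood ratio test for nested models fitted per participant.
    ll_restricted / ll_full: arrays of log-likelihoods (one value per participant)."""
    ll_restricted = np.asarray(ll_restricted)
    ll_full = np.asarray(ll_full)
    stat = 2.0 * np.sum(ll_full - ll_restricted)   # chi-square statistic
    df = len(ll_full)                              # one extra parameter (alpha) per participant
    p = chi2.sf(stat, df)
    return stat, df, p

# Hypothetical log-likelihoods for Model 1 (diffusion) and Model 2 (Lévy) for 40 participants
rng = np.random.default_rng(1)
ll_m1 = rng.normal(-200, 10, size=40)
ll_m2 = ll_m1 + rng.gamma(shape=2.0, scale=10.0, size=40)   # Model 2 fits at least as well
print(lr_test(ll_m1, ll_m2))
```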
Parameter Estimates
Averages from the mean posterior distributions of all parameters for the four tested models are presented in Table 3. Additionally, distributions of mean alpha-values are presented in Figure 4 for the simple Lévy flight model. For the 40 participants, alpha is approximately uniformly distributed in a range from α = 1.0 (i.e., a model with Cauchy-distributed noise) to α = 2.0 (i.e., the Ratcliff diffusion model with normally distributed noise).
Table 3. Mean parameter values (standard deviations in parentheses) for simple and complex versions of the diffusion model and the model with stable noise in the color discrimination and the lexical decision task. Note. N = 40. a = threshold separation; zr = relative starting point; v0 = drift rate for blue stimuli or non-word stimuli; v1 = drift rate for orange stimuli or word stimuli; t0 = non-decisional time; α = stability parameter of noise distribution; sz = across-trial variability in starting point; sv = across-trial variability in drift rate; st = across-trial variability in non-decisional processes.
Correlations of Alpha-Values with Accuracy and Response Times
To improve the understanding of which characteristics of behavioral data are indicative of alpha, mean posterior values for alpha from Model 2 were correlated with response times for correct responses as well as with accuracy rates. To increase normality, response time values were log-transformed and accuracy values were arcsine-transformed prior to the analysis. Correlations are presented in Table 4 and visualized in Figure 5. Moderate to high positive correlations were observed between response times and alpha-values across all tasks and stimuli, indicating that larger jumps in the decision process are associated with faster responses. Alpha-values were also positively related to accuracy, showing significant correlations between alpha-values and accuracy in color discrimination, r(38) = .34, 95% CI [.03; .59], as well as between accuracy in lexical decision and alpha-values in color discrimination, r(38) = .35, 95% CI [.04; .59]. (Note that the correlations of alpha with response times and accuracy rates are of exploratory nature and have to be interpreted cautiously; in this exploratory phase, we did not correct for multiple comparisons, as we did not want to discard upcoming evidence too early and easily.) Importantly, a significant correlation was also observed between alpha-values from color discrimination and lexical decision, r(38) = .41, 95% CI [.12; .64]. This inter-task correlation of alpha suggests a stable, trait-like component of the quality of information accumulation. Correlations of parameters in the simple Lévy flight model (Model 2) across both tasks are shown in Table 5.
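The transformation and correlation steps can be sketched as follows; we assume here that the arcsine transform is the usual arcsine-square-root transform of proportions, and the confidence interval is obtained from the Fisher z approximation (all data in the example are simulated).

```python
import numpy as np
from scipy import stats

def correlate_with_ci(x, y, conf=0.95):
    """Pearson correlation with an approximate confidence interval via Fisher's z."""
    r, _ = stats.pearsonr(x, y)
    n = len(x)
    z = np.arctanh(r)
    se = 1.0 / np.sqrt(n - 3)
    crit = stats.norm.ppf(0.5 + conf / 2)
    lo, hi = np.tanh(z - crit * se), np.tanh(z + crit * se)
    return r, (lo, hi)

# Hypothetical per-participant values
rng = np.random.default_rng(0)
alpha_vals = rng.uniform(1.0, 2.0, size=40)
mean_rt = 0.5 + 0.2 * alpha_vals + rng.normal(0, 0.05, size=40)
accuracy = np.clip(0.7 + 0.1 * alpha_vals + rng.normal(0, 0.05, size=40), 0, 1)

log_rt = np.log(mean_rt)                    # log transform of response times
asin_acc = np.arcsin(np.sqrt(accuracy))     # arcsine(-square-root) transform of accuracy
print(correlate_with_ci(alpha_vals, log_rt))
print(correlate_with_ci(alpha_vals, asin_acc))
```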
Discussion
Summary and Interpretation of Results. In the present study, we compared the fit of four evidence accumulation models applied to a color discrimination and a lexical decision task: In the color discrimination task, participants had to indicate whether there were more orange or more blue pixels in the presented stimuli. In the lexical decision task, participants had to decide whether a presented letter-string was an existing German word or a meaningless letter-string. The examined models included (1) a parsimonious version of the diffusion model with Gaussian noise (Ratcliff, 1978), and (2) a model with a stable distribution for the noise of the information accumulation, where the heavy-tailedness was modelled with an additional free parameter. Besides the distribution of noise in evidence accumulation (sometimes also denoted as intra-trial variability of drift), the models were identical. Additionally, (3) a full version of the diffusion model with inter-trial variability for drift, starting point and non-decision time (Ratcliff & Rouder, 1998; Ratcliff & Tuerlinckx, 2002), and (4) a model with alpha as a free parameter and inter-trial variabilities were considered. Evidence accumulation in the models with non-normal noise follows a so-called Lévy flight that is characterized by larger jumps in the stochastic process. The Gaussian distribution is a special case of the family of Lévy alpha-stable distributions with a stability value of α = 2. Lower values of the stability parameter alpha indicate a higher probability of jumps and also larger jumps in the decision process.
Accuracy rates were slightly higher in the lexical decision task than in the color discrimination task, suggesting that the latter was more difficult. Responses to non-word stimuli exhibited larger accuracy rates than the responses to word stimuli in the lexical decision task. This suggests that participants might not have been familiar with some of the used words due to their low frequency in the German language. Comparison of Models. Model 1 is a standard diffusion model. Five parameters (a, zr, v0, v1 and t0) were estimated for each participant and task. The stability parameter was fixed to a value of α = 2, that is, Model 1 includes Gaussian noise in evidence accumulation. In Model 2 a more general stable distribution (with free parameter α) is assumed for the random noise. Due to its heavy tails, this distribution allows the occurrence of extreme events, that is, jumps in the evidence accumulation process. Models 3 and 4 are equivalent to Models 1 and 2, respectively, except for the additional inclusion of inter-trial variabilities in drift, starting point and non-decision times.
Model parameters were fitted by using a novel Bayesian approach based on machine learning with neural networks (BayesFlow; Radev et al., 2019). This method provides accurate posterior distributions for parameters of mathematical models without requiring an explicit calculation of the likelihood. Rather, the neural networks learn the relation of data to plausible parameters from simulated data.
To compare the fit of the four models, the BIC information criterion (Kuha, 2004) was computed. In accordance with previous results, the model with stable noise (Model 2) had the best fit within the simple models (without inter-trial variability) for both tasks.
Within the complex models, which allow for inter-trial variability of starting point, drift rate and non-decision time, the complex Lévy model performed better than the complex diffusion model. Unlike the results from Voss et al. (2019), which suggested a superior fit of the simple Lévy flight model compared to the full diffusion model, in the present study the complex model had a superior fit compared to the simple Lévy flight model. All full models had a better fit than the simple models. We assume that these differences between the findings from Voss et al. (2019) and the present study might be based on the larger trial number: Possibly, the longer experimental duration causes more fluctuations in performance, which in turn makes it necessary to include inter-trial variability parameters in the cognitive models. Inter-individual differences in alpha. Results indicate that average values for alpha in the simple Lévy model are around 1.6 for both tasks, thus falling between a standard diffusion model and a model with Cauchy noise. This finding suggests that models with stable noise are applicable across different paradigms that have been applied in diffusion modeling.
In addition to the question of which model fits the data best, we were also interested in testing whether alpha measures stable inter-individual differences in decision making. Substantial correlations of alpha-values were observed across the two completely different tasks. This finding provides first evidence for a trait-like quality of the stability parameter. However, it remains unclear what the differences in alpha indicate in psychological terms. On a theoretical account, two opposing hypotheses are possible: On the one hand, increased jumps in evidence accumulation could reflect an irrational "jumping to conclusions" (McKay, Langdon, & Coltheart, 2006), that is, a premature and inefficient decision strategy (inefficient jumping hypothesis). On the other hand, it is also possible that a Lévy flight style of decision making reflects an efficient way to process information: Especially for easy, perceptual decisions that are conducted under time pressure, the processing of single pieces of information should change the subjective beliefs about the stimulus immediately. Following this thought, a gradual, diffusion-like decision making style might reflect an inefficient use of information, while efficient decision making might be characterized by more jumps in the evidence accumulation process (efficient jumping hypothesis).
Table 4. Correlations of alpha, accuracy and mean response time (BF10, i.e., Bayes factors in favor of an existing correlation, in parentheses and 95% confidence intervals in square brackets).
Table 5. Correlations with 95% confidence intervals between parameters in the color discrimination and the lexical decision task for the simple Lévy flight model. Note. N = 40. α = stability parameter of noise distribution; a = threshold separation; zr = relative starting point; v0 = drift rate for blue stimuli or non-word stimuli; v1 = drift rate for orange stimuli or word stimuli; t0 = non-decisional time.
Following the inefficient jumping hypothesis, larger jumps (indicated by lower alpha-values) would be expected in populations with dysfunctional decision making or high values of impulsivity, such as people suffering from schizophrenia, borderline personality disorder or substance abuse (Oshri et al., 2018; Richard-Lepouriel et al., 2019; Sterzer, Voss, Schlagenhauf, & Heinz, 2019). According to the efficient jumping hypothesis, larger jumps would be expected in individuals with efficient decision making and presumably high intelligence. The present data do not allow us to differentiate between both hypotheses. Therefore, future Lévy flight studies should include other measures of cognitive performance like intelligence, working memory functioning, or cognitive flexibility.
Evaluation of the Present Study and Implications for Further Research
The present study is one of the first attempts to apply Lévy flights in the context of decision making. For this reason, its results shed first light on a nearly unstudied field. In comparison to a previous study by Voss et al. (2019), we used notably larger trial numbers here. Thus, the present study allows for a more reliable assessment and comparison of the model fit. Correlations of alpha-values across tasks provide evidence for a meaningful inter-individual variance in this parameter. The inclusion of the parameter alpha in decision modeling provides further insight into human binary decision making, as it helps to identify the exact processes that underlie fast reaction times and increased rates of fast errors. Consequently, the Lévy flight model allows the differentiation between different explanations for fast responses, which could alternatively occur due to stronger drift, lower decision thresholds, faster non-decision processes or a combination of these parameter values.
Figure 5. Correlations of alpha, arcsine-transformed accuracy and log-transformed mean response time (RT) for the 40 participants in the lexical decision task (Lexical) and the color discrimination task (Color).
Some limitations of the presented study need to be addressed. Firstly, the present sample is limited in size and consists mainly of female psychology students in their early twenties. The study of this highly selective group could lead to a restricted variance of one or several of the assessed variables and therefore to an underestimation of correlations (e.g., between alpha and impulsivity).
Secondly, alternative criteria to BIC should be considered for model comparison. An adequate penalization of complexity and potential problems of overfitting of the complex models have to be analyzed more carefully.
Thirdly, models with alternative combinations of free parameters might show a superior model fit and should be considered in future research. For example, a model with alpha as a free parameter and inter-trial variability for non-decisional processes only might be a more parsimonious alternative to the complex Lévy model with inter-trial variability for starting point, drift rate and nondecisional processes. At the same time, such a model could possibly provide a better way to accommodate and explain fast errors compared to the full diffusion model.
Fourthly, in the present research we assume that jumps toward the correct and the error response boundary have the same probability. However, there are theoretical reasons to expect that jumps toward the correct boundary are more probable than jumps in the opposite direction. For example, one could assume Lévy flights to represent a kind of sudden insight into the solution of a problem and would therefore expect a higher probability of jumps toward the correct response boundary. Future studies should test models that allow for asymmetric proportions of jumps toward the correct and the error boundary in the noise distribution.
Lastly, the psychological meaning of the stability parameter alpha requires further examination. As self-reported measures of impulsivity did not show any significant correlations with the parameter alpha, experimental measures of impulsivity should be considered. Careful theories need to be developed to address the question of cognition- or personality-related correlates of alpha. Subsequently, the resulting hypotheses have to be backed up by further empirical work.
Mineralogy and Genesis of the Polymetallic and Polyphased Low Grade Fe-Mn-Cu Ore of Jbel Rhals Deposit (Eastern High Atlas, Morocco)
The Jbel Rhals deposit, located in the Oriental High Atlas of Morocco, hosts a polymetallic Fe-Mn-Cu ore. Large metric veins of goethite and pyrolusite cut through Paleozoic schists that are overlaid by Permian-Triassic basalts and Triassic conglomerates. The genesis of this deposit is clearly polyphased, resulting from supergene processes superimposed over hydrothermal phases. The flow of Permian-Triassic basalts probably generated the circulation of hydrothermal fluids through the sedimentary series, the alteration of basalts and schists, and the formation of hydrothermal primary ore composed of carbonates (siderite) and Cu-Fe sulfides. Several episodes of uplift triggered the exhumation of ores and host rocks, generating their weathering and the precipitation of a supergene ore assemblage (goethite, pyrolusite, malachite and calcite). In the Paleozoic basement, Fe-Mn oxihydroxides are mostly observed as rhombohedral crystals that correspond to the pseudomorphose of a primary mineral thought to be siderite; goethite precipitated first, rapidly followed by pyrolusite and other Mn oxides. Malachite formed later, with calcite, in fine millimetric veins cutting through host-rock schists, conglomerates and Fe-Mn ores.
Introduction
The Jbel Rhals Fe-Mn-Cu deposit is located in the Moroccan Oriental High Atlas, 20 km south of Bou Arfa city (Figure 1). Several ore deposits in the Bou Arfa district were mined in the first half of the 20th century and have recently been (re)considered in several papers: Jbel Klakh (Cu) [1,2], Jbel Haouanit (Pb-Zn-Cu-V) [1,2], Hamarouet and Aïn Beida (Mn) [3]. The Jbel Rhals locality was known as a Cu deposit for centuries and was briefly exploited during the 19th-20th centuries; some mining activities started up again in 2012.
However, there is no mineralogical inventory of the Jbel Rhals deposit, and processes leading to the formation of the various mineral phases have not been investigated yet. The issue of the origin and genesis of these ores has not been raised either. This work presents a petrological, mineralogical, and geochemical synthesis of the Jbel Rhals deposit. We focus our investigation mainly on the mineralogy of the ore and on the processes involved in its genesis.
Geological Setting
The Jbel Rhals deposit is located at the northern edge of the Oriental High Atlas (Figure 1). This intracontinental mountain belt extends WSW to ENE, from the Atlantic coast to Algeria where it continues in the so-called "Saharan Atlas". The geodynamic evolution of the High Atlas system involves the interplay of two main events: (1) the Triassic to late Cretaceous pre-orogenic rifting and subsequent filling of the basins, and (2) the Cenozoic basin inversion leading to the shortening of basement and cover units, formation of syn-orogenic basins, and uplift [5][6][7].
The break-up of Pangaea and the opening of the Tethys and Atlantic Oceans triggered the formation of intracontinental rifts that affected the Variscan basement and induced the thinning of the North African crust [8]. The Triassic is characterized by the transition from a distensive to a transtensive context: narrow subsiding basins, separated into horsts and grabens, developed during the first part of this period [8,9], while pull-apart basins were induced during the second one [10]. Rifting aborted during the late Triassic [8]. During the Liassic, a sinistral transform zone induced the division of the Atlasic basin into two trenches that later became the Middle and High Atlas [8,11]. Synrift basins contain a succession of Permian to Triassic red-bed sedimentary rocks (conglomerate, sandstone, siltstone and mudstone), with widespread evaporites and intercalated basaltic lava flows, followed by Jurassic-Cretaceous carbonates and marls [5]. The flow of ca. 210-195 Ma basalts and dolerites is related to the Central Atlantic Magmatic Province (CAMP) [5,12-14]. Following [5,14], the geochemical signature of these rocks is typical of continental intraplate magmatism, and more precisely of continental tholeiites (flood basalts); their common slight enrichment in incompatible elements implies a potential crustal contamination during ascent.
By the end of the Cretaceous, the opening of the Atlantic Ocean changed the drifting direction of the African plate from E to NE, in convergence with the Iberian (Eurasiatic) plate. In response, the stress regime switched from extensional to compressional, leading to basin inversion [11,15]. Shortening occurred by fracturing and folding of the Mesozoic-Cenozoic cover units and the Variscan basement, and sometimes by detachment of the cover units from the basement [5,12,16]. Three major episodes of exhumation are defined in the Eastern High Atlas: late Eocene (between Lutetian and Bartonian), early to middle Miocene, and late Pliocene to Quaternary [6]. The last episode, which corresponds to the paroxysmal compression phase, is responsible for the building of the current orogen and contributes to most of the High Atlas topography, together with lithospheric thinning [17].
The Jbel Rhals ore deposit (32°20′40.6″ N; 1°58′40.6″ W), elevated between 1200 and 1300 m in altitude, is located 20 km south of the city of Bou Arfa, in the Figuig province of Morocco (Figure 1). The deposit is situated in a Precambrian-Liassic inlier, at the base of a 200 m high cliff (Figure 2a), eastward of the Tamlet plain, which is filled with various Quaternary materials, and south of Jurassic-Cretaceous reliefs hosting supergene Cu-Pb-Zn-V ore deposits (Figure 1) [1]. The place is also known as "Guelb en Nahas", which means "Copper Hill" in Arabic, referring to ancient mining activities by Portuguese miners for copper ores. The Cu mineralization was mined to some extent during the 20th century, but received little attention between the closure of the mine and 2012, when a project of mining recovery was submitted. Ores were/are mined in shafts and subhorizontal galleries several tens of meters long and about two meters high, driven into the Paleozoic basement rocks (Figure 2b). The shafts are not accessible anymore. The stratigraphic succession of the entire area (mapped by [4]) is complex: the intense fracturing and moderate folding of the series disrupt the arrangement of the sedimentary structures, and most original features are no longer apparent. Several NE-SW normal faults are observable in the deposit area (Figure 1), one of which, close to the ores, triggers a pluri-decametric displacement in the Triassic-Jurassic series.
The basement consists of Paleozoic shales that are successively covered by Triassic conglomerates, Triassic silty layers, and Jurassic dolomitized series (Figure 2a). Permian-Triassic basalts are found between the basement and the Triassic-Jurassic series (Figure 2a). The several-meters-thick Fe-Mn ore is confined to the Paleozoic shales, right under the Permian-Triassic basalts (Figure 2a), while Cu-mineralized veins extend from the Paleozoic shales (and the Fe-Mn ore) to the Triassic conglomerates. The thickness of the shale beds ranges from 1 centimeter to several decimeters. Stratification is subvertical in the Paleozoic shales (Figure 3a), but horizontal in the Triassic and Sinemurian formations.
Materials and Methods
Thirty samples were collected in March 2014. X-Ray diffraction was carried out on twenty-one samples in order to identify the major mineral phases of the ore, using a BRUKER X-ray diffractometer (Bruker, Billerica, MA, USA) with a HI STAR GADDS (General Area Detector Diffraction System) CuKα detector. One thin section of the "freshest" basalt was observed in transmitted light mode with a LEITZ HM-POL petrographic polarizing microscope (Microscope Central, Feasterville, PA, USA). Twenty-one polished sections were observed on a ZEISS PHOTOMICROSCOPE reflection microscope (Carl Zeiss AG, Oberkochen, Germany), and with a JEOL JXA-8600 SUPERPROBE scanning electron microscope (SEM) (JEOL, Tokyo, Japan) coupled to an energy dispersive electron spectrometer (EDS).
Geochemical analyses were performed on eighteen samples in Activation Laboratories (Ancaster, ON, Canada). REE (Rare Earth Elements) and most of the trace elements were analyzed by Fusion Mass Spectrometry (FUS-MS) (Perkin Elmer Sciex Elan 9000 ICP-MS; Sciex AB, Singapore); Sr, Ba, Zr, and V contents were quantified by Fusion Inductively Coupled Plasma Optical Emission Spectrometry (FUS-ICP) (Varian Vista 735 ICP; Agilent, Santa Clara, CA, USA). Major elements of host rocks, basalts and Cu-mineralized veins were analyzed by FUS-ICP, while contents of iron-rich samples were evaluated with Fusion-X-ray Fluorescence (FUS-XRF) (Panalytical Axios Advanced XRF; PANalytical, Almelo, The Netherlands). FeO was quantified by titration. For samples containing high contents of Mo, Cu, Co and Ni, analyses were carried out with Fusion Inductively Coupled Plasma Sodium Peroxide Oxidation (FUS-Na2O2), and results quantified in percentages rather than in ppm. Afterwards, REE contents were normalized to those of the PAAS (Post Archean Australian Shale; [18]), considered as a reference for sedimentary rocks, while major and trace element contents were normalized to those of the UCC (Upper Continental Crust; [18]). Contents of the mineralized samples were also compared and normalized to those of the host rocks, in order to highlight potential enrichments or depletions in particular elements.
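The normalization step itself is a simple element-wise division of measured concentrations by reference concentrations, as sketched below; the reference values shown are placeholders only and should be replaced by the PAAS/UCC values of [18]. One common way to quantify the Eu anomaly from the normalized values is also included.

```python
import math

# Placeholder reference values in ppm; substitute the PAAS (or UCC) values from [18].
PAAS_REE = {"La": 38.0, "Ce": 80.0, "Nd": 34.0, "Sm": 5.6, "Eu": 1.1, "Gd": 4.7, "Yb": 2.8}

def normalize(sample_ppm, reference_ppm):
    """Divide measured concentrations (ppm) by reference concentrations (ppm)."""
    return {el: sample_ppm[el] / reference_ppm[el]
            for el in reference_ppm if el in sample_ppm}

def eu_anomaly(normalized):
    """Eu/Eu* computed as Eu_N / sqrt(Sm_N * Gd_N); values > 1 indicate a positive anomaly."""
    return normalized["Eu"] / math.sqrt(normalized["Sm"] * normalized["Gd"])

# Hypothetical REE concentrations of one sample (ppm)
sample = {"La": 12.0, "Ce": 30.0, "Nd": 15.0, "Sm": 3.0, "Eu": 1.2, "Gd": 2.5, "Yb": 1.5}
norm = normalize(sample, PAAS_REE)
print(norm)
print(eu_anomaly(norm))
```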
Petrographic Characterization
The Jbel Rhals deposit hosts two types of mineralization located in different veins. On the one hand, Fe-Mn minerals are confined to decimetric veins hosted in the Paleozoic schists and overlaid by basalts. On the other hand, Cu minerals are found in millimetric veins cutting the Paleozoic schists, the Fe-Mn ores, and the Triassic conglomerates.
Host rocks and basalts underwent significant fracturing and severe alteration and weathering. Paleozoic schists (Figure 3a) are mainly composed of quartz, muscovite and chlorite; rare single (La, Ce, Nd) phosphate grains are observed associated to quartz (Figure 4f). In some places, the schisteous protolite has been almost completely altered/weathered to clays (probably illite). Triassic conglomerates, composed of quartz and muscovite, experienced a circulation of iron-rich fluids, which turned their original whitish color to a red tint (Figure 5g), and caused the rimming of some quartz grains with goethite. Basalts are strongly altered/weathered: the basalt considered on the field as "the freshest", meaning "poorly weathered", actually lost much of its primary minerals and textures (Figure 3b), while the most altered/weathered one is composed of three layers of different colors and compositions ( Figure 3c). In the poorly weathered sample, plagioclase laths are still recognizable, but ferromagnesian minerals are intensively altered/weathered ( Figure 3d). Secondary quartz and clays are identified, along with chlorite presenting variable proportions of Al and Fe. Skeletal crystals of ilmenite and hematite, exsolution lamellae of hematite and pyrophanite in ilmenite (Figure 3d,e), and fine hematite dendrites are also observed. Most altered/weathered basalt is composed of a greenish clay-rich layer, a red-brown iron-rich layer and a white layer (Figure 3c,f). The white part is mostly composed of quartz, with some calcite and chlorite ( Figure 3h). Quartz, chlorite, clays and calcite are present in the greenish layer, along with some rutile laths (Figure 3g), euhedral apatite and scarce small grains of pyrite. Different grain shapes suggest that some minerals are relics of original/primary ones (Figure 3g), but weathering/alteration made their identification impossible. For instance, the clayey losangic sections observed in Figure 3g are thought to be relics of pyroxenes or olivines. Less common relics are associated to high porosities in the red-brown layer ( Figure 3j) where quartz, chlorite, clays, and a mix of calcite, goethite and pyrolusite are identified (Figure 3g). Goethite and pyrolusite are often associated together and form concentric structures, where they are respectively located at the center and at the external rim ( Figure 3i). Zircon and (La, Ce, Nd) phosphates have been observed in the poorly weathered basalt, close to quartz, and in the altered/weathered sample, associated with pyrolusite.
Fe-Mn Mineralization
The main minerals are goethite, hematite and pyrolusite; they are either powdery and poorly crystalline, pseudomorphosing a preexistent rhombohedral mineral (Figure 4b), and/or forming collomorph structures (Figure 4j). Powdery iron oxihydroxides are yellow, red, or even (dark) brown. At Jbel Rhals, rhombohedral goethite sometimes shows a yellow iridescence that is due to a fine layer of iron oxihydroxides coating the surfaces of crystals (Figure 4a). Late sulfates such as jarosite (KFe3(SO4)2(OH)6), melanterite (FeSO4·7H2O) and ferricopiapite (Fe5(SO4)6O(OH)·20H2O) developed as coatings on the walls and roofs of certain galleries (Figure 4l). Goethite often shows an unusual habitus denoting the pseudomorphose of a precursor mineral, characterized by rhombohedral dark brown to black crystals (Figure 4b-e), with relatively well-preserved cleavages and fractures often punctuated by Mn oxides. Pyrolusite is often intermixed with goethite (Figure 4c-e), but also grows as laths on goethite crystals (Figure 4c), and is observed in veins cutting through goethite. Other Mn oxihydroxides are noticed, but their identification is difficult due to their common intermixing and small dimensions; we nevertheless suspect the presence of lithiophorite and cryptomelane, commonly observed in weathering deposits in Morocco [19] (Figure 4c,k). Quartz is present as large euhedral zones and sometimes as late euhedral crystals growing on goethite and coated by pyrolusite laths. Euhedral apatite is occasionally found in pyrolusite veins (Figure 4f). In the intermixed goethite-pyrolusite, calcite is progressively replaced by Ni- and Cu-rich pyrolusite and goethite, from the outer rim and through cleavages (Figure 4i). Some samples also host collomorph structures involving Fe and Mn (hydr-)oxides (Figure 4j). In most cases, botryoidal goethite grows toward the center of cavities, which are subsequently filled with several generations of pyrolusite laths, goethite needles, and supposed cryptomelane and lithiophorite.
Cu Mineralization
The schisteous basement (Figure 5a,b), the Fe-rich ore (Figure 5d-f) and the Triassic conglomerates (Figure 5g-i) are cut by numerous Cu-mineralized veins. Within schists, veins are mainly filled with goethite, but malachite and a hydrated Cu-silicate (probably chrysocolla, yet formal identification of this mineral was not possible due to its relative scarcity) are also observed (Figure 5b,c). The veins cutting through goethite and pyrolusite contain calcite, acicular malachite growing from the center to the edges of veins (Figure 5f), and sometimes Cu-(and Ni-) silicates ( Figure 5e). Malachite, growing from the edges to the center of veins (Figure 5h), is the main mineral within veins intersecting conglomerates, but cuprite, tenorite, a hydrated Cu-silicate (probably chrysocolla) (Figure 5i), and pyrite relics are noticed at their center.
Geochemical Characterization
Contents of major, minor or trace, and rare earth elements (REE) of host rocks and mineralizations are presented in Table 1.
Major Element Patterns
Host rocks and mineralizations have similar major element contents (Figure 6a), but Fe2O3 and MnO are obviously subject to strong variations. In comparison to the fresh host rock, the altered/weathered schist is slightly depleted in all major elements. Major element contents of the green part of the altered/weathered basalt are very close to those of the poorly weathered sample, unlike the red part, which is depleted in SiO2, TiO2, Al2O3 and Na2O, and enriched in Fe2O3, MnO and CaO. The fresh host rock, the poorly weathered basalt, and the green altered/weathered basalt are quite rich in FeO, which is not the case for the red altered/weathered basalt.
Minor and Trace Elements Patterns
Many values are below the detection limit (e.g., Ta, Nb, Tl, Bi). Host rocks and basalts have higher amounts of Zr and Th than mineralized samples, but are depleted in U, Y, Mo, As, In, Cu, Co, and Ni. The fresh schist, the altered/weathered schist, and gangue minerals of the Fe-Mn mineralization display the highest contents in Rb (respectively 179, 117, and 162 ppm), Nb (respectively 17.3, 7.5, and 9.3 ppm), and Zr (respectively 152, 84, and 101 ppm). Goethite hosting malachite veins is very rich in Cu, Co and Ni (respectively 2.92%, 0.268%, 3.48%), but in this particular case, the enrichment is considered as contamination from the hosted veins during sampling (Figure 5d).
Most samples are enriched in U and depleted in Th when contents are normalized to those of the UCC (Figure 6b); the richer a sample is in U, the poorer it is in Th. The profile of the greenish altered/weathered basalt is very similar to that of the poorly weathered basalt, while the red part is more comparable to the mineralized samples. Both altered/weathered basalts are depleted in Co, in comparison to the poorly weathered basalt, but enriched in Cu and Ni. Mineralized samples are enriched in chalcophile elements, but contents in Co and Ni are somewhat variable. All mineralized samples (except 14RL23), and particularly powdery oxihydroxides, are enriched in Y, with goethite sample 14RL18 reaching 179 ppm. Fe-Mn mineralizations have similar profiles punctuated by enrichments in U, Y, Mo, As, Sb, Ag, In, Cu, Co, Ni, Ga, and depletions in Rb, Ba, Th, Zr, Hf, V, Cr. Cu-mineralized veins have very similar patterns regarding minor elements, but divergent values for Co and Ni. As the fresh host rock only differs from the UCC at the level of Sr, As, Sb, Ag, and Cu, almost no difference is observed between normalization to the schist (Figure 6c) and to the UCC (Figure 6b).
Rare Earth Elements Patterns
Normalization of REE contents of all samples to PAAS (Figure 6d) highlights some significant trends. Host rocks, and more particularly the fresh schist, have logically similar flat profiles close to that of the PAAS.
Discussion
As already stressed by many authors, the understanding of supergene deposits formation and the identification of the hypogene ores are complicated by the overprinting of weathering processes over primary ores, and by the lack of data on supergene (and sometimes hydrothermal) fluid chemistry leading to difficulties in quantifying pH, Eh, and geochemical signatures of these fluids. However, as emphasized by [20], textural relations and structures of newly formed mineralizations allow reasonably accurate deductions about the nature of original minerals and the character of the fresh/unweathered deposits.
At Jbel Rhals, a sequence of mineralization is highlighted by successive crosscutting relationships of supergene mineralized veins: pyrolusite veins cut through goethite, unidentified silicate and quartz veins go through goethite and pyrolusite, and fine calcite and malachite veins pass through goethite, pyrolusite and silicates. Goethite is the first supergene mineral to precipitate; pyrolusite (probably followed by lithiophorite and cryptomelane) forms later, under more oxidizing conditions (Figure 7). The Fe-Mn oxihydroxides forming rhombohedral minerals seem to replace a primary mineral, thought to be a carbonate (probably siderite; see below). Their precipitation is followed by the filling of cavities and fractures with minerals forming collomorph structures, and finally by poorly crystalline powdery minerals. Malachite, calcite, and probably chrysocolla precipitate later, in thinner veins that cross through the Fe-Mn ores. Sulfates observed on some walls formed recently and are most probably related to ongoing mining activities.
Hydrothermal Alteration and/or Weathering of Basaltic Rocks
The Jbel Rhals host rocks, especially basalts, show evidence of extensive alteration and/or weathering, as attested by the various secondary products, the few preserved primary minerals (Figure 3d-g), and the mineralogical and chemical variation of the layered most altered/weathered basalt (Figure 3c). Some secondary minerals may result from low-temperature hydrothermal alteration (e.g., chlorite, albite and clays), while others result from weathering processes (e.g., goethite and pyrolusite), but the mixing of these mineral phases makes the determination of their origin even more complicated, and the understanding of their formation processes more complex.
Weathering of basalts usually follows the sequence: glass → plagioclases → ferromagnesian minerals → Fe and Ti oxides [21,22]. When the glass proportion is significant, that component being particularly susceptible to weathering, the weathering sequence is modified as follows: glass → ferromagnesian minerals → plagioclases → Fe and Ti oxides [23,24]. Plagioclases and Fe-Ti oxides are the only primary minerals remaining in the Jbel Rhals basalts (Figure 3d), which suggests that weathering followed the second sequence described above. Preservation of Fe-Ti oxides during weathering is quite common, as observed by [21,25]. The relatively close TiO2 contents of the greenish layer and the parent basalt support the idea that Ti oxides tend to be immobile during weathering of the Jbel Rhals ore and that they were only moderately dissolved and transported [25-27]. Exsolution lamellae of hematite (and pyrophanite) in ilmenite are primary, and the skeletal shape of crystals reflects their fast growth under supercooling (Figure 3e).
The layered structure of the altered/weathered basalt (Figure 3c) and the mineralogical and geochemical segregation in the three layers ( Figure 3f) suggest a sequential weathering of primary minerals and reflect various environments of precipitation. The red layer, rich in Fe 3+ , Mn, Ca, and mobile elements such as U ( Figure 6, Table 1), is mostly composed of goethite and pyrolusite (Figure 3h-j) that may result from alteration/weathering of ferromagnesian minerals such as pyroxenes and olivines [21,28,29]. The greenish layer, composed of quartz, calcite, chlorite, clays ( Figure 3g) and rich in Si and Al (Table 1), is thought to concentrate alteration/weathering products of the dissolution of feldspars [21,28,29], and (relics of) primary minerals, resulting in a relative enrichment in immobile elements (e.g., Th, Zr, Ti). Chlorite and clays are typical features of the hydrothermal alteration of basalts, and are here frequently observed as pseudomorphs after primary minerals, as stressed by [29,30]. Albite is thought to be formed from the low-temperature hydrothermal alteration of anorthite. The conservation of original minerals fabrics and the filling of spaces created by dissolution with associated clays and Fe oxihydroxides (Figure 3h,j) imply that the alteration/weathering took place under isovolumetric conditions [31,32]. The various undetermined and mixed silicates observed in veins cutting through schist and Fe-Mn mineralization are also thought to be the result of the alteration/weathering of basalts and from the percolation of derived fluids [30]. The white layer is made of quartz (Figure 3h,i) that precipitated later than minerals of the two other layers, from Si-rich fluids circulating through voids fractures, probably under acidic conditions. The geochemical features (REEs, major and minor elements) of the greenish layer are very close to those of the parent basalt, which is not the case of the red part ( Figure 6). The latter is much closer in composition to the Fe-Mn mineralization. This variance reflects the successive steps of weathering of the primary minerals, the subsequent partition of elements, and their precipitation into specific mineral phases, under particular conditions. Respectively, the very low and slightly high FeO contents of the red and greenish layers also indicate that minerals of the red part formed under oxidizing conditions, while reducing conditions prevailed during precipitation in the greenish layer. The (slight) positive Eu anomalies of basalts, and especially of the red altered/weathered layer, are related to the decoupling of Eu from other REE, because of its presence as trivalent (under surface conditions) or divalent (in reduced environments and at elevated temperatures and pressures) cations [33,34]. The positive anomaly of the poorly weathered basalt is related to the incorporation of Eu in plagioclase during magmatic processes [35]. As plagioclase is absent from most altered/weathered layers, Eu is supposed to be incorporated in a secondary phase, hydrothermal or supergene, by adsorption on clay minerals and/or co-precipitation with Fe oxihydroxides [33,36,37]. According to [30,34,37,38], positive Eu anomalies related to the predominance of Eu 2+ in hydrothermal fluids are indeed typically found in associated ore deposits.
Fe-Mn Mineralization
The unusual rhombohedral crystals of Jbel Rhals goethite denote the pseudomorphosis of a former mineral. This hypogene mineral is thought to be siderite, although this mineral has not been observed at Jbel Rhals, probably owing to the high degree of weathering and the extreme instability of this mineral under oxidizing conditions (Figure 7a) [39]. Pseudomorphosis of goethite and pyrolusite after siderite is common in supergene environments [20,39] and has been documented in many places, e.g., at Akjouit (Mauritania; [20]), Lake George (USA; [40]), Errachidia (Morocco; [41]), Ljubija (Bosnia; [42]), Schwarzwald (Germany; [43]), and Granada (Spain; [44]). Besides the rhombohedral shape of the pseudomorphs, other diagnostic indications of the initial presence of siderite at Jbel Rhals are the cleavages, the Mn content, and the dark-brown color. Relatively well-preserved 120°-orientated cleavages and fractures of the primary carbonate form a "box-work pattern" (Figure 4d) and are punctuated with goethite and pyrolusite, as described by [40] in Colorado (USA). The intermixing of goethite and pyrolusite, and the development of pyrolusite laths at the surface of goethite and quartz crystals, in cavities and in veins cutting goethite, also support the hypothesis of primary siderite at Jbel Rhals. Siderite usually contains a significant amount of Mn (1-3%) that can be released during weathering and be available for later Mn oxide formation [31,42]. The close association of goethite and pyrolusite is thus related to the release of Mn that crystallizes as small aggregates of pyrolusite arranged within the network of goethite [44]. The supposed hypogene siderite is probably of hydrothermal origin: siderite typically forms in reduced environments from solutions containing little dissolved sulfur but abundant bicarbonate ions [39]. Some small, scarce sulfide grains such as chalcopyrite, pyrite, and galena, observed in the ore, strengthen this hypothesis, as hydrothermal siderite is commonly accompanied by such sulfides, which concentrate the little dissolved sulfur present in solution [39,45,46].
Goethite is the most common and most stable Fe oxihydroxide under atmospheric conditions (Figure 7a), and is widely observed in gossans of ore deposits. Hematite, present in minor proportions at Jbel Rhals, is thought to have precipitated shortly before goethite but to have turned into goethite, as the latter is more stable under oxidizing conditions (Figure 7a). It is also possible, but quite unlikely, that hematite formed during hydrothermal processes and that small amounts have been preserved from weathering. Large euhedral quartz is supposed to have formed during hydrothermal phases. The fine quartz veins cutting mineralization and basalts, and the small crystals observed at the surface of various minerals, precipitated during the last stages of weathering. In supergene environments, manganese oxide minerals are represented by pyrolusite, cryptomelane, todorokite, nsutite, and other poorly defined phases [19,[48][49][50]. Pyrolusite, which is only stable under strongly oxidizing conditions and at neutral to basic pH (Figure 7b), is the most common Mn oxide at Jbel Rhals. Two generations of pyrolusite are recognized: the first precipitated with goethite, in pseudomorphs after siderite (Figure 4b-e), while the second formed later, during the last stages of weathering, at the surface of goethite crystals (Figure 4c), in cavities, and in veins cutting through goethite. Such early and late pyrolusite have already been observed in the Mn Imini district, 500 km southwestward [19]. Collomorph structures filling cavities and including goethite, pyrolusite and cryptomelane (Figure 4j,k) are thought to be contemporary with the second generation of pyrolusite, and to have precipitated from successive generations of supergene fluids of various compositions, with increasing O2 content allowing Mn(IV) to precipitate in the form of pyrolusite. The small amount of these structures does not allow further conclusions about their genesis to be drawn. Later precipitation of Mn oxides, in comparison to Fe oxihydroxides, is due to the greater solubility of Mn in oxidizing fluids, and to its greater resistance to oxidation.
The presence of Mn oxides (particularly pyrolusite) suggests that highly oxidizing conditions were reached, particularly in the vicinity of fissures and cavities [51].
Enrichment in U but depletion in Th of the Fe-Mn mineralized samples, in comparison to host rocks and basalts of Jbel Rhals (Figure 6b,c), is a typical supergene trend and is related to a selective U vs. Th mobilization during weathering processes. Soluble and mobile U in supergene fluids is preferentially leached and accumulated in neoformed minerals (oxihydroxides), while immobile Th is retained in weathering-resistant minerals or incorporated in rapidly precipitating minerals [52][53][54][55]. The concentration of U in Fe-Mn oxihydroxides and clays is related to the high specific surface of these minerals and their consequently high adsorption capacity [56], but also to the neoformation of discrete minerals at the interface between Fe-Mn oxihydroxides and clays, and fluids [51,53,54,57].
Following the same trend, host rocks are rich in immobile Zr, Rb, and Nb, while mineralized samples are rich in Y, Mo, As, Sb, In, and chalcophile elements that may be adsorbed on Fe-Mn oxihydroxides (as observed in other Fe-Mn mineralization; [58]) and clays, or incorporated into supergene minerals. Enrichment in As is related to the adsorption of this element by Fe oxihydroxides, in the form of arsenate or FeAsO4, in oxic environments [36,59].
The slightly negative Ce anomaly of poorly crystalline pyrolusite is the only noteworthy feature observed in Figure 6d. The Ce anomaly results from oxidation of this element to Ce4+ and from its subsequent decoupling from the other REE, which maintain their trivalent ionic state and are leached by circulating water [60]. Oxidation to Ce4+ is restricted to strongly oxidizing environments, and is usually observed in the most weathered part of the profiles, in Fe-Mn oxihydroxide patterns [24,48,61]. The lack of a general negative Ce anomaly at Jbel Rhals might indicate that tetravalent Ce was incorporated in mineral phases other than pyrolusite, or that pyrolusite precipitated under unusually low pH conditions, as Ce4+ is unstable below pH 4-5 [37,47].
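As for Eu, the Ce anomaly can be quantified from chondrite-normalized concentrations; one common convention is Ce/Ce* = Ce_N / sqrt(La_N × Pr_N), where values below 1 indicate the negative anomaly discussed here (loss of Ce relative to its neighbours La and Pr). The exact interpolation scheme used for Figure 6 is not specified and may differ slightly from this convention.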
The low-grade enrichment in REE and the preferential uptake of HREE + Y (Figure 6) by Fe-Mn oxihydroxides are typical features of supergene deposits associated with igneous rocks [25,62] and have been correlated with the presence of secondary phosphates close to weathered basalts in French Polynesia [61] and in Hawaii [63]. The highest REE + Y content of the amorphous, poorly crystalline, powdery Fe oxihydroxides (Figure 6) is related to (1) the trapping of (M)REE by these minerals, under acid conditions, by coprecipitation and adsorption processes [64,65], (2) the sorption of REE onto the surface of clays that are common in powdery samples [66,67], and (3) the presence of (Y, Gd, Dy) phosphates in the ore. The occurrence of multiple REE, whether La-Ce-Nd or Y-Gd-Dy (Y being usually considered with HREE, to which it is chemically and physically similar), within a single mineral, is due to the similar ionic radii and trivalent oxidation state of these elements, and to the subsequent common substitution of REE for each other within crystal structures [62]. The origin of REE may be the dissolution of minerals observed in the parent rock (e.g., monazite, allanite), or the release of trace concentrations present in primary and/or hydrothermal apatite, calcite, dolomite, feldspars, etc. [62,66,68]. Here, MREE enrichment suggests that REEs originate from the weathering of primary phosphate minerals, and particularly of apatite [69,70]. (La, Ce, Nd) phosphates are only observed in host rock (Figure 4f) and basalts, close to primary minerals, and are therefore supposed to be primary minerals that resisted weathering [66]. The low mobility of LREE may also have caused the rapid precipitation of LREE phosphates, close to parent rocks, while the more mobile HREE were more extensively leached [71,72]. The proximity of (Y, Gd, Dy) phosphates to the Fe-Mn ore (Figure 4g), quartz veins, calcite, pyrolusite and cavities (Figure 4h) suggests that these minerals are supergene, in agreement with the observations of [68,72,73], suggesting that HREE phosphates are generally concentrated in fissure fillings and voids, along with minerals precipitating during the late stages of weathering.
Cu Mineralization
The small amount of Cu mineralization in veins does not allow extensive conclusions about their genesis to be drawn. Goethite, malachite, chrysocolla, and Cu-oxides are observed in veins cutting through the Paleozoic schisteous basement (Figure 5a-c), the intermixed goethite-pyrolusite (Figure 5d-f), and Triassic conglomerates (Figure 5g-i). Textures and sequences highlighted in these veins suggest that these minerals are of supergene origin. Cu-oxides and hydrated silicates (Figure 5i) formed prior to and later than malachite, respectively; euhedral malachite needles precipitated prior to calcite, which is filling the cavities (Figure 5f). Goethite precipitated in these veins before Cu-minerals (Figure 5b), but is younger than the Fe-Mn intermixed ore. Late precipitation of carbonates indicates that during most of the weathering, acid conditions hindered the formation of these phases, and that higher pH values were only reached during the latest stages. The source of Cu may be Cu-sulfides that formed during hydrothermal processes, and were rapidly weathered to malachite, oxides and silicates when Eh increased. Some scarce grains of strongly weathered pyrite and the occurrence of Ni-rich goethite strengthen the hypothesis that Cu-, Fe-(and Ni-) sulfides existed in the primary ore.
The similar REE, major and minor element patterns of the Cu and Fe-Mn mineralization (Figure 6) may imply that the precipitation of these phases is related to the same supergene event(s); Fe and Mn were mobilized and precipitated during the first stages of supergene precipitation, while chalcophile elements remained in solution and precipitated later. The geochemical differences between the veins cutting schists and goethite, and those cutting conglomerates, notably for Co and Ni contents (Figure 6), suggest that two generations of Cu-mineralized veins are present at Jbel Rhals. Veins cutting through conglomerates are thought to have formed later. Their REE pattern shows no fractionation, close to that of the host conglomerate, and their depletion in Y contrasts with other mineralized samples, confirming this hypothesis (Figure 6c,d).
Late Sulfates
Late sulfates (jarosite, melanterite and ferricopiapite) developed locally as coatings on gallery walls and roofs (Figure 4l) and are supposed to be recent and related to mining activities. Their occurrence may indicate the initial presence of pyrite that would have been destabilized by strongly acidic conditions prevailing at a very local scale.
Metallogenic Model of Formation
The textural, mineralogical and geochemical observations listed above suggest that the studied rocks underwent several episodes of transformation, and that the various minerals observed in the ores result from weathering events superimposed on hydrothermally altered rocks and hydrothermal siderite-based metal sulfide veins (Figure 8). The polyphased metallogenic history of the Jbel Rhals polymetallic deposit is characterized by (1) the circulation of hydrothermal fluids shortly after the basalt flows, which triggered some hydrothermal alteration and the precipitation of associated minerals, and (2) the later circulation of oxidizing fluids that activated weathering of the ores and their environment (Figure 8). The presence of basalts, whose Permian/Triassic flows are related to the Central Atlantic Magmatic Province (CAMP), supports the hypothesis of early hydrothermal ore formation at Jbel Rhals [46]. The hydrothermal hypogene ores are then considered to be late Permian to Triassic in age and to have been induced by the thermal heat flow and events triggered by the Permian-Triassic rifting of the Central Atlantic, just as suggested in other mineral deposits of North Africa [29,[74][75][76][77]. Among multiple outcomes, these hydrothermal processes may be responsible for the alteration of basalts, and for the formation of secondary phases such as chlorite and clays, siderite and sulfides. Altered rocks and mineralizations were later subjected to supergene processes that are probably related to the Cenozoic High Atlas orogeny. Several episodes of uplift (defined by [5,6,16]) generated the exhumation of the series and ores and promoted their exposure to oxidizing atmospheric conditions and meteoric water, leading to their weathering. Intense fracturing of host rocks facilitated infiltration and percolation of mineralizing fluids, which, in addition, were not hindered by the previously altered basalt. The lack of carbonates in the host rocks prevented buffering of the fluids' acidity and led to the precipitation of minerals stable under acidic conditions, such as goethite. Fe-Mn oxihydroxides precipitated first, and were later cut by Cu-mineralized veins.
The origin of metals is a matter of debate, since potential repeated mobilization and reprecipitation of minerals during hydrothermal and supergene processes surely overprinted initial features of the ores. Paleozoic schists clearly underwent alteration and weathering, as attested by the presence of clay and chlorite in some samples, but the lack of geochemical similarities with the ores does not support the hypothesis of their unique contribution to mineralization formation. Triassic conglomerates have been affected by weathering, as indicated by the formation of interstitial goethite and malachite in veins. Jbel Rhals basalts have been subjected to successive hydrothermal and supergene processes that undoubtedly modified the equilibrium of the underlying formations, caused the input of chemical elements in underlying rocks, and had an important role in the subsequent formation of mineralizations. As suggested by [28] in other places than Jbel Rhals, it is plausible that fluids emanating from the alteration and weathering of the basalts contributed to the formation of some hydrothermal and supergene mineral phases. Interesting geochemical similarities noticed between basalts and mineralized samples support this hypothesis: they both show REE fractionation, slight LREE depletion, and enrichment in MREE, which is not the case of the schisteous host rocks (Figure 6). The Co- and Ni-rich signature of most mineralized samples (Figure 6) also supports the hypothesis that basaltic rocks influenced ore formation, as described elsewhere [58]. Clays and chlorite precipitation is considered to be related to feldspar alteration/weathering. Siderite and sulfides, goethite and hematite, formed under respectively reduced (hydrothermal) or oxidized (supergene) conditions, are thought to be related to the alteration/weathering of ferromagnesian minerals that may release large quantities of Fe [28,31]. Basalts are therefore regarded as one of the metal sources at Jbel Rhals, other sources being for instance the altered/weathered host rocks.
Conclusions
The Jbel Rhals polymetallic deposit has a polyphased metallogenic history, with mineralization resulting from supergene processes superimposed over hydrothermal alteration. The supergene phases constitute the major mineralization currently observed (goethite, hematite, pyrolusite, cryptomelane, malachite, Cu-oxides and silicates, calcite, dolomite, REE(PO4), late sulfates, quartz, clays, silicates), whereas little is left of the hydrothermal mineralization (siderite and minor sulfides). The flow of basalts onto the Paleozoic schisteous basement during the Permian-Triassic, the subsequent circulation of hydrothermal fluids through basalts and host rocks, and the formation of the primary deposit are related to the same geotectonic setting, as part of the intracontinental rifting stage. These hydrothermal processes triggered the alteration of schists and basalts, the leaching of some elements such as iron from the rocks through fluid-rock interaction, and the formation of a hydrothermal mineral assemblage presumably composed of siderite and sulfides that precipitated from high-temperature fluids. During the Cenozoic Era, several episodes of uplift recorded in the High Atlas enabled the exhumation of basalts, host rocks, and hypogene ores, their subsequent weathering, and the formation of a supergene ore from oxidizing, low-temperature, surface-derived fluids. Hydrothermal siderite has thereby been replaced, sometimes "in situ", by Fe-Mn oxihydroxides (goethite, pyrolusite), while dissolved sulfides were notably involved later in malachite formation. These supergene processes are also related to the enrichment in HREE, Y, and mobile elements such as U in secondary minerals.
Evolution to alternative levels of stable diversity leaves areas of niche space unexplored
One of the oldest and most persistent questions in ecology and evolution is whether natural communities tend to evolve toward saturation and maximal diversity. Robert MacArthur’s classical theory of niche packing and the theory of adaptive radiations both imply that populations will diversify and fully partition any available niche space. However, the saturation of natural populations is still very much an open area of debate and investigation. Additionally, recent evolutionary theory suggests the existence of alternative evolutionary stable states (ESSs), which implies that some stable communities may not be fully saturated. Using models with classical Lotka-Volterra ecological dynamics and three formulations of evolutionary dynamics (a model using adaptive dynamics, an individual-based model, and a partial differential equation model), we show that following an adaptive radiation, communities can often get stuck in low diversity states when limited by mutations of small phenotypic effect. These low diversity metastable states can also be maintained by limited resources and finite population sizes. When small mutations and finite populations are considered together, it is clear that despite the presence of higher-diversity stable states, natural populations are likely not fully saturating their environment and leaving potential niche space unfilled. Additionally, within-species variation can further reduce community diversity from levels predicted by models that assume species-level homogeneity.
S1 Supporting Information: Full Model Details.
We consider a general model of logistic growth and frequency-dependent competition based on a d-dimensional phenotype [1,2]. For simplicity the model does not consider spatial interactions. Evolution is modeled in three ways: adaptive dynamics, an individual-based model, and partial differential equations.
Ecological Dynamics
Ecological dynamics follow logistic growth governed by a carrying capacity K(x) and a competition function α(x, y), where x and y are the phenotypes of competing types with x, y ∈ R^d.
The competition function is defined such that α(x, x) = 1, and for symmetric competition α(x, y) < 1 for x ≠ y. Thus, the Gaussian part of competition between types diminishes as their phenotypes become more distant and is maximal between individuals with the same phenotype. For symmetric competition the competition function is strictly Gaussian.
For asymmetric competition, the exponent of the competition function contains an additional, non-Gaussian term with coefficients b_kl. This term is the first order of a Taylor expansion of a higher-order, non-symmetric competition function and thus represents the simplest form of adding non-symmetric dynamics to the Gaussian function [1]. For the non-symmetric competition simulations discussed here, a specific set of b coefficients was chosen that resulted in periodic evolutionary dynamics [provided in ??]. The models were also run with many other values for these coefficients, including a survey of randomly chosen values, but results were qualitatively similar to those obtained with the chosen values. For a review of how these values govern evolutionary dynamics in higher dimensions, please see Doebeli et al. [2]. The carrying capacity function K(x) represents the equilibrium population size of a population consisting only of individuals with phenotype x. Two carrying capacity functions are discussed here: the quartic carrying capacity, whose (negative) exponent is a sum of fourth powers of the individual phenotypic components, and the radially symmetric carrying capacity, whose exponent is the fourth power of the distance from the origin.
Both functions are similar in that they are maximal at the origin and of the fourth order. This ensures that there is stabilizing selection toward x = 0 and that the phenotype space that is viable is bounded near the origin. The quartic carrying capacity function has an approximately square peak, while the radially symmetric function is circular [ Fig. 1].
Together, the ecological dynamics are as follows, with N_i representing the population size of individuals with phenotype x_i, in a total population of M different phenotypes, and an intrinsic growth rate r: dN_i/dt = r N_i (1 − Σ_{j=1}^{M} α(x_i, x_j) N_j / K(x_i)).
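As an illustration of these ecological dynamics, the following Python sketch integrates the Lotka-Volterra equations for a small set of fixed phenotypes, using a Gaussian (symmetric) competition kernel and a quartic carrying capacity. The kernel width SIGMA_A, the exact exponents used in K, and the example phenotypes are assumptions chosen for illustration, not the parameter values used in the simulations.

```python
import numpy as np
from scipy.integrate import solve_ivp

SIGMA_A = 0.5  # assumed width of the Gaussian competition kernel

def alpha(x, y):
    """Symmetric (Gaussian) competition between phenotypes x and y."""
    d = np.asarray(x) - np.asarray(y)
    return np.exp(-np.dot(d, d) / (2.0 * SIGMA_A ** 2))

def K(x):
    """Quartic carrying capacity: maximal at the origin, roughly 'square' peak."""
    x = np.asarray(x)
    return np.exp(-np.sum(x ** 4) / 4.0)

def lv_rhs(t, N, phenotypes, r=1.0):
    """Lotka-Volterra dynamics dN_i/dt = r N_i (1 - sum_j alpha_ij N_j / K_i)."""
    A = np.array([[alpha(xi, xj) for xj in phenotypes] for xi in phenotypes])
    Kvec = np.array([K(xi) for xi in phenotypes])
    return r * N * (1.0 - (A @ N) / Kvec)

# Example: three competing phenotypes in a 2-dimensional trait space
phenotypes = [np.array([0.0, 0.0]), np.array([0.6, 0.0]), np.array([0.0, -0.6])]
N0 = np.full(len(phenotypes), 0.1)
sol = solve_ivp(lv_rhs, (0.0, 200.0), N0, args=(phenotypes,), rtol=1e-8)
print("equilibrium population sizes:", sol.y[:, -1])
```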
Adaptive Dynamics
Using adaptive dynamics [3][4][5], the evolution of a phenotype x can be described by a system of differential equations dx/dt. This is derived from the invasion fitness f(x, y), which is the per capita growth rate of a rare mutant with phenotype y in a resident population with phenotype x.
The invasion fitness is zero whenever the mutant is identical to the resident, f(x, x) = 0, and its sign determines whether a rare mutant can invade. From this we can derive the selection gradient, s, as the derivative of the invasion fitness with respect to the mutant phenotype y, evaluated at y = x: s(x) = ∂f(x, y)/∂y |_{y=x}.
The adaptive dynamics are defined as dx/dt = M s(x), where M is the mutation-covariance matrix describing the rate, size, and covariance of mutations in each phenotypic dimension. For simplicity we assume this is the identity matrix. Any formulation of this matrix with a positive diagonal would only change the speed of the evolution in the different dimensions, but would not change the characteristics of any evolutionary dynamics or stable states. The adaptive dynamics are therefore a set of differential equations that describe the evolutionary dynamics in d dimensions for a given set of phenotypes. Of note, while the Gaussian part of the competition kernel α affects whether diversification can occur and affects multi-species dynamics, because the selection gradient is evaluated at y = x, in the adaptive dynamics of monomorphic populations the Gaussian part of α disappears and we are left with just the effects of asymmetric competition. Because of the exponential nature of the carrying capacity functions, the second term of the selection gradient, (∂K(x)/∂x_i)(1/K(x)), reduces to just the partial derivative of the inner (exponent) function with respect to the resident. For the quartic case this contribution is the derivative of the sum of fourth powers of the phenotypic components, while for the radially symmetric case it is the derivative of the fourth power of the distance from the origin. In order to include extinction and speciation events, an iterative algorithm is used: (1) solve for the ecological dynamics; (2) remove any newly extinct populations that fall below a minimum viable population size; (3) solve the adaptive dynamics for a given length of evolutionary time; (4) introduce a new mutant, with a phenotype a small, fixed distance from one of the resident populations, or with a phenotype chosen from a Gaussian distribution with mean equal to the phenotype of one of the resident populations and a given variance σ²_mut; (5) remove the mutant if it is not ecologically viable (i.e., its invasion fitness is negative); and (6) repeat the process until either an evolutionarily stable state is reached or a given amount of time has elapsed.
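The following Python sketch illustrates this iterative algorithm, reusing the alpha(), K() and lv_rhs() functions from the previous sketch. The thresholds, the evolutionary step size, the mutation scale, the numerical selection gradient, and the use of Gaussian mutants are illustrative assumptions; the actual implementation may differ (e.g., analytical gradients, fixed-distance mutants, explicit cluster bookkeeping).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reuses alpha(), K() and lv_rhs() from the previous sketch.
N_SMALL, DT_EVO, SIGMA_MUT = 1e-3, 0.05, 0.05  # assumed thresholds / step sizes

def equilibrate(phenotypes, N0, t_eco=500.0):
    """Step (1): run the ecological dynamics to (approximate) equilibrium."""
    sol = solve_ivp(lv_rhs, (0.0, t_eco), N0, args=(phenotypes,))
    return sol.y[:, -1]

def invasion_fitness(y, phenotypes, N, r=1.0):
    """Per capita growth rate of a rare mutant y in the resident community."""
    load = sum(alpha(y, xj) * Nj for xj, Nj in zip(phenotypes, N))
    return r * (1.0 - load / K(y))

def selection_gradient(x, phenotypes, N, eps=1e-5):
    """Numerical gradient of the invasion fitness at the resident phenotype."""
    g = np.zeros_like(x)
    for k in range(len(x)):
        dy = np.zeros_like(x)
        dy[k] = eps
        g[k] = (invasion_fitness(x + dy, phenotypes, N) -
                invasion_fitness(x - dy, phenotypes, N)) / (2 * eps)
    return g

def evolve(phenotypes, n_steps=1000, rng=np.random.default_rng(0)):
    N = np.full(len(phenotypes), 0.1)
    for _ in range(n_steps):
        N = equilibrate(phenotypes, N)                          # (1) ecology
        keep = N > N_SMALL                                      # (2) extinctions
        phenotypes = [x for x, k in zip(phenotypes, keep) if k]
        N = N[keep]
        phenotypes = [x + DT_EVO * selection_gradient(x, phenotypes, N)
                      for x in phenotypes]                      # (3) adaptive dynamics
        parent = phenotypes[rng.integers(len(phenotypes))]      # (4) propose a mutant
        mutant = parent + rng.normal(0.0, SIGMA_MUT, size=parent.shape)
        if invasion_fitness(mutant, phenotypes, N) > 0.0:       # (5) keep viable mutants
            phenotypes.append(mutant)
            N = np.append(N, 10 * N_SMALL)
    return phenotypes, N                                        # (6) repeat until done
```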
At given intervals, clusters are calculated using a hierarchical clustering algorithm such that any two populations with phenotypes within a small distance, z_small, from each other are part of one cluster. Clusters are just an accounting device and do not affect the dynamics. All parameters used can be found in ??.
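A minimal sketch of such threshold-based clustering is given below, assuming single-linkage agglomeration and an illustrative z_small value; the bookkeeping used in the actual simulations may differ.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def phenotype_clusters(phenotypes, z_small=0.1):
    """Assign cluster labels so that any two phenotypes connected by chains of
    neighbours closer than z_small end up in the same cluster."""
    X = np.asarray(phenotypes)
    if len(X) < 2:
        return np.ones(len(X), dtype=int)
    Z = linkage(X, method="single")          # single-linkage agglomeration
    return fcluster(Z, t=z_small, criterion="distance")
```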
Individual-Based Model
The individual-based model uses the same ecological dynamics as described above and is simulated using the Gillespie algorithm [6]. Individual birth rates are assumed to be constant and equal to 1, while death rates, δ, are frequency dependent and derived from the ecological dynamics.
The death rate of an individual with phenotype x_i is obtained from its total competitive load divided by the local carrying capacity, δ(x_i) = Σ_j α(x_i, x_j) / (K_max K(x_i)), with the sum running over all individuals j in the population. Here the competition function, α, and the carrying capacity function, K, are the same as in the adaptive dynamics, while K_max is a constant that controls the height of the carrying capacity function to convert it from continuous to discrete populations. These rates are taken directly from the Lotka-Volterra equations, where the per capita growth can be considered the birth rate minus the death rate.
The individual-based model is thus a direct analog of the adaptive dynamics. For a larger discussion of the derivation of the individual-based dynamics of this model please refer to [7][8][9]. The simulation algorithm is as follows: (1) initiate the population with a randomly chosen initial population of a predetermined size; (2) update or calculate all individual death rates; (3) calculate the sum of all birth and death rates, U = Σ_i (1 + δ_i); (4) increment time by a random amount drawn from an exponential distribution with mean equal to 1/U; (5) choose a random birth or death event with probability equal to the ratio of its rate to the sum of all rates; (6) if a death rate is chosen, remove the chosen individual; if a birth rate is chosen, create a new individual with phenotype chosen from a Gaussian distribution with mean equal to the parent phenotype and variance σ²_mut; (7) rerun steps 2-6 until the population is extinct or a specified time is reached.
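A minimal Python sketch of this Gillespie loop is shown below, again reusing alpha() and K() from the earlier sketch. The values of t_max, k_max and sigma_mut are assumptions, and death rates are recomputed from scratch at every step; a production implementation would update them incrementally.

```python
import numpy as np

def gillespie(phenotypes, t_max=100.0, k_max=200.0, sigma_mut=0.05,
              rng=np.random.default_rng(0)):
    """Minimal birth-death Gillespie simulation of the individual-based model.
    Birth rate is 1 per individual; the death rate is the competitive load
    divided by k_max * K(x). Reuses alpha() and K() from the earlier sketch."""
    pop = [np.array(x, dtype=float) for x in phenotypes]
    t = 0.0
    while pop and t < t_max:
        n = len(pop)
        # Death rates (the sum includes the focal individual, a small simplification).
        deaths = np.array([sum(alpha(xi, xj) for xj in pop) / (k_max * K(xi))
                           for xi in pop])
        rates = np.concatenate([np.ones(n), deaths])   # births first, then deaths
        total = rates.sum()
        t += rng.exponential(1.0 / total)              # exponential waiting time
        event = rng.choice(2 * n, p=rates / total)     # pick one birth or death event
        if event < n:                                  # birth, with Gaussian mutation
            parent = pop[event]
            pop.append(parent + rng.normal(0.0, sigma_mut, size=parent.shape))
        else:                                          # death
            pop.pop(event - n)
    return pop
```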
Phenotypic clusters are calculated using a hierarchical clustering algorithm in which every individual in a cluster has a phenotype within a small distance of at least one other individual in the cluster. Individuals are added to a cluster at birth or a new cluster is created if the individual does not fit in any existing cluster. A cluster is updated when a member individual dies to see if the cluster splits. All parameters used can be found in ??.
Numerical stability analysis.
While we were unable to analytically determine the stability of the communities that arose in our simulations, we were able to test for evolutionary metastability numerically. However, this method is computationally expensive, so we were only able to test for stability on a small number of representative simulations.
To test the stability of the final population in a simulation, we extensively sampled random mutants around every resident in the community (250 mutants per resident). For each new mutant, we continued the adaptive dynamics simulation for a significant period of time (5% of the length of the original simulation), while not allowing any other branching mutations to arise. We then checked whether the original residents and the mutant were all able to survive (N_i > N_small) and whether the mutant was able to diversify (|z_m − z_i| > z_small for all i, for a mutant with phenotype z_m and residents z_i) from the resident population. Any simulation in which no mutant can invade, coexist, and diversify is deemed an evolutionarily stable community.
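A simplified sketch of this test is shown below, reusing invasion_fitness() and equilibrate() from the earlier sketches. For brevity it only checks whether a sampled mutant can invade and ecologically coexist with all residents; the full procedure described above additionally continues the adaptive dynamics after the mutant is introduced and requires phenotypic divergence. All numerical thresholds are illustrative assumptions.

```python
import numpy as np

def community_is_metastable(phenotypes, N, n_mutants=250, sigma_mut=0.1,
                            t_test=50.0, n_small=1e-3,
                            rng=np.random.default_rng(0)):
    """Return False as soon as some sampled mutant can invade and stably coexist
    with every resident; otherwise return True. Reuses invasion_fitness() and
    equilibrate() from the earlier sketches."""
    for resident in phenotypes:
        for _ in range(n_mutants):
            mutant = resident + rng.normal(0.0, sigma_mut, size=resident.shape)
            if invasion_fitness(mutant, phenotypes, N) <= 0.0:
                continue                               # mutant cannot even invade
            trial = list(phenotypes) + [mutant]
            N_trial = equilibrate(trial, np.append(N, 10 * n_small), t_eco=t_test)
            if np.all(N_trial > n_small):              # all residents and mutant persist
                return False
    return True
```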
While this method is unable to definitively prove stability as it is always possible that an extremely rare mutant could diversify or drive multiple residents extinct, we can at least demonstrate the metastability (as defined in the main text; any state that is maintained for a period significantly longer than its convergence) of these communities. An illustration of metastability can be seen in Figure A. Here, the convergence to the metastable cycle with 10 species takes approximately 25 evolutionary time units, after which the system resides in the metastable state until the end of the simulation, which lasts for another 975 time units. A clearer depiction of the early convergent dynamics can be seen in the main text in Figure 3.
This numerical stability analysis was used to test each simulation illustrated in the main text (Figs 2 and 4). Using the same small mutations as the original simulations, not a single mutant (of over 30,000 tested) was able to invade, coexist with, and differentiate from the original community. The same was true when we instead tested Gaussian mutations (σ mut = 0.05). Only when we tested larger mutations (Gaussian mutations with σ mut = 0.1) were 7 mutants (0.02% of those attempted) from two different simulations able to break the stability and diversify the community into a new, higher diversity state.
At least in the representative simulations tested, only large or extremely rare mutants are able to shift the community, proving that these states are indeed evolutionarily metastable.
Figures from the numerical stability analysis for each of these simulations can be found on-line at https://www.zoology.ubc.ca/~rubin/AltEvoDiversity/.
PDE model.
In addition to the adaptive dynamics and individual-based models, we also ran partial differential equation simulations. PDEs represent the infinite-population approximation of individual-based models [7,9] and are useful for relaxing two assumptions of adaptive dynamics, namely that phenotypes are represented as delta functions and that ecological and evolutionary timescales are separated, without the stochasticity of individual-based models. The infinite-population limit of the individual-based model gives the deterministic formulation as a partial differential equation of the form ∂N(x, t)/∂t = N(x, t)[1 − ∫ α(x, y) N(y, t) dy / K(x)] + δ_mut ∇²N(x, t).
Here, N(x, t) is the distribution of the population with phenotype x at time t, and δ_mut is a diffusion coefficient that can be thought of as analogous to the rate of mutations; it does not have a qualitative effect on our simulations. The competition and carrying capacity functions are the same as described above. The PDE is numerically solved over a lattice. Because of memory limitations, using a high-resolution lattice is infeasible, though we were able to run all 2-dimensional PDE simulations on a 200 x 200 lattice. This lattice is more than detailed enough to reveal any peaks and patterns that appear. The local maxima of the resulting distributions represent centers of clusters of individuals in the individual-based model.
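The following Python sketch illustrates one way to iterate such a PDE on a lattice, for the symmetric (Gaussian) competition kernel, where the competition integral reduces to a convolution. The domain size, grid resolution, time step, diffusion coefficient, kernel width, number of iterations, and the periodic boundary implied by np.roll are all illustrative assumptions rather than the settings used in the study.

```python
import numpy as np
from scipy.signal import fftconvolve

# Explicit finite-difference iteration of the PDE on a square lattice (assumed
# parameters; symmetric Gaussian competition only).
L, n, dt, d_mut, sigma_a = 4.0, 200, 1e-3, 1e-4, 0.5
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

K_grid = np.exp(-(X ** 4 + Y ** 4) / 4.0)                  # quartic carrying capacity
kernel = np.exp(-(X ** 2 + Y ** 2) / (2 * sigma_a ** 2))   # Gaussian competition kernel

N = 0.1 * np.exp(-(X ** 2 + Y ** 2))                       # initial population density
for step in range(5000):                                   # far fewer steps than needed
    load = fftconvolve(N, kernel, mode="same") * dx * dx   # approximates the integral
    growth = N * (1.0 - load / K_grid)
    lap = (np.roll(N, 1, 0) + np.roll(N, -1, 0) +
           np.roll(N, 1, 1) + np.roll(N, -1, 1) - 4 * N) / dx ** 2
    N = np.maximum(N + dt * (growth + d_mut * lap), 0.0)
print("final maximum density:", N.max())
```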
Notably, like the individual-based model, the PDEs are not restricted by the adaptive-dynamics assumption that phenotypes are represented as delta functions.
Instead, the distributions produced by the PDEs "take up space" in the phenotype space. This can result in configurations with fewer local maxima than adaptive dynamics populations when run with the same parameters. Additionally, because the PDEs are a diffusion process, configurations cannot be trapped by small mutations. This makes the PDE simulations unsuited to studying the presence or absence of alternative stable configurations. They are, however, a useful comparison to the adaptive dynamics and individual-based models, as they are fully deterministic and free of the assumptions intrinsic to adaptive dynamics.
When PDE simulations are run with symmetric competition, the resulting population densities are configured in the same 4x4 grid and two concentric circles as in the comparable adaptive dynamics simulations (Figs 2, B). The same concentric circles appear as ridges of high population density, but only the inner one is peaked, and far less distinctly than with the quartic grid. As the adaptive dynamics assume species can be represented by a single phenotype without variance, continuous diversification around the circles is the only way for the population to mimic the circular ridge predicted by the PDE. This implies that with infinite population size (adaptive dynamics and PDE), simulations with symmetric competition and a radially symmetric carrying capacity eventually (though this process is slow) result in no distinct species or phenotypic clusters. However, as noted in the main text, this situation of continual diversification along concentric circles in trait space is likely a degenerate case that is caused by a radially symmetric carrying capacity function and Gaussian competition.
Within species phenotypic variation reduces diversity
For asymmetric competition, PDE simulations with both quartic and radially symmetric carrying capacities result in similar, but less diverse, patterns in comparison to the highest evolutionarily stable diversity state from the adaptive dynamics simulations (10 versus 14 phenotypes for the quartic carrying capacity and 14 versus 16 phenotypes for the radially symmetric case) (Figs 4, B). The reduced diversity is likely because the peaks in the adaptive dynamics are delta peaks, while those in the PDE have non-zero variance in phenotype space, restricting the number of distinct peaks. The radially symmetric carrying capacity simulation did exhibit the same clockwise rotation in both rings as displayed in the adaptive dynamics and individual-based models (video available online, ??).
There has been previous theoretical work predicting that variation within species clusters has a negative effect on coexistence between competing species [10]. As species now occupy a distribution in trait space, rather than a single point, niche differentiation between species is reduced, impeding the maintenance of higher numbers of distinct phenotypes. As the PDE is the infinite population limit of the individual-based models, we would expect the same lower diversity at the global ESC when the individual-based simulations are run with very large communities and mutation sizes greater than some threshold (these simulations were computationally infeasible to run). While we don't expect the other results (the presence of locally stable ESSs or limit cycles) from this paper to be affected by the lower diversity, we feel it is a salient point worth considering when comparing theoretical models of diversification to expectations for natural populations.
All parameters used can be found in ??.
HIV-Dementia Scale as a screening tool for the detection of subcortical cognitive deficits: validation of the Italian version
Mini-Mental State Examination (MMSE) lacks of sensitivity in detecting cognitive deficits associated with subcortical damage. The HIV-Dementia Scale (HDS), a screening tool originally created for detecting cognitive impairment due to subcortical damage in HIV + patients, has proved to be useful in other neurological diseases. Until now, an Italian version of the HDS is not available. We aimed at: (1) validating the HDS Italian version (HDS-IT) in a cohort of cognitively healthy subjects (CN); (2) exploring the suitability of HDS-IT in detecting cognitive impairment due to subcortical damage (scCI). The psychometric properties of the HDS-IT were assessed in 180 CN (mean age 67.6 ± 8.3, range 41–84) with regard to item-total correlation, test–retest reliability and convergent validity with MMSE. Item-total correlations ranged 0.44–0.72. Test–retest reliability was 0.70 (p < 0.001). The HDS-IT scores were positively associated with MMSE score (rS = 0.49, p < 0.001). Then, both the HDS-IT and the MMSE were administered to 44 scCI subjects (mean age 64.9 ± 10.6, range 41–84). Mean HDS-IT total score was close to the original version and significantly lower in the scCI group compared to CN (8.6 ± 3.6 vs. 12.6 ± 2.5, p < 0.001). ROC analysis yielded an optimal cutoff value of 11, with sensitivity of 0.70 and specificity of 0.82. Patients showed poorer scores on HDS-IT compared to CN (12.6 ± 2.5 vs. 8.6 ± 3.6, p < 0.001). Our results support the use of HDS-IT as a screening tool suitable for detecting cognitive deficits with prevalent subcortical pattern, being complementary to MMSE in clinical practice. Supplementary Information The online version contains supplementary material available at 10.1007/s00415-021-10592-9.
Introduction
Early detection of cognitive impairment in the ageing population represents an important issue. It is known that mild cognitive impairment (MCI) in adult patients is a frequent and heterogeneous condition that may be related to different underlying causes, especially neurodegenerative or cerebrovascular diseases [1,2]. Neurological disorders affecting the central nervous system are associated with a wide spectrum of clinical manifestations, often accompanied by the presence of cognitive dysfunction and, in some cases, dementia.
To optimise the diagnostic workup, subjects should undergo a screening assessment for the characterisation of the global cognitive profile, followed by an extensive assessment if cognitive deficits are detected [3]. Accordingly, an ideal screening tool should be relatively simple to administer, not time consuming, and sensitive enough to allow the identification of patients deserving further, in-depth neuropsychological assessment [4]. In particular, screening tests for the assessment of global cognitive functioning should be able to highlight cognitive profiles with a prevalent cortical (i.e. deficits in declarative memory, language, praxis and visuospatial abilities) vs. subcortical (i.e. deficits in attention and arousal, memory retrieval, speed of information processing, motivation and mood) pattern of cognitive impairment, so that clinicians may be better oriented in their examination [5][6][7]. Anatomically, the cortical pattern is related to diseases involving primarily, but not exclusively, the association cortex of the cerebral hemispheres and the medial temporal lobes, and is typically characterised by aphasia, amnesia, agnosia, acalculia, and apraxia. The subcortical pattern occurs in disorders with predominant involvement of the basal ganglia, thalamus, and structures of the brainstem, and is typically characterised by psychomotor slowing, memory impairment, affective and emotional disorders, and difficulties with strategy formation and problem solving [8]. However, though the historic cortical vs. subcortical dichotomy may be useful to identify preeminent neuropsychological profiles in clinical practice [9], the existence of "true" cortical and subcortical disorders is controversial from a functional/neuroanatomical perspective [10].
In clinical practice, the most popular screening tool for assessing global functioning is the Mini-Mental State Examination (MMSE) [11]. The MMSE is composed of several items, most of them requiring the integrity of higher cortical functions (memory, language, orientation and visuo-constructive praxis). However, the MMSE lacks items assessing executive functions; thus, its sensitivity in detecting subcortical patterns of cognitive impairment is low [7,[12][13][14]. Several studies reported that the Montreal Cognitive Assessment (MoCA), another well-known screening test, is superior to the MMSE for the early detection of cognitive impairment in the ageing population [14,15], due to its item composition. In fact, the MoCA is more suitable than the MMSE in assessing visuospatial and executive functions, representing a more challenging task to be used in clinical practice [3]. On the other hand, a brief bedside screening tool such as the Frontal Assessment Battery (FAB) is ideal for the assessment of executive functions, although it cannot replace measures of global cognition such as the MMSE. Studies that investigated the utility of the FAB for differential diagnosis among different dementias gave mixed results, showing that specific FAB sub-items were more suitable than the FAB total score in distinguishing cortical dementias, such as Alzheimer's disease and fronto-temporal lobar dementia, from subcortical vascular cognitive impairment [16][17][18]. However, none of the abovementioned screening tools is adequate for assessing reaction times and processing speed, because they lack time-dependent items.
Actually, subcortical cognitive impairment (scCI) is mainly related to damage in specific subcortical brain regions (i.e. thalamus, basal ganglia, midbrain), but it may also be a consequence of disruption of white matter connection fibres (white matter lesions, WMLs). Accordingly, WMLs, which may disrupt intra- and inter-hemispheric cortico-cortical as well as cortico-subcortical connections, may cause this kind of cognitive impairment [19]. scCI represents a clinical feature of many neurological diseases, such as subcortical ischaemic vascular disease (SIVD), normal pressure hydrocephalus (NPH), multiple sclerosis (MS), Huntington's disease (HD), Parkinson's disease (PD), and progressive supranuclear palsy (PSP) [20]. Nowadays, it is known that, in clinical practice, an accurate neuropsychological assessment may allow early detection of clinical manifestations of these neurological diseases prior to the dementia phase. Among the available tools for scCI detection, none is suitable to be applied as a screening measure in clinical practice [21]. Therefore, a sensitive tool for revealing features of subcortical cognitive impairment is strongly needed. The HIV-Dementia Scale (HDS) is a brief tool originally developed to assess subcortical deficits in individuals affected by HIV infection [22]. Since the HDS proved to be useful in detecting scCI in HIV + patients, its suitability for detecting cognitive impairment in other neurological diseases with subcortical damage has been assessed, giving significant results in NPH and SIVD [21].
Later on, the International version of the HIV-Dementia Scale (I-HDS) [12] has been validated as a cross-cultural screening test to use for detection of HIV dementia within the worldwide community. Until now, normative data for the Italian population are lacking.
The objectives of this study are: (1) to carry out the validation of the HDS Italian version in a cohort of cognitively healthy elderly subjects (CN), and (2) to explore its sensitivity and specificity in detecting subcortical cognitive deficits in a clinical sample of subjects with neurological diseases associated with subcortical damage (scCI).
Participants and assessment procedure
We enrolled 180 CN recruited among relatives of patients attending our Memory Clinic or as volunteers after advertisement. The participants' inclusion criteria included: (i) age between 40 and 85, (ii) good physical and mental health, (iii) no concomitant uncontrolled medical diseases, (iv) Mini-Mental State Examination raw score ≥ 24, and (v) no dementia. Participants were classified as CN by means of an extensive neuropsychological evaluation (see section below) assessing multiple cognitive domains. Scores within the normal range in all cognitive domains led us to define a participant as CN. A subgroup of 27 subjects repeated the HDS-IT after a test-retest interval of 3-10 months (median 7).
We also enrolled 44 consecutive patients attending our Memory Clinic for neurological disorders with subcortical features: 13 with multiple sclerosis (MS), 16 with subcortical ischaemic vascular disease (SIVD), 9 with normal pressure hydrocephalus (NPH) and 6 with HIV+ infection (HIV+). For patients with a diagnosis of MS, we adopted the radiological criterion of a minimum of 4-9 white matter lesions [23]. For SIVD patients, we adopted the radiological criterion of a score of 2-3 on the Fazekas scale [24]. Patients with NPH were included on the basis of clinico-radiological diagnosis. Patients with HIV+ were included on the basis of serological diagnosis.
All of them showed subcortical cognitive impairment (scCI). ScCI was defined as a score ≥ 1.5 SD below the adjusted-mean in one or more cognitive domains, evaluated through an extensive neuropsychological battery, in patients with neurological disorders associated with subcortical damage. A clinical condition of dementia was excluded for all patients.
Neuropsychological testing
All subjects underwent the following neuropsychological battery: the MMSE [25] for the assessment of global cognitive functioning; the Rey Auditory Verbal Learning Test [26] for the evaluation of verbal learning and memory; the digit span forward and backward [27] for the evaluation of verbal short-term memory and working memory; the Trail Making Test-part A and B [28] for the evaluation of visuospatial selective and divided attention and mental shifting; the copy of drawings and copy of drawings with landmarks [26] and the Clock Drawing Test [29] for the evaluation of visuo-constructive praxis; the Raven's coloured progressive matrices '47 [30] for the evaluation of abstract logical reasoning; the phonemic fluency [31] and category fluency [32] for the evaluation of language. Clinical staging was assessed by means of the Clinical Dementia Rating Scale (CDR) [33].
The original version of HIV-Dementia Scale
The HDS original version consists of four subtests. Item 1-attention (max score = 4): modified from the anti-saccadic error task [34]. The patient is asked to look at the examiner's nose and then to focus on the examiner's moving index finger, repeating the task with alternating hands. When the patient is comfortable looking at the finger that moves, the examiner asks him/her to look at the non-moving index finger. This task is practised until the patient becomes familiar with the procedure. Then the patient is asked to perform 20 serial anti-saccades. An error is marked when the patient looks towards the moving finger. Item 2-psychomotor speed (max score = 6): the patient is asked to write the entire alphabet. If the patient is unable to perform it correctly, the examiner asks him/her to write the numbers from 1 to 26, and the time taken is recorded. The time taken to complete this task is converted into a numerical value from 0 to 6. Item 3-memory recall (max score = 4): the patient is asked to repeat and remember four words. The four words used in the HDS-IT memory subtest correspond to the translation of the original version ("dog", "hat", "green" and "peach") [21]. Item 4-construction speed (max score = 2): the patient is asked to copy a drawing of a cube. First, the examiner explains the figure to be copied; the time needed to copy it is recorded and converted into a numerical score from 0 to 2. The maximum HDS score is 16.
Development of the Italian version
The HDS-IT was developed using forward-backward translation. Two researchers separately translated the English version into Italian and then compared the two translations. The resulting draft was translated back into English by an independent native English speaker fluent in Italian who did not know the original version of the scale. The Italian version was compared with the original English version, any discrepancy was discussed, and a final version was adopted after reaching full agreement. This final Italian version is reported in the Supplementary file. As in the original English version, the psychomotor speed item of the HDS-IT includes writing numbers, here from 1 to 21 (instead of 1 to 26 as in the original version), because the Italian alphabet is composed of 21 letters rather than 26 as the English one.
As for the original version, the maximum score is 16.
To avoid interference with the recall of words (for instance, 'CAPPELLO' may interfere with the RAVLT list), we suggest that the administration of the test should be done at least 15 min before the verbal memory test.
Statistical analysis
The data were analysed using R version 3.5. Descriptive statistics were calculated. Student's t test was applied to test the significance of differences in continuous variables between scCI and CN. The Mann-Whitney U test was used whenever appropriate. The gender difference between the groups was assessed via the Chi-Square Test. Test-retest reliability was calculated with Spearman's correlation coefficient. Receiver operating characteristic (ROC) curve analysis was carried out to evaluate the accuracy of the HDS in discriminating scCI from controls. The optimal cutoff value was determined according to the Youden Index. AUC, sensitivity and specificity were provided along with their 95% CI according to the selected cutoff. In all the analyses, two-sided p values < 0.05 were considered statistically significant.
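The analyses were performed in R; purely as an illustration of how an optimal cutoff can be selected with the Youden Index, the following Python sketch shows the computation on hypothetical HDS-IT scores. The score arrays below are invented for the example and are not the study data.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical HDS-IT total scores (0-16); these arrays are invented for the example.
controls = np.array([13, 14, 12, 11, 15, 13, 12, 14, 10, 13])
patients = np.array([8, 9, 6, 11, 7, 10, 5, 9, 8, 12])

y_true = np.concatenate([np.zeros(len(controls)), np.ones(len(patients))])
scores = np.concatenate([controls, patients])

# Lower HDS-IT scores indicate impairment, so the negated score is used as the
# decision value when building the ROC curve.
fpr, tpr, thresholds = roc_curve(y_true, -scores)
auc = roc_auc_score(y_true, -scores)

youden = tpr - fpr                     # Youden index J = sensitivity + specificity - 1
best = int(np.argmax(youden))
optimal_cutoff = -thresholds[best]     # undo the sign flip
print(f"AUC = {auc:.2f}; cutoff = {optimal_cutoff:.0f}; "
      f"sensitivity = {tpr[best]:.2f}; specificity = {1 - fpr[best]:.2f}")
```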
Results
Demographic characteristics of participants are reported in Table 1. CN and scCI did not differ in the distribution of age, gender and education. scCI patients showed lower MMSE mean scores compared to CN (26.2 ± 2.8 vs. 28.3 ± 1.3, p < 0.001).
Validation of the HDS Italian version
The HDS-IT total score was negatively associated with age (rS = − 0.18, p = 0.008), while it was positively associated with education (rS = 0.39, p < 0.001). No associations were found with gender (p = 0.571). HDS-IT and MMSE total scores were positively associated (rS = 0.49, p < 0.001).
Corrected item-total correlations ranged between 0.44 and 0.72. Moreover, the average inter-item correlation was higher than 0.17. Test-retest reliability was assessed in 27 subjects, yielding a score of rS = 0.70 (p < 0.001). The interval between visits was 3-10 months (median: 7).
Exploring HDS-IT suitability in detecting subcortical cognitive deficits
Mean HDS-IT total score was close to that of the original version in both groups and significantly lower in the scCI group compared to the CN group (scCI: 8.6 ± 3.6 vs. CN: 12.6 ± 2.5, p < 0.001). Performing a sub-analysis comparing single items in the two groups, we found significant differences for item 2 (CN: 5.2 ± 1.4 vs. scCI: 2.7 ± 2.5, p < 0.001), and a trend toward significance for item 3 (CN: 3.0 ± 0.9 vs. scCI: 2.1 ± 1.2, p = 0.004). All complete results are shown in Table 2.
Discussion
The purposes of this study were to validate the Italian version of the HDS (HDS-IT) in a cohort of cognitively healthy elderly subjects and to explore its suitability as a sensitive screening tool for detecting subcortical cognitive impairment in subjects with neurological diseases associated with subcortical damage (scCI).
With respect to the first aim (validation of the HDS-IT in a cohort of cognitively healthy volunteers), our results displayed that the HDS-IT revealed good psychometric properties as well as the original version, shown by the criterion validity and test-retest reliability. Regarding the criterion validity, we found a trend in convergent validity between HDS and MMSE (i.e. the better the MMSE score, the better the HDS score). However, these measures did not overlap, because of their complementarity due to different items composition. About test-retest reliability, we found a robust test-retest correlation (rS = 0.70) with a mean interval of 3-10 months. Our result was in line with a previous study that found good performance at 3-9 weeks interval [22] and at 4 months [35]. The use of a wider time interval in our study may represent an advantage to control for possible learning or practice effect, defined as "the capability of an individual to learn and adjust" after repeated administration of a task [36]. This represents a critical issue in clinical practice, since this can affect the test-retest reliability of a task, particularly in cognitively healthy subjects [37,38]. Furthermore, we found that the HDS-IT was inversely associated with age and positively associated with education, whereas it was independent from gender. Our findings are in line with that observed for other screening tests, so a correction for age and education should be considered in future studies, for a better interpretation of the raw scores obtained at the HDS-IT. With regard to the second aim (i.e. the behaviour of the HDS-IT in a clinical sample of scCI patients compared to healthy control), we found that patients with scCI showed poorer scores on the HDS-IT compared to cognitively healthy individuals, even in those with MMSE < 28. This observation further supports the sensitivity of the HDS-IT in detecting cognitive deficits with prevalent subcortical pattern.In particular, those scCI patients who displayed normal scores on MMSE frequently displayed low scores on the HDS-IT. A previous study found significant correlations of the HDS scores with neuropsychological measures exploring attention/working memory, processing speed and executive functions, supporting the usefulness of this test for detection of subcortical cognitive deficits [39]. In our study, the capability of HDS-IT in detecting subcortical cognitive impairment was accomplished by comparing HDS-IT performance between patients with scCI and CN. Overall, our results support the use of HDS as a screening tool for detecting subcortical cognitive deficits, being complementary to MMSE in clinical practice.
Our study has some limitations. While all patients in the scCI group underwent a neuroimaging acquisition, this was not available for some individuals in the control group. Therefore, the actual vascular load (i.e. white matter hyperintensities and subcortical damage) might have been underestimated in the control group. Moreover, the scCI group was heterogeneous in terms of diagnosis, although all of the patients shared a subcortical pathophysiology underlying their neurological disease. Only six patients with HIV infection were included, diagnosed on the basis of clinical and laboratory data, and no brain imaging data were available for them. Therefore, in the present study, the suitability of the Italian version of the HDS in the original test's target population could not be fully replicated.
Further studies should be performed to validate the HDS in other diseases causing scCI, such as Parkinson's disease or Huntington's disease, as recommended previously [21]. Furthermore, future studies could explore the performance of the HDS-IT in cortical-type neurodegenerative diseases (e.g. Alzheimer's disease), in which subcortical damage may be involved in the pathogenesis and accompany the neurodegenerative process, though to a lesser extent.
In conclusion, our results suggest that the HDS-IT is able to detect subcortical deficits in a population of patients with subcortical neurological disorders (i.e. SIVD, NPH, MS and HIV infection). The HDS-IT showed good psychometric properties, so it may represent a suitable screening tool for clinical practice, complementary to the MMSE.
Funding Open access funding provided by Università degli Studi di Perugia within the CRUI-CARE Agreement. The present study had no external funding source.
Data availability Data and material are available upon reasonable request.
Conflicts of interest
The authors declare that there is no conflict of interest.
Ethics approval
The study protocol was approved by the Local Ethics Committee (CEAS Umbria).
Consent to participate All participants gave their written consent.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Association of rib anomalies and childhood cancers
Background: Congenital anomalies have been found more often in children with cancer than in those without. Rib abnormalities (RAs) have been associated with childhood cancer; however, studies have differed in the type of RAs and cancers implicated. Methods: Rib abnormalities were assessed predominantly by X-ray in a hospital-based case–control study. Results: There was a significant difference in the number of cases vs controls with RAs after controlling for age and sex, specifically for acute myelogenous leukaemia, renal tumours, and hepatoblastoma. Conclusion: The results of this study support previous reports that there is an association of rib anomalies with childhood cancer.
Multiple studies have demonstrated an association between morphological abnormalities and paediatric cancer (Evans et al, 1993; Narod et al, 1997; Merks et al, 2008). Associations between congenital anomalies and cancer predisposition syndromes are noted in single gene disorders such as Gorlin syndrome, Fanconi anaemia, and Wilms tumour 1 (WT1) mutation-related disorder (Pelletier et al, 1991; Cowan et al, 1997; Kimonis et al, 1997; Alter et al, 2003). Even in the absence of single gene disorders, several epidemiological studies have provided data showing an association between childhood cancer and rib anomalies (RAs; Schumacher et al, 1992; Merks et al, 2005; Loder et al, 2007). Normally, an individual has 12 pairs of ribs, a total of 24 ribs. Abnormalities of the ribs can be numerical (e.g., >24 or <24 ribs) or structural (e.g., cervical ribs, bifid ribs, synostoses, and segmentation defects). All three previous studies examining RAs have reported an association with childhood cancer; however, the studies differed in the specific type of RAs implicated. This study sought to better understand and more robustly describe the association between childhood cancer and morphological defects of the ribs in a US population.
Subject selection
Rib anomalies were assessed in a hospital-based case-control study. Cases consisted of all paediatric haematology and oncology and bone marrow transplantation (BMT) patients treated at the University of Minnesota Medical Center-Fairview in Minneapolis, MN for malignancy during 2003-2009. Cases must have been diagnosed between the ages of 0-19 years and have been imaged during the study period. Children with a known syndrome (e.g., bone marrow failure syndromes, Down's syndrome, and mucopolysaccharidoses) identified through our database were excluded; we expect that only a very small number of syndromic cases were missed. Controls were randomly selected paediatric patients who received a chest X-ray at Fairview Ridges Hospital in Burnsville, MN during June and October of 2003-2008. Controls were chosen from this community hospital as they more likely represent the general Twin Cities paediatric population. Indications for chest X-rays in controls included asthma or shortness of breath, bronchitis, chest pain, possible pneumonia, trauma, and others (e.g., foreign body). The study was approved by the University of Minnesota Institutional Review Board.
Data collection and quality assurance
Images were reviewed for numerical and structural RAs according to previously described scoring methods (Coury and Delaporte, 1954; Merks et al, 2005). The latter were cervical ribs (left, right, or bilateral, not including transverse apophysomegalies), bifid ribs (left, right, or bilateral), rib synostoses/fusion (left, right, and bilateral), vertebral segmentation anomaly, and post-surgical repairs. Chest X-rays were evaluated whenever possible, with additional images evaluated when clarification was needed. When a chest X-ray was unavailable, magnetic resonance imaging or computed tomography (CT) was evaluated. An electronic database was created using FileMaker Pro 10 software (Filemaker Inc., Santa Clara, CA, USA) to assist in abstracting data. The abstraction instrument was first tested by having two radiologists independently review a random sample of 100 images to assess the interrater reliability (kappa statistic) of the abstraction tool. The initial abstraction tool showed substantial agreement for the availability of images (kappa (k) = 1.00), rib number (k = 0.75), and bifid ribs (k = 0.79) (Landis and Koch, 1977). Moderate agreement was noted for the evaluability of images (k = 0.57) and cervical ribs (k = 0.52). Improvements to the abstraction tool included clarification of the definition of which images were considered not evaluable owing to the inability to clearly visualise the C7 and L1 vertebrae. After the original review of images, two radiologists analysed all positive images. Of the 82 images scored positive by the resident, the radiologists found 77 positive images, indicating 94% agreement. A complete reanalysis of the radiologists' data found no substantial differences compared with the resident (data not shown).
Using the improved abstraction instrument, one radiology resident reviewed each radiograph, followed by post hoc evaluation of a random sample of 100 images by two radiologists to determine interrater reliability for availability of images (k = 1.0), evaluability of images (k = 0.80), and the presence of RA (k = 0.85). The high level of agreement between the radiologists and the radiology resident provided confidence to allow the resident's evaluations to stand in the analysis. Blinding reviewers to case status was not entirely possible; however, they were not made aware of the study's hypothesis until completion of data collection.
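A small illustration of the kappa agreement statistic used here (with hypothetical ratings, not the study data) can be computed with scikit-learn; a weighted variant is also available for ordinal scores.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical presence/absence calls by two readers on the same images
rater_a = [1, 0, 1, 1, 0, 0, 1, 0]
rater_b = [1, 0, 0, 1, 0, 0, 1, 1]

kappa = cohen_kappa_score(rater_a, rater_b)                       # unweighted
weighted = cohen_kappa_score(rater_a, rater_b, weights="linear")  # for ordinal data
print(f"kappa = {kappa:.2f}, weighted kappa = {weighted:.2f}")
```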
Statistics
Pearson's chi-squared test was used to assess categorical data differences between cases and controls. Dichotomous variables were created for normal or abnormal ribs (any RA including abnormal rib number, cervical ribs, bifid ribs, and rib synostoses), rib number (24 vs. <24 or >24), cervical ribs, and bifid ribs. Only one rib synostosis/fusion or segmentation defect was detected, so these were not analysed. All cases as well as individual cancer types with greater than five RAs were analysed. Unconditional logistic regression was used to calculate the ORs and 95% CIs for RAs adjusting for age and sex.
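A minimal sketch of the adjusted odds-ratio computation (with a synthetic, hypothetical dataset and statsmodels rather than SAS) could look like this; exp(beta) gives the OR and exponentiating the confidence limits gives its 95% CI.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "rib_anomaly": rng.integers(0, 2, n),
    "age": rng.integers(0, 20, n),
    "sex": rng.integers(0, 2, n),
})
# Hypothetical outcome loosely related to rib anomalies
df["case"] = (rng.random(n) < 0.2 + 0.15 * df["rib_anomaly"]).astype(int)

X = sm.add_constant(df[["rib_anomaly", "age", "sex"]].astype(float))
fit = sm.Logit(df["case"], X).fit(disp=0)

odds_ratios = np.exp(fit.params)       # OR = exp(beta)
conf_int = np.exp(fit.conf_int())      # 95% CI on the OR scale
print(odds_ratios["rib_anomaly"], conf_int.loc["rib_anomaly"].tolist())
```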
Sensitivity analyses were completed to examine the possibility of bias introduced from the selection of cases and controls; these included analyses with each indication for X-ray in controls dropped in turn, exclusion of all individuals who resided outside the state of Minnesota, restriction by image type, and separate analyses of paediatric BMT and haematology/oncology patients. All statistical analyses were performed using SAS Version 9.2 software (SAS Institute Inc., Cary, NC, USA).
RESULTS
There were 625 eligible cases (paediatric haematology/oncology, n = 409; BMT, n = 216) and 1499 eligible controls (Table 1). Controls had a higher percentage of available images (93.2%), but a lower percentage of evaluable images (81.2%) (Table 2). Cases that had cancer types not frequently requiring chest imaging (e.g., brain tumours) were more likely to have no images available, but on the whole most cases had multiple chest X-rays or CT images. Controls, on the other hand, were selected based on having had a single chest X-ray, and the indications for imaging were less likely to necessitate a chest X-ray of the entire rib cage or follow-up imaging. Reasons for non-evaluation included inability to visualise the cervical vertebra (C7), the lumbar vertebra (L1), both C7 and L1, or poor quality. The resulting ratio of evaluable images for cases to controls was 1:2.5. The radiologist used various means for identifying RAs, with preference given first to X-ray followed by CT scan. The image types available differed in cases and controls (P < 0.0001). Controls had a higher percentage of X-rays (99.6% vs 88%) and cases had a higher percentage of available CTs (12.0% vs 0.2%) (Table 1). Study participants varied significantly by age at first chest imaging, ethnicity, and residence but did not differ by gender (Table 2).
Rib number assessed as an integer differed in cases and controls (Fisher's exact P-value = 0.008) (Table 3). When categorised as less than or greater than 24 ribs, the crude analysis was borderline significant with an OR of 1.57 (95% CI: 0.98, 2.53), and when adjusted for age and sex it attained significance with an OR of 1.66 (95% CI: 1.00, 2.74) (Table 4). A similar association was seen after excluding BMT cases (OR = 1.78, 95% CI: 1.01, 3.12).
Presence of any RA was also borderline significant in the crude analysis (OR = 1.55, 95% CI: 0.98, 2.46), but significant after adjustment (OR = 1.60, 95% CI: 1.0, 2.65) (Table 4). Results for individual cancer types are shown in Table 5. Cases with renal tumours and acute myelogenous leukaemia (AML) had a statistically significant increased odds of RAs compared with all controls in both the crude and adjusted models. Hepatoblastoma was not included in our main analysis due to a limited number of cases, but two out of the five cases had a RA (crude OR = 14.14, 95% CI: 2.34, 88.83; adjusted OR = 14.43, 95% CI: 2.34, 88.83). All other cancer types did not show a significant association with RAs (Table 5). The same analysis was completed for the association of abnormal rib number, as opposed to total RAs, with individual cancer type, with similar findings (data not shown).
Dropping each control X-ray indication or BMT cases did not markedly alter results (data not shown). Excluding all individuals living outside the state of Minnesota or X-ray only images greatly reduced the number of analysable cases making the associations essentially null. Interestingly, when RAs and rib number were separated into two ethnic categories (Caucasian and non-Caucasian) a statistically significant result was noted for non-Caucasians (OR = 2.13, 95% CI: 1.1, 4.08) but not in Caucasians alone (OR = 1.06, 95% CI: 0.44, 2.53).
DISCUSSION
We detected a moderate but fairly consistent association of RAs and childhood cancer, which remained despite several sensitivity analyses that we performed to compensate for limitations. The findings further appeared to be strongest in AML, renal tumours, and hepatoblastoma. Our results offer partial confirmation of three previous studies on the topic. Loder et al (2007) and Schumacher et al (1992) reported an association of RAs with abnormal rib number, whereas Merks et al (2005) failed to confirm this association. Merks et al and Schumacher et al (1992) found an association of cervical ribs and overall childhood cancer, but the studies showed discrepant results as to which childhood cancers were associated with cervical ribs. It should be noted that transverse apophysomegalies were not included in our definition of cervical ribs, which may account for the lack of a positive association found in our study. Our results are similar to those of Loder et al (2007), with a small percentage of identifiable cervical ribs (Schumacher et al, 1992). Our results exhibited an increased number of RAs in CNS neoplasms, but the result was not significant. A reduced number of CNS and miscellaneous intracranial and intraspinal neoplasms in our analysis may have limited the power to detect a significant association. The US study by Loder et al (2007) is closest to our population, which may account for the similarity of results between the studies. Data on RAs may not be generalisable to all ethnic populations. To the extent that we could rely on the data supplied and examine ethnicity (Caucasian and non-Caucasian), we found an association limited to non-Caucasians, although this is not a large percentage of our study population. Future studies may help to better understand the role of ethnicity in RAs and the association with childhood cancer.
The hospital-based case-control study design had several limitations. The study population was created from a convenience sample, which to some extent limited our analyses. Case and control selection has the potential to introduce bias. Cases in our study were obtained from a tertiary care centre. Cases were likely unaware of RAs and therefore unlikely to be differentially ascertained on this basis. The underlying cohort that gave rise to the cases in our study would be difficult to define. In an attempt to best replicate the cohort, controls were obtained from a hospital-based clinic to better represent the paediatric population of Minnesota, but we cannot rule out bias due to mismatch between the sources of cases and controls. Controls were included in our study for a variety of conditions to dilute any effect of one condition increasing the likelihood of obtaining a chest X-ray due to having a RA.
Our results indicate a modest but fairly robust association between RAs and paediatric cancer generally in line with other findings. Particularly, RAs were associated with increased odds of the individual cancer types AML, renal tumours, and hepatoblastoma. Paediatric cancer aetiology remains an elusive area of research with great need for better understanding. Prospective studies examining the genetics of RAs and childhood cancer may provide insight into the pathways leading to the development of paediatric cancer. The existence of common genetic and/or developmental origins in congenital anomalies and childhood cancer is a rich area for future investigation.
MARKOV CHAIN MONTE CARLO ANALYSIS OF CHOLERA EPIDEMIC
Several mathematical models have been designed to understand the dynamics of cholera epidemics, some of which considered direct and indirect transmission. In this study a system of ordinary differential equations is developed by splitting the class of infected individuals into symptomatic and asymptomatic infected individuals, with the incorporation of water treatment as a control strategy. Theoretically, the developed model is analysed by studying the stability of equilibrium points. The results of the analysis show that there exists a locally stable disease-free equilibrium point E0 when R0 < 1 and an endemic equilibrium E∗ when R0 > 1. Numerically, the identifiability of parameters is examined by least-squares and Markov chain Monte Carlo methods. Both methods are used as tools to analyze the developed model. The results show that the parameters are identifiable.
Introduction
Cholera is a living testimony of poor sanitary conditions. It is a severe waterborne infectious disease caused by the vibrio cholerae bacterium [25]. Cholera has a short incubation period, from less than one day to five days. It is characterized by severe watery diarrhoea caused by the production of cholera toxin by vibrio cholerae bacteria in the small intestine, and it can cause death within three to four hours if untreated. It is caused by eating food or drinking unsafe water contaminated with vibrio cholerae. It has been proved that pathogenic vibrio cholerae can survive refrigeration and freezing in food supplies [24].
Cholera has been declared a public health problem by the World Health Organization (WHO).
Hence, there is a need to find better ways of dealing with cholera so as to reduce cases of cholera in different countries of the world. According to WHO reports, about 1.4 to 4.3 million cases of cholera are reported each year worldwide and more than 140,000 deaths per year are attributed to cholera. The cholera outbreak in Tanzania which began in August 2015 had resulted in over 24,000 cases as of 20 April 2016 and caused 378 deaths [27]. Several measures have been suggested by WHO to prevent cholera; these include environmental sanitation, water treatment, provision of clean water, and provision of education on the effects of cholera [26].
A number of studies have been conducted to highlight the spread of infectious diseases in a deterministic context. For example, the mechanistic model of SIRS form for cholera in [9] explains monthly cholera death counts in the twenty-six districts of the former British East Indian province of Bengal during the period 1891-1940. The model incorporated both transmission due to human prevalence via a mass action term and transmission from the environmental reservoir. One of the three models proposed is a two-path model which includes a class for severe infections as well as a class for mild, inapparent infections. However, the model does not allow for feedback from infected individuals into the environmental reservoir. The SIWR model of [19,5] allows for infections from both a water compartment (W) and direct transmission, and considers the feedback created by infected individuals contaminating the water. One would also like to allow for the possibility of asymptomatic individuals excreting vibrio cholerae into the water reservoir. The mathematical model with a compartment for asymptomatic infected individuals developed in [11] considers only direct transmission of the disease.
There is also literature that deals with measures for cholera treatment and control. For example, a mathematical model for the dynamics of cholera with control measures such as educational campaigns, vaccination, sanitation, and treatment as control strategies for limiting the disease is explained in [4]. That mathematical model considers infected individuals as a single compartment. However, it could be better to divide the infected individuals into two groups, i.e., asymptomatic and symptomatic infected individuals, in order to observe the contribution of vibrio cholerae to the environment from each compartment. A mathematical model (SIR-C) for cholera dynamics with a control strategy in Ghana was proposed in [16]. That model considers the infected individuals as a single compartment, with limited numerical analysis. In this study we extend the deterministic model developed in [16] by splitting the infected compartment (I) into two sub-groups, symptomatic infected (I_s) and asymptomatic infected (I_a) individuals, in order to observe the contribution of vibrio cholerae to the environment from each compartment.
Model Formulation and Theoretical Analysis
We formulate the basic model for the dynamics of cholera with two subpopulations: bacteria (pathogen) and individuals. Individuals are subdivided into four compartments S, I_s, I_a and R, all of which depend on time t, although the dependency is dropped for notational convenience. Here S denotes susceptible individuals, who contract the disease at rate β; the influx of susceptibles comes from a constant recruitment rate b, and susceptibles move to the infectious classes with probabilities p and q respectively. Symptomatic infected individuals (I_s) become newly infected from S with probability p, contribute vibrio cholerae to the environment through excretion at a rate α_1, and die due to natural death and due to the disease at rates µ and d. Asymptomatic infected individuals (I_a) become newly infected from S with probability q and contribute vibrio cholerae to the environment through excretion at a rate α_2. R denotes recovered individuals, with I_s and I_a recovering at rates r_1 and r_2 respectively. The concentration of vibrio cholerae in water is denoted by B. The concentration of bacteria decreases due to the mortality rate φ and due to water treatment at a rate δ. The total population is given by N(t) = S(t) + I_s(t) + I_a(t) + R(t) at any given time t. In formulating the model, the following assumptions are imposed: (1) the population is closed (i.e., there is neither immigration nor emigration); (2) both symptomatic infected (I_s) and asymptomatic infected (I_a) individuals contribute to the population of vibrio cholerae in the aquatic environment at the rates α_1 and α_2 respectively; (3) human birth and death occur at different rates (i.e., b and µ respectively); (4) water treatment is included in the model as a control strategy; (5) the population is homogeneously mixed, i.e., each individual within the population is susceptible to the disease.
In this study we assume that each susceptible individual has an equal chance of acquiring cholera, through the recruitment rate b and through consuming water containing vibrio cholerae in the reservoir, at the force of infection λ = βB/(κ + B), where B/(κ + B) is the saturating fraction of the vibrio cholerae concentration and κ is the concentration of vibrio cholerae in the water reservoir that yields a 50% probability of infection for the susceptible population. The cholera model can then be described by the deterministic system of nonlinear ordinary differential equations (2.1), with initial conditions S(0) > 0, I_s(0) ≥ 0, I_a(0) ≥ 0, R(0) ≥ 0, B(0) ≥ 0 and p + q = 1.
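A minimal numerical sketch of this compartmental structure is given below. It follows the verbal description above (recruitment bN, saturating force of infection λ = βB/(κ + B), new infections split into I_s and I_a with probabilities p and q, shedding at rates α_1 and α_2, and bacterial decay at φ + δ); the parameter values and time horizon are illustrative assumptions, not the fitted values of this study.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sirb(t, y, b, beta, kappa, p, q, mu, d, r1, r2, a1, a2, phi, delta, N):
    S, Is, Ia, R, B = y
    lam = beta * B / (kappa + B)              # saturating force of infection
    dS  = b * N - lam * S - mu * S            # recruitment, infection, natural death
    dIs = p * lam * S - (mu + d + r1) * Is    # symptomatic infections
    dIa = q * lam * S - (mu + r2) * Ia        # asymptomatic infections
    dR  = r1 * Is + r2 * Ia - mu * R          # recoveries
    dB  = a1 * Is + a2 * Ia - (phi + delta) * B   # shedding minus decay/treatment
    return [dS, dIs, dIa, dR, dB]

# Illustrative parameter values (assumptions, not the paper's estimates)
params = dict(b=1e-4, beta=0.3, kappa=1e6, p=0.3, q=0.7, mu=1e-4, d=0.01,
              r1=0.2, r2=0.1, a1=10.0, a2=1.0, phi=0.3, delta=0.1, N=1e6)
y0 = [1e6, 1.0, 1.0, 0.0, 1.0]
sol = solve_ivp(sirb, (0, 200), y0, args=tuple(params.values()), max_step=1.0)
```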
Computation of SIRB Basic Reproduction Number
In epidemiology a key parameter is the basic reproduction number R_0, defined as the average number of secondary infectious cases generated by a single primary infectious case introduced into a wholly susceptible population [18]. To compute R_0, we use the next-generation matrix approach as described in [20]. R_0 is obtained as the largest (dominant) eigenvalue (spectral radius) of FV^{-1}, where F_i is the rate of appearance of new infections in compartment i, V_i is the net transition between compartments, E_0 is the disease-free equilibrium, and X_i stands for the compartments in which the infection is in progression, i.e., I_s, I_a and B in the model (2.1). Using the linearization method, the associated matrices F and V are computed at the DFE.
For the inverse of the matrix V to exist, the condition |V| ≠ 0 must hold, which is satisfied since µ, r_1, r_2, α_1, α_2, δ, φ and d are positive. The next-generation matrix FV^{-1} can then be formed, and R_0 is computed as its spectral radius, giving expression (2.2).
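The following symbolic sketch illustrates one common way to carry out such a computation; the particular F/V splitting shown (with shedding placed among the transition terms) is an assumption for illustration, so the resulting expression is the standard SIWR-type form rather than necessarily the paper's own Equation (2.2).

```python
import sympy as sp

beta, kappa, p, q, mu, d, r1, r2, a1, a2, phi, delta, b, N = sp.symbols(
    "beta kappa p q mu d r1 r2 alpha1 alpha2 phi delta b N", positive=True)

S0 = b * N / mu                          # susceptible population at the DFE
# Infected compartments ordered as (I_s, I_a, B); new human infections are
# driven by the water compartment, shedding is treated as a transition term.
F = sp.Matrix([[0, 0, p * beta * S0 / kappa],
               [0, 0, q * beta * S0 / kappa],
               [0, 0, 0]])
V = sp.Matrix([[mu + d + r1, 0, 0],
               [0, mu + r2, 0],
               [-a1, -a2, phi + delta]])

K = F * V.inv()                          # next-generation matrix
# F has rank one here, so the spectral radius of K equals its trace.
R0 = sp.simplify(K.trace())
print(R0)
```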
Positivity and Boundedness of Solutions
It can be shown that the state variables of model (2.1) are non-negative and that the solutions remain positive for t ≥ 0. The parameters in our model are also assumed to be positive. We further show that the feasible solutions are bounded in a region Φ = {(S, I_s, I_a, R, B) ∈ R^5_+}. Lemma 2.1. Let the initial values satisfy S(0) ≥ 0, I_s(0) ≥ 0, I_a(0) ≥ 0, R(0) ≥ 0 and B(0) ≥ 0; then the solutions of model (2.1) remain non-negative for all t ≥ 0. Proof.
Considering the first equation in (2.1), separating the variables of the resulting differential inequality (2.3) and integrating shows that S(t) remains non-negative. By considering the second equation in (2.1), separating the variables of the corresponding inequality (2.4) and integrating, I_s(t) remains non-negative. By considering the third equation in (2.1), separating the variables of inequality (2.5) and integrating, I_a(t) remains non-negative. The same method can be applied to the fourth and fifth equations in (2.1) to show that R(t) and B(t) remain non-negative. Therefore, the solution of the model system (2.1) is always positive. This completes the proof.
Lemma 2.2. The solutions of the model system (2.1) are contained in, and remain in, the region Φ for all time t ≥ 0. Proof. Consider the total population N. Using the integrating factor e^{µt}, the solution of the equation for N is bounded. For the bacteria variable, the boundedness is shown by integrating the corresponding equation with the integrating factor e^{(δ+φ)t}, which gives a bounded solution, where A is a constant of integration. This implies that N and all the other variables (S, I_s, I_a, R and B) are bounded, and all solutions starting in Φ approach, enter or stay in Φ. This completes the proof.
Local Stability of the Disease Free Equilibrium
Local stability of the DFE can be analyzed using R_0 as the bifurcation parameter: the DFE is locally asymptotically stable if R_0 < 1 and unstable when R_0 > 1. The DFE of the model system (2.1) is given by E_0 = (bN/µ, 0, 0, 0, 0). The DFE E_0 of the system (2.1) is locally asymptotically stable if R_0 < 1 and unstable if R_0 > 1. Proof. The Jacobian matrix of the system is evaluated at an arbitrary equilibrium.
The characteristic equation of the Jacobian matrix is then formed. Since the first and fourth columns of the matrix contain only diagonal terms, they yield the eigenvalue λ_1 = −µ, which is negative. The other eigenvalues are obtained after reducing the first and fourth columns and their corresponding rows, leading to Equation (2.7). Simplification of Equation (2.7) leads to Equation (2.8), which can be written in the form of Equation (2.9) and then Equation (2.10). To ensure that the remaining eigenvalues of Equation (2.10) have negative real parts, we employ the Routh-Hurwitz stability criterion [17]. The conditions are M_1 > 0, M_3 > 0 and M_1 M_2 > M_3. M_1 is already non-negative, and M_3 is non-negative if and only if R_0 < 1.
The disease-free equilibrium is therefore locally asymptotically stable if R_0 < 1. Rewriting M_3, we see that when R_0 < 1, M_3 is positive, and when R_0 > 1, M_3 is negative, under the condition that L is always positive, which holds since the values of the parameters are all positive. This proves the statement above, i.e. the disease-free equilibrium of the system is locally asymptotically stable if R_0 < 1 and unstable if R_0 > 1. This completes the proof.
Theorem 2.2. The DFE is globally asymptotically stable if R_0 < 1. Proof. For global asymptotic stability, we use the concept of Metzler matrices proposed in [2]. The system must be written in the form dX/dt = M(X, Z), dZ/dt = N(X, Z), where X ∈ R^m denotes the uninfected compartments and Z ∈ R^n denotes the infected compartments. The DFE is globally asymptotically stable for the system provided that R_0 < 1 and the conditions stated below hold.
H_1: The system is defined on a positively invariant set Ω of the non-negative orthant, and for the reduced system dX/dt = M(X, 0) the equilibrium X* is globally asymptotically stable.
H_2: The matrix G = ∂N/∂Z (X*, 0) is an M-matrix (the off-diagonal elements of G are non-negative).
Therefore X = (S, R) and Z = (I_s, I_a, B). At the disease-free equilibrium Z = (I_s, I_a, B) = 0, so B = 0, I_a = 0 and I_s = 0, which leads to Ṡ = bN − µS and Ṙ = −µR. Their corresponding solutions satisfy R(t) → 0 and S(t) → bN/µ as t → ∞, regardless of the values of R(0) and S(0).
Hence, we conclude that the reduced system is globally asymptotically stable at the equilibrium point (bN/µ, 0); thus H_1 is satisfied. The matrix N(X, Z) is then examined.
We observe that N(X, Z) is less than zero, i.e., N(X, Z) < 0. Therefore, H_2 is not satisfied.
We conclude that the disease-free equilibrium may not be globally asymptotically stable. This completes the proof.
Endemic Equilibrium Point and Local Stability
The endemic equilibrium point of the model system (2.1) is given by E* = (S*, I_s*, I_a*, R*, B*) with I_s ≠ 0, I_a ≠ 0 and B ≠ 0. It can be obtained by setting the right-hand side of each equation of the model system (2.1) equal to zero, and it exists for R_0 > 1.
Theorem 2.3. If R_0 > 1, the endemic equilibrium E* exists and is locally asymptotically stable. Proof. The local stability of the endemic equilibrium is established from the eigenvalues of the Jacobian matrix evaluated at the endemic equilibrium. From this it is observed that λ = −µ < 0 is an eigenvalue. The other four eigenvalues can be obtained from the characteristic polynomial of the 4 × 4 block matrix J_1.
The remaining condition for a stable system is that Det(J_1) > 0.
The easiest way to find the determinant of J_1 is to expand along column 2. For Det(J_1) > 0, the condition D_1 > D_3 must hold.
Since tr(J_1) < 0 and Det(J_1) > 0, we conclude that the system is stable, which, by Equation (2.2), corresponds to R_0 > 1. Therefore, the endemic equilibrium is locally asymptotically stable when R_0 > 1. This completes the proof.
Global Stability of Endemic Equilibrium Point
Theorem 2.4. If R_0 > 1, the endemic equilibrium E* of the model system (2.1) is globally asymptotically stable. Proof. To establish the global stability of the endemic equilibrium E*, we construct a positive Lyapunov function V and take its derivative; substituting the derivatives from Equation (2.1) into Equation (2.6) and then, by direct substitution into Equation (2.11), we obtain (2.12). The endemic equilibrium of the model system (2.1) is given by E* = (S*, I_s*, I_a*, R*, B*); it can be obtained by setting the right-hand side of each equation of the model system (2.1) equal to zero, which gives (2.13). Equation (2.12) then reduces accordingly; collecting the positive and negative terms of the system together, we obtain (2.15) and (2.16). From Equation (2.16), if M < Z, then dV/dt is negative, i.e., dV/dt < 0. Therefore, the largest compact invariant set is the singleton {E*}, where E* is the endemic equilibrium of the model system (2.1). By LaSalle's invariance principle, the endemic equilibrium E* is globally asymptotically stable in Ω if M < Z. This completes the proof.
Numerical Results and Discussions
In this section, simulation and parameter estimation for the developed model (2.1) are carried out using least-squares and adaptive Markov chain Monte Carlo methods as in [13]. Data are created by solving the ODEs and then corrupting the output with relative Gaussian noise whose standard deviation is 0.5. The parameter values used are taken from the literature; substituting these values into (2.1) and simulating the ODEs leads to the results shown in Figure 1. From Figure 1, we see that the susceptible variable is decreasing because some of its members move into the I_a and I_s compartments. As time goes on, both I_a and I_s increase and later decrease after a period of time, due to the control measures taken; R increases exponentially, which implies that individuals reaching the compartment R do not return to the other compartments and remain within it. The parameters of model system (2.1) were estimated by the least-squares method.
This involves minimizing the sum of squares of the residuals. The results are shown in Table 1.
From Table 1, it is observed that the estimates of the parameters are indeed close to the literature values, which in this case are treated as true values. This implies that the least-squares method performs well for the cholera model developed.
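As a rough illustration of this fitting step, the sketch below reuses the sirb() right-hand side and the params/y0 values from the simulation sketch above, generates synthetic observations of the symptomatic class corrupted with relative Gaussian noise (an illustrative noise level is used), and recovers β, α_1 and α_2 by least squares; it is not the authors' code, and the starting values are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

t_obs = np.linspace(0, 100, 50)
rng = np.random.default_rng(0)

def simulate_Is(beta, a1, a2):
    # Vary only beta, alpha1, alpha2; keep the remaining parameters fixed.
    p = dict(params, beta=beta, a1=a1, a2=a2)
    sol = solve_ivp(sirb, (t_obs[0], t_obs[-1]), y0,
                    args=tuple(p.values()), t_eval=t_obs, max_step=1.0)
    return sol.y[1]                           # symptomatic infected I_s(t)

# Synthetic data: model output corrupted with relative Gaussian noise
data = simulate_Is(0.3, 10.0, 1.0) * (1 + 0.05 * rng.standard_normal(t_obs.size))

fit = least_squares(lambda th: simulate_Is(*th) - data,
                    x0=[0.1, 5.0, 0.5], bounds=(0, np.inf))
print(fit.x)    # least-squares estimates of beta, alpha1, alpha2
```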
In MCMC parameter sampling, to know whether our chains have converged, we use assessment methods such as the MCMC summary, trace plots, scatter plots, marginal posterior distributions and autocorrelation functions. The initial values used are S = 1,000,000, I_s = 1, I_a = 1, R = 1, B = 1, and we generated 100,000 samples using an initial proposal covariance of 0.1. We also calculated the basic reproduction number R_0 using the true values, the least-squares estimates and the MCMC means; the R_0 values are 5.7, 6.5 and 5.01, respectively.
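A simplified random-walk Metropolis sampler for the same three parameters is sketched below; it is only a stand-in for the adaptive MCMC of [13], reuses simulate_Is() and data from the least-squares sketch above, and assumes a flat prior on positive parameter values together with the relative-Gaussian-noise likelihood described in the text.

```python
import numpy as np

def log_posterior(theta, sigma=0.5):
    """Log posterior under a flat prior on theta > 0 and relative Gaussian noise."""
    if np.any(np.asarray(theta) <= 0):
        return -np.inf
    resid = (simulate_Is(*theta) - data) / (sigma * np.abs(data))
    return -0.5 * np.sum(resid**2)

rng = np.random.default_rng(1)
theta = np.array([0.1, 5.0, 0.5])             # starting point (assumed)
logp = log_posterior(theta)
step = np.array([0.01, 0.5, 0.05])            # fixed proposal standard deviations
chain = []
for _ in range(5000):
    prop = theta + step * rng.standard_normal(3)
    logp_prop = log_posterior(prop)
    if np.log(rng.random()) < logp_prop - logp:   # Metropolis accept/reject
        theta, logp = prop, logp_prop
    chain.append(theta.copy())
chain = np.array(chain)
print(chain.mean(axis=0), chain.std(axis=0))      # posterior summaries
```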
A Summary of an MCMC Object
The results give the posterior means, standard deviations and posterior quantiles for each chain, together with convergence diagnostics. From Table 2, the posterior mean values are close to the least-squares estimates. From Figures 2 to 4, we observe that the chains appear stationary, with no obvious trend or sticking, which implies good mixing of the chains.
Autocorrelation Function
The autocorrelation function measures how well the MCMC sampler performs by measuring the autocorrelation between samples θ_i and θ_{i+q} at lag q. The smaller the autocorrelation values, the better the mixing of the chains. The autocorrelation values in Figures 5 and 6 decrease exponentially and stabilize around zero, which indicates that the parameters are identifiable. From the MCMC figures, we obtain information related to correlation, uncertainty, identifiability of parameters, convergence of the Markov chain to the target distribution, etc. [23]. Distributions that are skewed to the left have a negative coefficient of skewness and those skewed to the right have a positive coefficient; the skewness of the normal distribution is 0 and its kurtosis is 3 [7]. A kurtosis greater than 3 indicates more values in the neighbourhood of the mean, while a kurtosis less than 3 indicates that the curve is flatter than the normal. See Table 4 and Figure 7 for more details. However, it should be noted that parameters of ODE models can take the asymptotic properties of any distribution.
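A minimal sketch of this lag-q autocorrelation diagnostic, applied to one column of the chain array from the sampler sketch above, could look as follows.

```python
import numpy as np

def autocorrelation(x, max_lag=150):
    """Sample autocorrelation of a chain at lags 0..max_lag-1."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - q], x[q:]) / var for q in range(max_lag)])

acf = autocorrelation(chain[:, 0])
print(acf[:10])   # values decaying quickly toward zero indicate good mixing
```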
Predictive MCMC Plots
We check the accuracy of the model through prediction plots. In Figure 8, the model predicts the data within the 95% posterior limits, which are shown in grey around the model solution. The variance of the predictive distribution reflects the predictive accuracy of the model.
Conclusion
In this paper, we formulated a new SIRB epidemic model by splitting the infected compartment into two classes (I_s and I_a) with the aim of modeling cholera epidemics. The basic reproduction number (R_0) was derived and calculated; its value of 5.7 agrees with the one found in [22] and is greater than 1. We observe that the disease is capable of invading the susceptible population, but its spread can be stopped by intervention and other control measures.
The model parameters were estimated by least-squares estimation with the aim of fitting the SIRB ordinary differential equations to data. In addition, the Markov chain Monte Carlo (MCMC) method was used to estimate the unknown parameters and other characteristics of the target posteriors by generating samples. Graphical representations are presented to illustrate and support the analytical results. The predictive distributions predicted the model output with a high degree of accuracy. Finally, it is observed that both the least-squares and MCMC methods performed well for the cholera model developed.
Figure 1. Time evolution of the susceptible, symptomatic infected, asymptomatic infected, recovered and bacteria classes.
Figure 2. Trace plots of the estimated unknown parameters b and β using the MCMC method; trace plots are also used to check whether the chains get stuck in certain areas of the parameter space, which would indicate bad mixing [6].
Figure 3. Trace plots of the estimated unknown parameters p and q using the MCMC method.
Figure 5. Plots of the autocorrelation function of b and φ with 150 lags.
Figure 6. Plots of the autocorrelation function of δ and d with 150 lags.
Figure 7. Posterior distributions of the unknown parameters with 10,000 iterations, using a normal prior with large variance on the shape parameter.
Table 1. Estimated cholera epidemic model parameters by the least-squares method.
Table 2. A summary of the MCMC parameter values.
Prognostic significance of preoperative CT findings in patients with advanced gastric cancer who underwent curative gastrectomy
Background Preoperative therapy has gained wide interest in advanced gastric cancer patients due to its potential advantages of improved disease control. Selection of high risk patients based on preoperative staging is crucial to choose the candidates for neoadjuvant therapy. Methods Our institutional review board approved this retrospective study and waived the requirement for patient consent. We searched 394 advanced gastric cancer patients (pT2-4) who underwent curative resection in 2010 without neoadjuvant therapies. Two abdominal radiologists independently reviewed the preoperative CT including tumor depth on CT (CT-tumor depth), which was categorized as follows: intramural, minimal extramural(<1mm), spiculated extramural(≥1mm) and nodular extramural infiltration. The impact of clinicoradiologic factors on disease recurrence and disease free survival (DFS) was evaluated. Recursive partitioning analysis was performed to suggest prediction models for recurrence. Results Of total 394 patients, 86 patients (21.8%) experienced recurrence. Spiculated (≥1mm) and nodular extramural tumor infiltration and CT size of 5-10cm were independent predictors of disease recurrence and significantly associated with worse DFS. Lymph node involvement on CT was not significantly associated with patient outcome. Among patients with same pT4a stage, the recurrence rate rises and DFS gets worse as the extramural tumor infiltration progresses (P < 0.001). The prediction model for recurrence revealed that size and CT-tumor depth were the two major discriminating factors. Conclusion CT-tumor depth and size could be used as independent predictors for prognosis. Preoperative CT can be used for prognostic stratification to select high risk patients for whom neoadjuvant therapies might be considered.
Introduction
Gastric cancer is the third and fifth leading worldwide cause of death in males and females, respectively [1] and is one of the most common cancers in Korea [2]. The only curative treatment for advanced gastric cancer is complete resection of tumors with negative margins (R0 resection) and D2 lymphadenectomy. However, a significant number of completely resected patients experience tumor recurrence [3,4]. For locally advanced gastric cancer, perioperative chemotherapy has been established as the standard treatment to overcome high rates of recurrence [5,6]. Recently, significant evidence has indicated the advantages of neoadjuvant therapies in patients with gastric cancer. Early treatment of distant microscopic disease and possible downstaging of the primary tumor which might be achieved by neoadjuvant therapies could yield a better outcome [7][8][9]. Several studies have reported that high R0 resection rate and survival were achieved with neoadjuvant chemotherapy followed by curative surgery [10][11][12].
In gastric cancer, computed tomography (CT) is the modality of choice for preoperative staging. To select candidates for neoadjuvant treatment, selection of high risk patients based on preoperative staging is needed. In cases of locally advanced rectal cancer, neoadjuvant chemoradiotherapy is selectively performed when high-risk findings for tumor recurrence (positive circumferential margin, extramural venous invasion, extramural tumor spread, etc.) are detected on preoperative magnetic resonance imaging (MRI) [13,14]. However, limited data are available to stratify patients with advanced gastric cancer.
In this study, we aimed to investigate preoperative prognostic stratification based on preoperative CT findings in patients with advanced gastric cancer in order to select high-risk patients who might benefit from neoadjuvant therapy.
Patient selection
This retrospective study was approved by institutional review board from our tertiary institution, Severance hospital, Yonsei University College of medicine. Requirement for informed consent was waived. After approval by the institutional review board, we retrospectively searched a total of 452 patients with advanced gastric cancer (pT2-4) who underwent curative surgery without neoadjuvant therapy in 2010. Patients who had double primary cancer (n = 17), histology other than adenocarcinoma (n = 1), less than 2 years of follow-up (n = 8), and history of previous endoscopic mucosal resection (n = 1) were excluded. Patients with insufficient preoperative CT images with slice thickness more than 5 mm were also excluded (n = 6). Thus, total 419 patients were analyzed. Demographic data (age and sex) were collected using electronic medical records.
VCT, GE Medical Systems, Milwaukee, WI, USA). Images were acquired from diaphragm level to the symphysis pubis with detector collimations of 16 x 0.75 mm or 64 x 0.6 mm. Other scanning parameters were as follows: 160 mAs; 120 kVp; table speed, 24 mm per rotation; and gantry rotation time, 0.5 seconds. For gastric distention, gas distention with 2 packs of effervescent granules was introduced. All patients received 120-150ml contrast medium intravenously using an automatic injector at a rate of 3-4 ml/s. Images of portal phases were obtained. Axial and coronal images were reconstructed with 3-mm-thick sections and a 3 mm interval.
Image review
Two board-certified abdominal radiologists with more than 10 years of experience independently reviewed preoperative CT images and arrived at a consensus in cases with discrepancy. Both were blinded to pathologic reports and clinical outcomes. The analyzed CT imaging characteristics were tumor depth, lymph node (LN) status, presence of extramural vascular invasion (EMVI), tumor size, longitudinal extent, and Borrmann type. Tumor depth on CT (CT-tumor depth) was categorized into four groups. Group 1 was tumors confined to the stomach wall without extramural infiltration. Cases with extramural tumor infiltration were subdivided according to the degree of infiltration as follows: Group 2, transmural involvement of the tumor with minimal extramural infiltration less than 1 mm; group 3, transmural involvement of the tumor with 1 mm or more spiculated extramural infiltration; and group 4, transmural involvement of the tumor with nodular extramural infiltration (Figs 1 and 2). LN involvement on CT (CT-LN status) was categorized into two groups, N0-1 and N2-3, based on a previous study that reported that ≥pN2 disease was diagnosed with a reasonably high sensitivity and specificity by CT [15]. Lymph nodes were considered metastatic if they had a short-axis diameter >8 mm. EMVI was identified as serpiginous extension of the tumor within a vascular structure. Tumor size was measured as the longest diameter on the axial or coronal plane. Tumor size was categorized into three groups: 1) less than 5 cm, 2) 5-10 cm, and 3) more than 10 cm. Longitudinal tumor extent was determined by an imaginary line from the gastroesophageal junction to the pyloric channel. Tumors which involved more than half of this line were classified as non-localized type and if not, those were classified as localized type. Tumors were analyzed according to the Borrmann classification and were classified as Borrmann type 4 in cases where infiltrative stomach cancer showed no definite ulceration or mass formation.
Pathology
Postoperative pathologic stage was determined using the seventh edition of the International Union Against Cancer (UICC)/American Joint Committee on Cancer (AJCC) staging system. Slides were prepared from formalin-fixed, paraffin-embedded tissue blocks for histological examination. Hematoxylin-eosin (H-E)-stained slides of each tumor were reviewed to assess lymphovascular invasion (LVI), defined as the presence of tumor cell clusters or individual tumor cells within an endothelium-lined space lumen or destruction of a lymphovascular wall by tumor cells. At our institute, histopathologic reports contain only the presence or absence of LVI and do not specify whether it is vascular or lymphatic invasion. Therefore, we evaluated the diagnostic performance of CT-detected EMVI using pathologic LVI as the reference standard. Histologic grade was also retrieved from the pathologic report.
Data analysis
The results of CT-tumor depth, CT-LN status, and EMVI on CT were compared with those of pT, pN staging, and pathologic LVI, respectively. The incidence of patients with pathologic serosal exposure among each subcategory of CT-tumor depth was calculated. Patients with pN2 or pN3 were considered to have 'advanced nodal status.' The incidence of patients with advanced nodal status was calculated for each subcategory of CT-tumor depth. The recurrence rate and DFS were evaluated according to CT-tumor depth in patients with the same pT4a stage.
Statistical analysis
Statistical analysis was performed using SPSS 20.0.0 (SPSS, Chicago, IL) and R software (version 3.2.2; R Development Core Team, Vienna, Austria). Weighted kappa statistics were used to evaluate the interobserver agreement for CT-tumor depth. Cochran-Mantel-Haenszel tests were performed to examine the linear trend between extent of CT-tumor depth and pathologic LN status. Univariate associations of clinicoradiologic factors such as age, sex, and CT findings with recurrence status were assessed using chi-square or Fisher's exact test. Subsequently, parameters with a P value less than 0.05 were included in multivariate logistic regression and the association of each parameter with recurrence was expressed as an odds ratio (OR) with a 95% confidence interval (CI).
Disease-free survival (DFS) after surgery was estimated from the date of operation to the date of recurrence or death using the Kaplan-Meier method and compared using the log-rank test. The impact of clinicoradiologic factors such as age, sex, and CT findings on DFS was evaluated using the multivariable Cox regression model. All clinicoradiologic variables except sex were entered into the multivariate model. Hazard ratios (HR) and 95% CIs were generated.
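A minimal sketch of this kind of survival analysis, using the lifelines package and a small hypothetical dataset (column names and values are illustrative, not the study data), could look as follows.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical data: DFS in months, recurrence indicator and CT covariates
df = pd.DataFrame({
    "dfs_months":   [44, 12, 60, 25, 58, 9, 47, 33],
    "recurrence":   [0,  1,  0,  1,  0, 1,  0,  0],   # 1 = recurrence or death
    "ct_depth_adv": [0,  1,  0,  0,  1, 1,  1,  0],   # spiculated/nodular infiltration
    "age":          [66, 70, 55, 61, 48, 72, 63, 58],
})

# Kaplan-Meier estimate of DFS in each CT-depth group, compared by log-rank test
km = {g: KaplanMeierFitter().fit(d["dfs_months"], d["recurrence"], label=str(g))
      for g, d in df.groupby("ct_depth_adv")}
lr = logrank_test(df.loc[df.ct_depth_adv == 0, "dfs_months"],
                  df.loc[df.ct_depth_adv == 1, "dfs_months"],
                  df.loc[df.ct_depth_adv == 0, "recurrence"],
                  df.loc[df.ct_depth_adv == 1, "recurrence"])
print("log-rank p =", lr.p_value)

# Multivariable Cox regression: hazard ratios (exp(coef)) with 95% CIs
cph = CoxPHFitter().fit(df, duration_col="dfs_months", event_col="recurrence")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```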
On the basis of the results of logistic regression analysis for disease recurrence, recursive partitioning analysis (RPA) was performed to suggest prediction models for recurrence. Receiver operating characteristic (ROC) analysis was used to assess the discriminatory powers of the diagnostic tree model. The areas under the curve (AUCs) for receiver operating characteristic analysis were calculated.
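A minimal sketch of a recursive-partitioning (classification tree) model for recurrence with an ROC AUC evaluation, using scikit-learn on synthetic, hypothetical features, is shown below; it illustrates the type of analysis described rather than the study's actual model.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.integers(0, 3, 200),      # CT size category (0: <5, 1: 5-10, 2: >10 cm)
    rng.integers(1, 5, 200),      # CT-tumor depth group 1-4
])
# Synthetic recurrence outcome loosely driven by size and depth
y = (0.15 * X[:, 0] + 0.15 * (X[:, 1] >= 3) + rng.random(200) > 0.75).astype(int)

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20).fit(X, y)
prob = tree.predict_proba(X)[:, 1]
print(export_text(tree, feature_names=["ct_size_cat", "ct_depth_group"]))
print("AUC:", roc_auc_score(y, prob))
```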
All tests were 2-sided and P values less than 0.05 were considered statistically significant.
Patient characteristics
After retrospective image analysis of the 419 enrolled patients, 25 who had no identifiable stomach cancer at preoperative CT were excluded. Characteristics of the 394 patients are summarized in Table 1.
Multivariate analysis of disease recurrence
The types of recurrence were classified as locoregional, peritoneal, or distant metastasis. Locoregional recurrence included a soft tissue mass at the gastric bed, upper retroperitoneal LN enlargement, or anastomosis site recurrence. Of the 394 total patients, 86 (21.8%) experienced recurrence. Distant metastasis by hematogenous spread was the most frequent site of recurrence (59.3%, 51/86), followed by peritoneal recurrence (44.1%, 30/68) and locoregional recurrence (10.5%, 9/86). The liver was the most frequently involved site of recurrence (18.6%, 16/86). The majority had a single site of recurrence, while 12 patients (14.0%) had multiple sites of recurrence.
On univariate analysis, all radiologic variables were significantly associated with recurrence. Multivariate analysis with adjustment for age and CT size revealed that spiculated (≥1 mm) and nodular extramural tumor infiltration and CT size of 5-10 cm were independent predictors of disease recurrence (Table 3).
Using recursive-partitioning analysis based on the classification and regression tree model, a prediction tree model for disease recurrence was established (Fig 3). Preoperative tumor size and CT-tumor depth were the two major discriminating factors for predicting disease recurrence. The sensitivity, specificity, and accuracy of this model were 51.2%, 96.1%, and 78.6%, respectively. ROC curve analyses revealed an AUC value of 0.736.
Disease-free survival after curative surgery
The median DFS after curative surgery was 44.9 months (95% CI 42.8-47.0 months). As shown in Fig 4, DFS was significantly worse in patients with CT findings of more advanced CT-tumor depth, CT-LN status, CT-detected EMVI, large size, non-localized longitudinal extent, and Borrmann type 4 (all P < 0.001).
The multivariable Cox regression model with adjustment for age and CT size revealed that spiculated or nodular extramural tumor infiltration, CT size of 5-10 cm, and non-localized tumor involvement of the stomach were significantly associated with worse DFS (Table 4).
Discussion
As neoadjuvant chemotherapy has emerged as an attractive treatment option in patients with advanced gastric cancer, CT staging prior to treatment has become more important for predicting prognosis. Traditionally, pathologic stage is the most important prognostic factor in gastric cancer. However, pathologic stage can only be determined after resection and preoperative treatment could alter the baseline pathologic stage. Effective and accurate preoperative staging is required in the era of preoperative therapy for resectable gastric cancer.
A large-scale retrospective study suggested that preoperative CT staging is an independent predictor of long-term survival and it should be regarded as a stratification factor in a randomized clinical trial of preoperative therapy in patients with gastric cancer [8]. However, their prognostic model was based on a study population including a considerable proportion of patients with lower clinical T stage (mucosa and submucosa). These early gastric cancer patients were excluded from the current study because they do not require neoadjuvant treatment. By selecting only advanced gastric cancer patients, the potential targets for neoadjuvant therapy became our study subjects and we aimed to identify patients who might benefit from neoadjuvant therapy.
The current seventh edition of the UICC/AJCC system separates tumors with extramural fat infiltration into T3 (subserosa invasion) and T4a (serosal exposure). This makes CT-based T classification more challenging, because T3 and T4a lesions have similar CT findings (extramural soft tissue infiltration), and their differentiation is very difficult due to limitations of CT spatial resolution. For these reasons, we divided tumors into those with or without extramural infiltration. Tumors with extramural fat infiltration were classified into three subcategories according to the degree of extramural fat infiltration (minimal extramural (<1 mm), spiculated extramural (≥1 mm), and nodular extramural), instead of cT staging, which corresponds to pathological T staging. Our results demonstrated that tumors with more advanced extramural infiltration showed higher rates of pathologic serosal exposure. Also, tumors with spiculated (≥1 mm) and nodular extramural infiltration on CT indicate higher risk of recurrence and worse DFS compared to tumors with lesser extramural infiltration. Similar results were observed in patients with the same pT4a stage, indicating that even among patients sharing the same pathologic T stage, prognosis can differ based on preoperative extramural tumor depth. The worse prognosis of tumors with spiculated (≥1 mm) or nodular extramural infiltration might be explained by higher rates of pathologic serosal exposure. In addition, more frequent LN involvement and a larger number of metastatic LNs have been observed in tumors with advanced extramural infiltration. This contributes to their worse prognosis because lymph node involvement is the most important factor for overall survival in patients with gastric cancer following curative resection, and survival rates markedly decrease with an increase in the number of metastatic lymph nodes [16,17]. Therefore, we believe that our CT-tumor depth could be more easily assessed than T classification and might be used to stratify high-risk patients who might benefit from neoadjuvant therapies.
Pathologic N stage is one of the most reliable prognostic indicators for patients with resectable gastric cancer [17,18]. However, accurate preoperative N staging by CT is limited because differentiation of small LNs with micrometastasis from large reactive LNs is difficult based on size criteria [8]. In our study, we categorized preoperative LN status into two groups (N0-1 vs. N2-3) based on a previous report that contrast-enhanced CT offers reasonably high sensitivity and specificity for ≥pN2 disease [15]. However, LN status on preoperative CT was significantly associated with disease recurrence and DFS only on univariate analysis, but not on multivariate analysis. This result could be attributed to inaccurate preoperative N staging by CT.
Pathological studies show that vascular invasion of gastrointestinal cancer allows tumor cells to embolize through the portal circulation, resulting in distant metastases through hematogenous spread [19]. MRI-detected EMVI in patients with rectal cancer is an established independent risk factor for poor prognosis and is used to select patients for neoadjuvant therapies [14,19]. A previous pathological study demonstrated that EMVI was an independent pathologic feature for subsequent visceral metastases and worse disease-specific survival in patients with esophageal and gastric cancer [20,21]. Recently, a study assessed the clinical significance of preoperative CT-detected EMVI. Although that study had a relatively short follow-up period and a small sample size, it suggested that CT-detected EMVI-positive patients had significantly lower 1-year progression-free survival (PFS) than EMVI-negative patients; in addition, EMVI was an independent prognostic factor in stage III gastric cancer [22]. In our study, the CT-detected EMVI-positive group had a significantly worse 5-year DFS than the CT-detected EMVI-negative group. CT-detected EMVI showed a significant association with disease recurrence and DFS by univariate analysis, although it was not an independent factor by multivariate analysis. Thus, CT-detected EMVI might be used as a prognostic factor to stratify high-risk patients who could undergo neoadjuvant therapy followed by surgery, as MRI-detected EMVI is used in rectal cancer. To date, only limited studies have addressed the clinical significance of CT-detected EMVI, and further studies are required for validation.
Tumors of 5-10 cm had a significantly worse prognosis than tumors <5 cm. Tumors larger than 10 cm showed high odds ratios and hazard ratios for disease recurrence and DFS, respectively, but without statistical significance. This might be attributed to the small number of tumors larger than 10 cm (n = 21). Still, our study suggests that CT size is an independent prognostic factor and one of the major discriminators of the prognosis prediction model. The previously mentioned large retrospective study showed that tumor size ≥4.5 cm was significantly associated with worse overall survival in patients who underwent curative gastrectomy [8]. Tumor size was also one of the independent risk factors for disease recurrence among patients with T1-2 and lymph node-negative stomach cancer in the United States and China [23]. Additionally, macroscopic tumor size (≥7 cm) was one of the most important risk factors for peritoneal recurrence in patients with advanced stomach cancer who underwent adjuvant chemotherapy after D2 gastrectomy [24]. These results implicate large tumor size as a risk factor for poor prognosis in patients with stomach cancer.
Our study has some limitations. First, this is a retrospective study. Inclusion of patients with tumor stage pT2-4 who did not undergo neoadjuvant therapy might have led to selection bias. Second, preoperative staging was based on CT alone, which is not the optimal modality for T staging or for detecting peritoneal carcinomatosis. Although we assessed CT-tumor depth instead of classical cT staging for better characterization, we did not consider endoscopic ultrasound (EUS) findings, which provide better resolution than CT. Thus, EUS-based staging might be used to validate our preoperative staging with CT-tumor depth. However, despite the favorable performance of EUS in staging, its influence on patient management remains controversial. EUS seems to have a greater impact on the management of early rather than advanced stages; thus, its role has been emphasized mainly in patients with early gastric cancer [25]. In addition, EUS is not routinely performed in all patients with advanced gastric cancer, whereas CT, the modality of choice for preoperative staging, is more feasible and available in the majority of patients. Thus, we believe that preoperative staging with CT alone, without EUS, remains meaningful for tumor characterization.
In conclusion, as neoadjuvant therapies in patients with advanced gastric cancer have gained wide interest, clinical staging prior to preoperative treatment has become more important for predicting prognosis. CT-tumor depth and CT size could be used as independent predictors of prognosis. Tumors with extramural infiltration of more than 1 mm showed significantly higher disease recurrence and worse DFS. CT can be used for prognostic stratification to select high-risk patients for whom neoadjuvant therapies might be considered.
Simultaneous peri-articular femoral osteotomy and total knee arthroplasty for treatment of osteoarthritis associated with a severe valgus deformity of >45°
Primary total knee arthroplasty (TKA) in a knee with a severe valgus deformity is a challenging scenario for many surgeons. Approximately 10% of patients requiring TKA have a valgus deformity (defined as an anatomic valgus of >10°).1 Correction of the valgus deformity has produced variable clinical results in terms of correction of the deformity, instability, and the overall results. The valgus deformity consists of two components: an element of bone loss with metaphyseal remodeling, primarily from the lateral femoral condyle and lateral tibial plateau, and a soft-tissue contracture consisting of tight lateral structures (the iliotibial band, lateral collateral ligament, popliteus tendon, posterolateral capsule, and biceps femoris).
Introduction
Primary total knee arthroplasty (TKA) in a knee with a severe valgus deformity is a challenging scenario for many surgeons. Approximately 10% of patients requiring TKA have a valgus deformity (defined as an anatomic valgus of >10°). 1 Correction of the valgus deformity has produced variable clinical results in terms of correction of the deformity, instability, and the overall results. The valgus deformity consists of two components: an element of bone loss with metaphyseal remodeling, primarily from the lateral femoral condyle and lateral tibial plateau, and a soft-tissue contracture consisting of tight lateral structures (the iliotibial band, lateral collateral ligament, popliteus tendon, posterolateral capsule, and biceps femoris). Some authors have recommended staging a corrective osteotomy before total knee arthroplasty in the knee with >15° of valgus deformity. 2 Procedures including staged tibial osteotomy, extra-articular femoral osteotomy, and simultaneous extra-articular femoral osteotomy and TKA have been described with various success rates. 1,3,4 We report a case of a simultaneous peri-articular femoral osteotomy and total knee arthroplasty for a symptomatic severe valgus knee of >45°.
Case report
A 76-year-old woman presented with a painful, valgus right knee which had progressively worsened over the past 2 years. On clinical examination she had a marked valgus deformity of her right knee measuring 49° using a goniometer. Range of motion on examination was 5° to 100°, with an extensor lag of 15°; the valgus deformity was not correctable on clinical examination. Radiographs confirmed a massive valgus deformity of the right knee with a bony defect in the lateral femoral condyle and lateral tibial plateau (Figures 1-3). The patient consented to a simultaneous peri-articular femoral osteotomy and total knee arthroplasty in order to address the severe valgus deformity and osteoarthritis. The patient was informed that the data concerning her case would be submitted for publication, and informed consent was obtained. Pre- and postoperative WOMAC and VAS scores were obtained.
Surgical technique
The procedure was performed under spinal anaesthesia with the use of a tourniquet. Using a medial parapatellar approach, the patella was everted and the knee flexed to 90°. The proposed osteotomy site was marked at the junction of the metaphysis and diaphysis of the femur and a 30° medial closing wedge osteotomy performed using an oscillating saw. This osteotomy was stabilized with bone clamps throughout the remainder of the procedure. The distal femur cut was made using an extra-medullary alignment guide; the femoral canal was then progressively reamed to allow the use of a femoral stem and the AP cuts were made in a standard 3° of external rotation. This left a large defect in the lateral femoral condyle, which was curetted and bone grafted. Attention was then turned to the tibia, the medullary canal was progressively reamed for the diaphyseal fitting tibial stem, and an intra-medullary guide used to cut the medial plateau. The lateral tibial plateau was freshened with the saw, and this defect addressed with the use of a tibial augment. A trial was performed; PCL and popliteus were released in order to obtain a balanced resection. The implants were cemented in place.
The patella tracked laterally at the end of the procedure, despite patellar resurfacing. To address this, a quadricepsplasty and vastus medialis oblique (VMO) advancement were performed, compensating for the change in alignment from extreme valgus to neutral; the patella tracked well following this. The range of motion at the end of the operation was 0° to 120°.
Postoperatively, full ROM and full weight bearing were permitted from post-operative day two. By the six-week follow-up the patient was fully weight bearing without any aids; radiographs demonstrated bony union at the osteotomy site with good alignment (Figures 4-6). At the one-year review, the patient was pain free, with a ROM of 0° to 110° without a quadriceps lag, and with clinically acceptable alignment (Figure 7). Pre- and postoperative knee scores are listed in Table 1.
Discussion
Performing a TKA in the setting of a valgus deformity involves addressing a variety of issues, including contracted lateral capsular and ligamentous structures, laxity of the medial collateral ligament, contracted or lax posterior soft tissues, osseous deficiency of the lateral femoral condyle and/or tibial plateaus, external rotational deformity of the distal femur, secondary remodeling of the femoral and tibial metadiaphyseal region, and patellar maltracking. 1 Despite advances in instrumentation, correcting a valgus deformity without relying on a constrained implant remains a particular challenge to most surgeons. 2,3 Over the last twenty years, numerous approaches and soft-tissue procedures have been advocated. 4-6 Whiteside recommended sequential releases of the iliotibial band, popliteus, lateral collateral ligament, and lateral head of the gastrocnemius, as well as a tibial tubercle transfer when the Q angle (the angle subtended by the quadriceps and patellar tendons) was >20°. 6 Many authors have also recommended that a lateral parapatellar approach be utilized in the valgus knee, allowing easier access to the contracted soft tissue structures. 5-8 Soft tissue balancing can be especially difficult; medial soft tissue advancement is sometimes necessary, 9-11 whilst constrained femoral components may be necessary in severely valgus knees in which the ligamentous balancing is especially tenuous. 6 Varus distal femoral osteotomy is indicated for some patients with isolated lateral compartment gonarthrosis with associated valgus deformity of the knee. The ideal patient has isolated lateral compartment arthritis with a moderate valgus deformity, is physiologically young, has an occupation or activity level that makes arthroplasty less appropriate, and has a normal body-mass index and satisfactory range of motion and stability of the knee. 12 Often, there is a treatment dilemma regarding whether varus distal femoral osteotomy, total knee arthroplasty, or unicompartmental arthroplasty is most appropriate for such patients. Insight regarding the outcome of varus distal femoral osteotomy, and its consequences for the outcome of subsequent reconstructive or salvage procedures, is helpful for making an informed decision in the management of many of these patients. 13 The situation is further complicated when dealing with a severe valgus deformity in a symptomatic arthritic knee, with corrective osteotomy required in conjunction with TKA, either as a staged or a simultaneous procedure.
Studies have looked at the results of proximal tibial osteotomy with regard to subsequent TKA, demonstrating increased technical difficulty and higher complication rates when conversion of a previously osteotomized knee to a TKA is compared with primary TKA alone. 13,14 Specific issues include difficult exposure secondary to patella infera, an increased risk of patellar tendon avulsion, and an increased risk of delayed wound healing. Only a few studies have evaluated the effects of a varus distal femoral osteotomy on the results of subsequent TKA. 15,16 Nelson et al. 12 described their results following a series of staged extra-articular osteotomies with subsequent TKA in 11 knees. Two knees had an excellent result, five had a good result, and four had a fair result; the knees with only a fair result had pain and malalignment in three cases, and pain and instability in one. Lonner et al. 17 described simultaneous extra-articular femoral osteotomy and TKR in another series of 11 patients; all of the patients had a good result post-operatively. In both of these studies, the deformity, and the subsequent corrective osteotomy, were extra-articular.
To date, there has been no report in the literature of a simultaneous peri-articular femoral osteotomy and total knee arthroplasty for a severely arthritic valgus knee of >45°. In this patient, excellent clinical and radiological alignment was obtained, with the deformity improving from 45° of valgus to 7° of valgus. Symptomatically, at the 1-year follow-up the patient is doing extremely well, with good ROM, no pain, and no complications. We believe that, in severely valgus knees, this operative technique allows excellent correction of malalignment and aids soft-tissue balancing, with a good patient outcome.
Effects of CALM and SPACE Parent Training Programs on Rumination and Anxiety in Mothers With Bully Sons
Background: About one-third of children are involved in bullying in primary school. Parenting style, as family background, plays an essential role in bullying. This study aimed to compare the effects of the parent training programs of Coaching Approach Behavior and Leading by Modeling (CALM) and Supportive Parenting for Anxious Childhood Emotions (SPACE) on rumination and anxiety in mothers with a bully son. Methods: This was a quasi-experimental study with pre-test, post-test and a control group design. The setting was the primary schools for boys in district 4 of Tehran City, Iran, in 2020. The statistical population of the research included the mothers of bully sons in one of these schools, i.e., selected using a voluntary convenience sampling technique. In total, 60 mothers whose sons scored higher on the Illinois Bullying Scale (IBS; Espelage & Holt, 2001) were selected as the subjects and randomly assigned into 3 groups of 20 individuals (2 intervention groups & 1 control group). The necessary data were collected by the IBS, the Ruminative Response Scale (RRS; Nolen Hoeksema & Morrow, 1991), and the Self-Anxiety Scale. The intervention group subjects attended CALM or SPACE training programs for 13 two-hour weekly sessions. A three-month follow-up was also performed. The collected data were analyzed using repeated-measures Analysis of Variance (ANOVA) in SPSS v. 22. Results: The present study findings suggested a significant difference between the intervention and control groups in rumination (P=0.0001, F=47.54) and anxiety (P=0.0001, F=86.34) in the post-test phase. However, no significant difference was found between CALM (42.80±2.71) and SPACE (42.16±2.71) programs respecting the effects on rumination (P=0.36). In contrast, SPACE (44±2.71) and CALM (39.46±2.71) programs indicated significant differences concerning their impact on anxiety (P<0.032); the CALM program presented a greater impact on reducing anxiety than SPACE. The follow-up results indicated that the CALM program presented a greater retention effect than SPACE on decreasing anxiety in the studied mothers (42.76±1.02, P=0.0001). Conclusion: The obtained data revealed that the CALM and SPACE programs were effective in reducing maternal rumination and anxiety. However, CALM was more effective than SPACE in reducing maternal anxiety. School counselors, mental health professionals, psychiatric nurses, and school health nurses are suggested to apply the study findings.
Introduction
Before the 1990s, bullying was considered a subset of aggressive behavior in children and adolescents. The Centers for Disease Control and Prevention studied bullying in particular to prevent violence in 2019 and considered it a type of violence threatening individuals' health (Dawoud 2020). Peter-Paul Heinemann, a Swedish physician, first coined the term mobbing in 1969. Based on his observations, he proposed a hypothesis for understanding human destructive behavior. Another Swedish psychologist, Dan Olweus, later presented a different definition of a bully and a victim child. He considered a bully as an aggressive and impulsive individual who requires domination (Thomas Cook 2020). About one-third of children in primary school are involved in bullying (De Vries et al. 2017). Currently, 15%-30% of students are affected by bullying and its consequences (Hosseini & Rhmadrash 2020). The United Nations Educational, Scientific and Cultural Organization (UNESCO) data suggest that 246 million children and adolescents experience school violence annually. In 2019, 32% of students in the world were bullied (Muhopliah, Tentama & Yuzarion 2020). According to studies conducted in Iran, about 80% of students believe in the presence of bullying. Another study reported a prevalence of bullying of 38.4% among students aged 10 to 14 years (Hosseini & Rhmadrash 2020; Khodabakhshi-Koolaee & Darestani-Farahani 2020). Olweus defines bullying as a negative, repetitive activity characterized by a power imbalance between the bully and the victim. It can be considered an intentional and repeated physical, verbal, or psychological pressure on a weaker subject by an individual or a group of stronger individuals, usually associated with an unequal power between the involved parties. Bullying includes 4 main components, as follows: voluntariness and intentionality; persistence; power imbalance between the parties to the conflict; and occurrence in familiar social groups. Bullying is demonstrated either physically, directly or indirectly, and emotionally (Hosseini & Rhmadrash 2020). It is considered a multifactorial phenomenon, affected by the accumulation of individual, family, school, and social factors (Valle & Williams 2020).
Maternal mental health problems, such as depression and anxiety, increase the risk of behavioral problems in children (Garcia & O'Neil 2020). Parents' mental health, their interaction with the child, and other life events, as interpreted by the child, significantly impact the assessment of children's behavioral problems. The lifetime prevalence of anxiety disorders in females is approximately 40%, twice as high as in males (Gregory et al. 2020). High levels of parental anxiety lead to inadequate formation of Emotion Regulation (ER) skills. Anxiety in mother-child relationships, in turn, affects social learning processes; thus, such conditions lead to unsuitable parent-child role modeling, and the child learns inefficient ER strategies (Jamali & Khodabakhshi-Koolaee 2019).

Highlights

• Parent training programs help parents gain an understanding of their children's psychological, emotional, and behavioral problems.

• Bullying behavior is an intentional and repeated physical, verbal, or psychological pressure on weaker subjects by a stronger individual. Bullying individuals have educational, interpersonal, and psychological problems.

• The explored mothers of bullying sons have reported mental health problems, such as depression and anxiety. Moreover, they are often blamed by the school and other parents because of their sons' behaviors.

• The CALM and SPACE programs were effective in reducing maternal rumination and anxiety. However, CALM was more effective than SPACE in reducing maternal anxiety.

Plain Language Summary

Parent training programs help parents learn the best ways to correct their children's behaviors and build a better relationship with the child. The CALM and SPACE programs could help mothers improve their parent-child relationship by recognizing their children's psychological and behavioral problems and by reducing the mothers' own problems and worries, such as anxiety and rumination. This research revealed that these programs helped the mothers of bullying sons reduce their anxiety and rumination.
Anxiety is a response to real or potential threats and can jeopardize an individual's state of balance and homeostasis (Garcia & O'Neil, 2020). It is a common type of emotional disorder that disrupts a subject's psychological functioning (Pang, Tu & Cai 2019). Research on rumination dates back to studies by scholars such as Aaron Temkin Beck, followed by Nolen-Hoeksema's research, which highlighted a strong relationship between ruminative thoughts and emotional disorders. Rumination predicts the onset of anxiety and triggers psychological stimulation and a negative emotional state (Feldhaus et al., 2020). It is a subject's mood in response to anxiety and involves an extensive passive focus on disturbing symptoms and their causes. Individuals often tend to reflect on the causes and negative consequences of events and overlook solutions (Chen et al. 2017). Over the past 3 decades, numerous interventions and programs have been developed and implemented in different parts of the world, especially in Europe and North America, to combat bullying; these programs were often school-based (Olweus, Solberg & Breivik 2020). The Olweus Bullying Prevention Program (OBPP) is the most widely used anti-bullying program in schools. Most of these programs are student-centered (Olweus, Solberg & Breivik 2020; Karats & Ozturk 2020; Bandi 2019).
The Coaching Approach Behavior and Leading by Modeling (CALM) program is an adaptation of Parent-Child Interaction Therapy (PCIT). Puliafico et al. modified the treatment techniques based on parent-child interactions to target separation anxiety disorder as well as other anxiety disorders, such as social anxiety, generalized anxiety disorder, and specific phobias. The first stage involves Child-Directed Interactions (CDI). The second stage addresses the special and unique aspect of the program. At this stage, special skills are taught to the parents, known as the DADDS steps (Describe the feared situation; Approach the feared situation (modeling); Direct command for the child to approach; State intent to remain in the situation and provide selective attention), which are based on giving positive attention to model brave behaviors while ignoring anxiety-related symptoms (Bandi 2019; Puliafico, Comer & Albano 2013). It is a direct and short-term treatment that uses an innovative method with live coaching and indirect training through a bug-in-the-ear device to treat various anxiety problems in children. It requires skills training for parents to effectively manage their child's problematic behaviors (Puliafico, Comer & Albano 2013).
The Supportive Parenting for Anxious Childhood Emotions (SPACE) program was developed by Lebowitz. It is a suitable treatment for children aged 2-8 years and relies on parental education (Lebowitz 2013). It consists of 8 treatment parts and 5 modules at the discretion of the therapist, which allows treatment specifically with parents without the participation of their children (Lebowitz et al. 2014).
The SPACE program is an intervention targeting parents' adaptive behaviors and concerns regarding the child's anxiety. It does not teach specific skills to parents; rather, SPACE focuses on how parents interact with the child and the characteristics of their relationship (Bui, Charney & Baker, 2020). There is a gap in the literature despite the key role played by parents in increasing their child's ability to adapt to bullying; relatively few studies have focused on mental health and increasing parental adjustment (Benatov 2019). The analysis of anti-bullying programs indicates that an essential element of successful programs is to involve parents in parenting education sessions (Van Niejenhuis, Huitsing, & Veenstra 2020). Previous research identified the positive impact of anti-bullying programs; however, programs that cover broader areas, such as parent-teacher involvement, are more effective (Khodabakhshi Koolaee et al. 2015; Grief Green et al. 2020). Although the role of parenting is a widely researched area, variables such as rumination and maternal anxiety in managing a bullying child remain unaddressed. Research on the SPACE program has focused on anxiety disorders (Lebowitz, 2013) and obsessive-compulsive disorders (Lebowitz et al. 2014). Furthermore, studies on the CALM program have mainly focused on anxiety (Puliafico, Comer & Albano 2013). These programs were translated into Persian for the first time and implemented among the Iranian population. These programs were developed to help mothers solve their children's behavioral problems. Besides, child bullying, one of the most important issues in schools and families, has been addressed with a focus on educating mothers. The present study aimed to determine the effects of the CALM and SPACE programs on rumination and anxiety in mothers with bully sons.
Materials and Methods
This was a quasi-experimental study with a pre-test, post-test, and control group design. A three-month follow-up was also performed in this research. The setting was boys' primary schools in district 4 of Tehran City, Iran, in 2020. The study population included the mothers of bullying sons, who were recruited by a voluntary convenience sampling method; accordingly, they were randomly assigned into 3 research groups. To this end, the researcher referred to one of the branches of the Sepehr-e Marefat educational complex in district 4 of Tehran. The bullying students were first identified with the help of the school principals and deputy principals as per the inclusion criteria. Next, the Illinois Bullying Scale (IBS) was administered to 70 bullying children. Eventually, 60 children who obtained the highest scores on the IBS were selected. The inclusion criteria were students generating disciplinary problems as per the school's disciplinary records and engaging in bullying behaviors, as diagnosed by the IBS. The exclusion criteria were absence from >2 training sessions and the mother's or child's simultaneous participation in another psychological program. The sample size was calculated to be 15 subjects per group based on an effect size of 0.25, an alpha of 0.05, and a test power of 0.80. The research subjects were assigned into two intervention groups and one control group by a random blocking method (n=20/group) (the sample attrition rate equaled 15).
The following instruments were used to collect the necessary data in this research: The Illinois Bullying Scale (IBS): The IBS was developed by Espelage and Holt (2001). It contains 18 items and 3 subscales, including bullying (9 items), victimization (4 items), and aggression (5 items) (Espelage & Holt 2001). Each item is scored on a five-point Likert-type scale, ranging from never (0) to ≥7 times (4). Each subscale is measured via a separate score. Subscale scores are computed by summing the respective items. Scores range from 0 to 74. A high score on each subscale indicates the respondent's greater frequent engagement in the same behavior. The reliability index for the whole scale was estimated equal to 0.87 using Cronbach's alpha coefficient (Espelage & Holt 2001). In Iran, Balootbangan and Talepasand translated the questionnaire into Persian and reported the relevant Cronbach's alpha coefficient as 0.87 for the total scale. The Content Validity Ratio (CVR) of the scale's translated version was 0.72 and its Content Validity Index (CVI) was measured as 0.81 (Akbari Balootbangan & Talepasand, 2015). In this research, the Cronbach's alpha coefficient for the total scale was equal to 0.75.
The Ruminative Response Scale (RRS):
The RRS is a subscale of the Response Styles Questionnaire developed by Nolen-Hoeksema and Morrow (1991). The scale contains 22 items, scored on a four-point Likert-type scale ranging from never (1) to often (4), depending on the extent to which the respondent uses rumination as a response to low moods. The total score is calculated as the sum of all individual items. The minimum and maximum obtainable scores are 22 and 88, respectively. The Cronbach's alpha coefficient of this scale ranges from 0.88 to 0.91 (Nolen-Hoeksema & Morrow 1991). Mansouri et al. (2012), in Iran, translated the scale into Persian and reported a Cronbach's alpha coefficient of 0.90 for the total scale. The Cronbach's alpha coefficient of its Persian version in this research was 0.85.
Zung Self-rating Anxiety Scale (SAS): It was developed by Zung (1971). The SAS contains 20 items that measure anxiety levels. Each item is scored on a 4-point Likert-type scale. The total score ranges from 20 to 80. In total, 15 items measure emotional symptoms and 5 items assess physical symptoms. Items 5, 9, 13, and 19 are scored reversely. The reliability coefficient of this scale is 0.80, indicating its high reliability. The reliability of the scale was calculated as 0.87 using Cronbach's alpha coefficient in Iran. Besides, its face validity and content validity were also confirmed (Setyowati, Chung & Yusuf 2019). The reliability index of the scale was calculated as 0.77 by Cronbach's alpha coefficient in this study.
The researcher contacted the students' mothers and provided the required explanations about the research project. Then, the RRS and SAS were completed by all participating mothers in the intervention and control groups. The research participants in each intervention group attended 13 training sessions.
The two programs were translated from English to Persian and divided into 45-minute sessions. The content of the SPACE program was extracted and translated into Persian by the researchers following a study by Lebowitz and associates (Lebowitz et al. 2014). This protocol was performed in thirteen 45-minute sessions. The CALM program was also translated and implemented in thirteen 45-minute sessions, following previous studies (Puliafico, Comer & Albano 2013;Cooper-Vince et al. 2016;Huang et al. 2019). The content of the training sessions is presented in Tables 1 and 2. The sessions were conducted in groups by the researcher in the school hall of Sepehre Marefat Educational Complex, in district 4 of Tehran City, Iran. The mothers in both research groups attended the sessions two days a week (Sundays & Tuesdays) from 10 AM to 12 PM. All related guidelines, brochures, and pamphlets were prepared and distributed to the explored mothers. The researcher also provided her phone number to the mothers for further questions and guidance.
The questionnaires were re-administered to all study groups at the end of the intervention and at the three-month follow-up phase to assess the retention effects of the provided programs. The flow of the study process is illustrated in Figure 1. Finally, after 3 months, the follow-up examination was administered to all research participants. The collected data were analyzed using repeated-measures Analysis of Variance (ANOVA) in SPSS v. 22.
Results
The study participants' group-wise demographic data are presented in Table 3. There was no significant difference in age, educational level, and occupational status between the research subjects (P>0.05). Table 4 displays the Mean±SD scores of the dependent variables in the pre-test, post-test, and follow-up stages in all study groups. The Independent Samples t-test data indicated no significant differences in the pre-test scores of the intervention groups and the controls concerning the dependent variables (P>0.05). As per Table 4, the SPACE and CALM programs could reduce the mean scores of rumination and anxiety reported by the mothers in the post-test and follow-up stages. However, inferential statistics were used to examine whether these differences were significant. It was revealed that both study programs were effective in reducing maternal rumination and anxiety in the post-test and follow-up stages. Before running the repeated-measures ANOVA, the assumptions of parametric tests were checked. Accordingly, the results of the Shapiro-Wilk test suggested that the assumption of the normal distribution of the data for rumination and anxiety in the intervention and control groups was established in the pre-test, post-test, and follow-up stages (P<0.05). Besides, the assumption of homogeneity of variances was measured by Levene's test. The relevant results were not significant, indicating that the assumption of homogeneity of variances was established (P>0.05). Furthermore, the Box's M test data revealed that the equality of multiple variance-covariance matrices was established; thus, the F-test could be implemented. The results of Mauchly's test of sphericity indicated that the sphericity assumption was not observed for rumination and anxiety. Accordingly, the assumption of sphericity was not established, indicating that the relationships among the variables could alter the values of the dependent variable and thus increase the risk of a type I error. Accordingly, the alternative analysis (Greenhouse-Geisser correction) was used to reduce the odds of type I error by reducing the degrees of freedom.
The results of the Greenhouse-Geisser correction test identified that the time or step of assessment significantly affected maternal rumination and anxiety; this finding explained 45% and 60% of the differences in the variances of these variables, respectively. Besides, the effect of group membership (SPACE & CALM programs) on maternal rumination and anxiety scores was significant; this statistic explained 27% and 39% of the difference in the scores of these variables, respectively. These findings suggested that the interaction of treatment type and time significantly affected rumination and maternal anxiety scores, explaining 22% and 51% of the difference in maternal rumination and anxiety scores, respectively. The statistical power indicates high statistical accuracy and an adequate sample size. Overall, the SPACE and CALM programs affected maternal rumination and anxiety at different steps. Table 5 demonstrates the pairwise comparisons of the mean scores of rumination and anxiety in the study participants according to the evaluation step. As per Table 5, there was a significant difference between the mean scores of rumination and anxiety in the pre-test, post-test, and follow-up stages. This finding suggests that the SPACE and CALM programs significantly reduced rumination and anxiety in the post-test and follow-up steps compared to the pre-test stage. The results of descriptive statistics signified a decrease in the mean scores of maternal rumination and anxiety after the intervention. There was no significant difference in the mean scores of maternal rumination and anxiety between the post-test and follow-up stages. Accordingly, the mean scores of maternal rumination and anxiety were significantly reduced in the post-test, and the observed changes in these variables were retained in the follow-up stage. Overall, the CALM and SPACE programs significantly reduced the mean scores of maternal rumination and anxiety in the post-test and follow-up stages. Table 6 lists the pairwise comparisons of the mean scores of rumination and anxiety in the study participants by group.
Moreover, significant differences existed between the CALM (45.80), SPACE (42.16), and control (54.36) groups in terms of rumination (P<0.05) (Table 7). Therefore, the CALM and SPACE programs reduced the examined mothers' rumination. However, there was no significant difference in the mean scores of rumination between the CALM (45.80) and SPACE (42.16) groups (P≥0.05); accordingly, the CALM and SPACE programs did not differ in their effectiveness in reducing the explored mothers' rumination. Moreover, there was a significant difference in the mean scores of anxiety between the CALM (44) and SPACE (39.46) groups (P<0.05); thus, both programs effectively reduced the examined mothers' anxiety (P<0.05). Furthermore, the SPACE program was more effective than the CALM program in reducing the study participants' anxiety.
Discussion
The present study compared the effects of the CALM and SPACE programs on rumination and anxiety in mothers with bully sons. The relevant results suggested that the SPACE program was more effective in reducing the examined mothers' anxiety. The SPACE program is dynamic and adaptive. It simulates the treatment space with real-life conditions and problems; thus, the findings can generalize to environments outside the research setting. Besides, the mother and child can simultaneously participate in the SPACE program; this is a crucial advantage of this program. The present study findings were in line with those of a study by Huang et al. (2019), who reported that school-based anti-bullying programs with a parent component are beneficial for controlling bullying in children. Parents' participation in school programs helps the parent-child relationship (Huang et al. 2019). Kaminetsky (2020) stated that the CALM program increased mothers' positive parenting skills, while their negative parenting skills decreased following treatment. This result highlights that the CALM program can be effective for reducing maternal and child anxiety.
The mother's behavior can prepare the child to learn coping skills or expose the child to further problems (Brumariu & Kerns, 2015). Accordingly, Brumariu and Kerns (2015) found a relationship between the emotions of mothers and children and the onset of anxiety symptoms. The mothers of more anxious children provide less attention, love, and affection for their children and talk less to the child when something goes wrong. Additionally, more anxious mothers express less affection and love to their children; they are less likely to engage in conversations with the child when encountering a problem. Moreover, more anxious children are less likely to talk to others. Finally, they demonstrated that the behavior and emotions of anxious mothers have a reciprocal effect on their children's anxiety. Furthermore, the CALM program familiarizes parents with the basic concepts of support and adjustment (Brumariu & Kerns, 2015). Lebowitz et al. (2020) argued that the SPACE program is highly appropriate for reducing anxiety in children. This program helps mothers and children by involving parents in the treatment process (Lebowitz et al., 2020). However, in another study, they identified no difference between the SPACE program and cognitive-behavioral therapy; both approaches were effective (Lebowitz et al., 2020). Furthermore, both programs were effective in reducing rumination in mothers. Ruminators focus on the content of thoughts and self-criticism. Self-criticism and rumination are incompatible psychological processes that adversely affect parenting and increase parental stress (Lebowitz et al., 2020). Moreira and Canavarro (2018) studied the effects of rumination on parental stress and mindful parenting dimensions, including careful listening, compassion for the child, nonjudgmental acceptance of parental performance, the child's emotional awareness, and ER in parents. The relevant results indicated that rumination was inversely associated with all aspects of mindful parenting dimensions and parental stress (Moreira & Canavarro, 2018). Overall, the CALM and SPACE programs were effective in reducing anxiety and rumination symptoms in the examined mothers. However, the CALM program was more effective in reducing anxiety symptoms than the SPACE program. The CALM program places further emphasis on parent-child interactions (Harcourt, Jaseperse & Green 2014; Lebowitz & Shimshoni 2018). Besides, the CALM program gives the parent the power to coach and supervise during the session and offers a more suitable feedback system; therefore, it can help improve the parent-child relationship and parenting skills (Comer et al., 2012).
The research sample was limited to the mothers of male primary school students at the Sepehr-e Marefat Educational Complex in district 4 of Tehran, which is a limitation of this study.
Conclusion
The obtained results revealed that the mothers of bullying children experience high levels of anxiety and rumination. Therefore, parenting education programs, such as CALM and SPACE, can improve mothers' awareness and parenting skills. These programs also reduce their rumination and anxiety. Parents are at the forefront of identifying, preventing, and helping to treat their children's bullying behavior; thus, applying parent training programs can play an effective role in reducing mothers' and their children's psychological problems. School counselors, mental health professionals, psychiatric nurses, and school health nurses are encouraged to apply these findings.
Compliance with ethical guidelines
This study was approved by the Ethics Committee of the Tehran North Branch, Islamic Azad University, (Code: IR.IAU.TNB.REC.1399.006). All ethical principles are considered in this article. The participants were informed about the purpose of the research and its implementation stages. They were also assured about the confidentiality of their information and were free to leave the study whenever they wished, and if desired, the research results would be available to them.
Funding
This study was extracted from the PhD. dissertation of the first author at the Department of Counseling, Faculty of Humanities and Social Sciences, Tehran North Branch, Islamic Azad University, Tehran.
PROGRAMMER'S NICHE: Portable C++ for R Packages
Package checking errors are more common on Solaris than Linux. In many cases, these errors are due to non-portable C++ code. This article reviews some commonly recurring problems in C++ code found in R packages and suggests solutions.
Portable C++ for R Packages by Martyn Plummer
Abstract: Package checking errors are more common on Solaris than Linux. In many cases, these errors are due to non-portable C++ code. This article reviews some commonly recurring problems in C++ code found in R packages and suggests solutions.
CRAN packages are tested regularly on both Linux and Solaris. The results of these tests can be found at http://cran.r-project.org/web/checks/check_summary.html. Currently, 24 packages generate errors on Linux while 125 packages generate errors on Solaris. A major contribution to the higher frequency of errors on Solaris is lack of portability of C++ code. The CRAN Solaris checks use the Oracle Solaris Studio 12.2 compiler, which has a much more stringent interpretation of the C++ standard than the GCC 4.6.1 compiler used for the checks on Linux, and will therefore reject code that compiles correctly with GCC.
It seems plausible that most R package developers work with GCC and are therefore not aware of portability issues in their C++ code until these are shown by the CRAN checks on Solaris. In fact, many of the testing errors are due to a few commonly recurring problems in C++. The aims of this article are to describe these problems, to help package authors diagnose them from the Solaris error message, and to suggest good practice for avoiding them.
The scope of the article is limited to basic use of C++. It does not cover the use of the Rcpp package (Eddelbuettel and Francois, 2011) or the Scythe statistical library (Pemstein et al., 2011), which are used to support C++ code in some R packages, nor issues involved in writing your own templates.
Before describing the portability issues in detail, it is important to consider two general principles that underlie most portability problems.
Firstly, C++ is not a superset of C. The current C standard is ISO/IEC 9899:1999, usually referred to as C99 after its year of publication. Most C++ compilers support the ISO/IEC 14882:1998 standard (C++98), which predates it. Thus, the two languages have diverged, and there are features in C99 that are not available in C++98.
The g++ compiler allows C99 features in C++ code. These features will not be accepted by other compilers that adhere more closely to the C++98 standard. If your code uses C99 features, then it is not portable.
The C++ standard is evolving. In August 2011, the ISO approved a new C++ standard, which was published in September 2011 and is known as C++11.
This should remove much of the divergence between the two languages. However, it may take some time for the new C++11 standard to be widely implemented in C++ compilers and libraries. Therefore this article was written with C++98 in mind.
The second general issue is that g++ has a permissive interpretation of the C++ standard and will typically interpret ambiguous code for you. Other compilers require stricter conformance to the standard and will need hints for interpreting ambiguous code. Unlike the first issue, this is unlikely to change with the evolving C++ standard.
The following sections each describe a specific issue that leads to C++ portability problems. By far the most common error message produced on Solaris is 'The function foo must have a prototype'. In the following, this is referred to as a missing prototype error. Problems and solutions are illustrated using C++ code snippets. In order to keep the examples short, ellipses are used in place of code that is not strictly necessary to illustrate the problem.
C99 functions
Table 1 shows some C functions that were introduced with the C99 standard and are not supported by C++98. These functions are accepted by g++ and will therefore pass R package checks using this compiler, but will fail on Solaris with a missing prototype error.
Table 1: Some expressions using C99 functions and their portable replacements using functions declared in the '<Rmath.h>' header.

R packages have access to C functions exposed by the R API, which provides a simple workaround for these functions. All of the expressions in the left-hand column of Table 1 can be replaced by portable expressions on the right-hand side if the header '<Rmath.h>' is included.
A less frequently used C99 function is the cube root function cbrt. The expression cbrt(x) can be replaced by std::pow(x, (1./3.)) using the pow function defined in the header '<cmath>'.
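As a hedged illustration of this pattern (the function name scaled_cube_root and the particular C99 calls shown are my own choices, not necessarily the pairs listed in Table 1), the snippet below replaces the C99 fmax with the R API function fmax2 from '<Rmath.h>', and cbrt with the std::pow expression described above:

    #include <R.h>
    #include <Rmath.h>   /* declares fmax2() and other R math functions */
    #include <cmath>     /* std::pow */

    /* Portable C++98 replacements for two C99 calls:
       fmax(x, y) -> fmax2(x, y)           (R API)
       cbrt(m)    -> std::pow(m, (1./3.))  (C++ standard library) */
    double scaled_cube_root(double x, double y)
    {
        double m = fmax2(x, y);
        return std::pow(m, (1. / 3.));
    }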
C99 macros for special values
The C99 standard also introduced the constants NAN and INFINITY as well as the macros isfinite, isinf, isnan and fpclassify to test for them. None of these are part of the C++98 standard. Attempts to use the macros on Solaris will result in a missing prototype error, and the constants will result in the error message "NAN/INFINITY not defined".
As with the C99 functions above, the R API provides some facilities to replace this missing functionality. The R macros R_FINITE and ISNAN and the R function R_IsNaN are described in the R manual "Writing R Extensions" and are accessed by including the header file '<R.h>'. They are not exactly equivalent to the C99 macros because they are adapted to deal with R's missing value NA_REAL.
If you need access to a non-finite value, then '<R_ext/Arith.h>' provides R_PosInf, R_NegInf and R_NaReal (more commonly used as NA_REAL).
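As a brief sketch (the function mean_of_finite is my own example rather than one from the article), these R macros can stand in for the C99 special-value tests when summarising a numeric array:

    #include <R.h>             /* R_FINITE, ISNAN */
    #include <R_ext/Arith.h>   /* NA_REAL, R_PosInf, R_NegInf */

    /* Mean of the finite elements of x; NA_REAL if there are none.
       R_FINITE() replaces the C99 isfinite() macro and also skips
       NaN and R's missing value NA_REAL. */
    double mean_of_finite(const double *x, int n)
    {
        double sum = 0.0;
        int count = 0;
        for (int i = 0; i < n; i++) {
            if (R_FINITE(x[i])) {
                sum += x[i];
                count++;
            }
        }
        return (count > 0) ? sum / count : NA_REAL;
    }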
Variable-length arrays
A variable-length array is created when the size of the array is determined at runtime, not compile time. A simple example is

    void fun(int n)
    {
        double A[n]; /* length n is known only at runtime */
        ...
    }

Variable-length arrays are not part of the C++98 standard. On Solaris, they produce the error message "An integer constant expression is required within the array subscript operator".
Variable-length arrays can be replaced by an instantiation of the vector template from the Standard Template Library (STL). Elements of an STL vector are accessed using square bracket notation just like arrays, so it suffices to replace the definition of the array with

    std::vector<double> A(n);

The STL vector template includes a destructor that will free the memory when A goes out of scope.
A function that accepts a pointer to the beginning of an array can be modified to accept a reference to an STL vector. For example, this

    void fun(double *A, unsigned int length)
    {
        ...
    }

may be replaced with

    void fun(std::vector<double> &A)
    {
        unsigned int length = A.size();
        ...
    }

Note that an STL vector can be queried to determine its size. Hence the size does not need to be passed as an additional argument.
External library functions that expect a pointer to a C array, such as BLAS or LAPACK routines, may also be used with STL vectors. The C++ standard guarantees that the elements of a vector are stored contiguously. The address of the first element (e.g. &a[0]) is thus a pointer to the start of an underlying C array that can be passed to external functions. For example:

    int n = 20;
    std::vector<double> a(n);
    ... // fill in a
    double nrm2 = cblas_dnrm2(n, &a[0], 1);

Note however that boolean vectors are an exception to this rule. They may be packed to save memory, so it is not safe to assume a one-to-one correspondence between the underlying storage of a boolean vector and a boolean array.
Function overloading
The C++ standard library provides overloaded versions of most mathematical functions, with versions that accept (and return) a float, double or long double.
If an integer constant is passed to these functions, then g++ will decide for you which of the overloaded functions to use. For example, this expression is accepted by g++:
    #include <cmath>
    using std::sqrt;
    double z = sqrt(2);

The Oracle Solaris Studio compiler will produce the error message 'Overloading ambiguity between std::sqrt(double) and std::sqrt(float)'. It requires a hint about which version to use. This hint can be supplied by ensuring that the constant is interpreted as a double:

    double z = sqrt(2.);

In this case '2.' is a double constant, rather than an integer, because it includes a decimal point. To use a float or long double, add the qualifying suffix F or L respectively.
The same error message arises when an integer variable is passed to an overloaded function inside an expression that does not evaluate to a floating point number. In this example,

    bool fun(int n, int m)
    {
        return n > m * sqrt(m);
    }

the compiler does not know if m should be considered a float, double or long double inside the sqrt function because the return type is bool. The hint can be supplied using a cast:

    return n > m * sqrt(static_cast<double>(m));
Namespaces for C functions
As noted above, C99 functions are not part of the C++98 standard. C functions from the previous C90 standard are allowed in C++98 code, but their use is complicated by the issue of namespaces. This is a common cause of missing prototype errors on Solaris.
The C++ standard library offers two distinct sets of header files for C90 functions. One set is called '<cname>' and the other is called '<name.h>', where "name" is the base name of the header (e.g. 'math', 'stdio', ...). It should be noted that the '<name.h>' headers in the C++ standard library are not the same files as their namesakes provided by the C standard library.
Both sets of header files in the C++ standard library provide the same declarations and definitions, but differ according to whether they provide them in the standard namespace or the global namespace. The namespace determines the function calling convention that must be used:

• Functions in the standard namespace need to be referred to by prefixing std:: to the function name. Alternatively, in source files (but not header files) the directive using std::foo; may be used at the beginning of the file to instruct the compiler that foo always refers to a function in the standard namespace.
• Functions in the global namespace can be referred to without any prefix, except when the function is overloaded in another namespace. In this case the scope resolution prefix '::' must be used. For example:
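The following sketch is illustrative only; the namespace mypkg and its two-argument overload are hypothetical:

    #include <math.h>   /* declares floor() in the global namespace */

    namespace mypkg {
        /* hypothetical overload that hides the global floor() inside mypkg */
        double floor(double x, double unit);

        double round_down(double x)
        {
            return ::floor(x);   /* scope resolution selects the C function */
        }
    }

Without the '::' prefix, unqualified lookup inside mypkg would stop at the two-argument overload and the single-argument call would not compile.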
Although the C++98 standard specifies which namespaces the headers '<cname>' and '<name.h>' should use, it has not been widely followed. In fact, the C++11 standard has been modified to conform to the current behaviour of C++ compilers (JTC1/SC22/WG21 - The C++ Standards Committee, 2011, Appendix D.5), namely:

• Header '<cname>' provides declarations and definitions in the namespace std. It may or may not provide them in the global namespace.

• Header '<name.h>' provides declarations and definitions in the global namespace. It may or may not provide them in the namespace std.
The permissiveness of this standard makes it difficult to test code for portability. If you use the '<cname>' headers, then g++ puts functions in both the standard and global namespaces, so you may freely mix the two calling conventions. However, the Oracle Solaris Studio compiler will reject C function calls that are not resolved to the standard namespace.
The key to portability of C functions in C++ code is to use one set of C headers consistently and check the code on a platform that does not use both namespaces at the same time. This rules out g++ for testing code with the '<cname>' headers. Conversely, the GNU C++ standard library header '<name.h>' does not put functions in the std namespace, so g++ may be used to test code using the '<name.h>' headers. This is far from an ideal solution. These headers were meant to simplify porting of C code to C++ and are not supposed to be used for new C++ code. However, the fact that the C++11 standard still includes these deprecated headers suggests a tacit acceptance that their use is still widespread.
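For instance, a source file written consistently against the '<cname>' convention qualifies every C library call explicitly (a minimal sketch of my own, not taken from the article):

    #include <cmath>
    #include <cstdio>

    double logit(double p)
    {
        return std::log(p / (1.0 - p));   /* always std:: with <cname> headers */
    }

    int main()
    {
        std::printf("logit(0.5) = %f\n", logit(0.5));
        return 0;
    }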
Some g++ shortcuts
The g++ compiler provides two further shortcuts for the programmer which may occasionally throw up missing prototype errors on Solaris.
Headers in the GNU C++ standard library may be implicitly included in other headers. For example, in version 4.5.1, the '<fstream>' and '<stdexcept>' headers both include the '<string>' header. If your source file includes either of these two headers, then you may use strings without an #include <string> statement, relying on this implicit inclusion. Other implementations of the C++ standard library may not do this implicit inclusion.
When faced with a missing prototype error on Solaris, it is worth checking a suitable reference (such as http://www.cplusplus.com) to find out which header declares the function according to the C++ standard, and then ensure that this header is explicitly included in your code.
A second shortcut provided by g++ is argument-dependent name lookup (also known as Koenig lookup), which may allow you to omit scope resolution of functions in the standard namespace. For example,

#include <algorithm>
#include <vector>
using std::vector;

void fun (vector<double> &y) {
    sort(y.begin(), y.end());
}

Since the arguments to the sort algorithm are in the std namespace, gcc looks in the same namespace for a definition of sort. This code therefore compiles correctly with gcc. Compilers that do not support Koenig lookup require the sort function to be resolved as std::sort.
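A sketch of the portable form, with the call qualified explicitly as std::sort so that it no longer relies on argument-dependent lookup:

#include <algorithm>
#include <vector>
using std::vector;

void fun (vector<double> &y) {
    std::sort(y.begin(), y.end());   // explicit qualification works on all conforming compilers
}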
Conclusions
The best way to check for portability of C++ code is simply to test it with as many compilers on as many platforms as possible. On Linux, various third-party commercial compilers are available at zero cost. The Intel C++ Composer XE compiler is free for non-commercial software development (note that you must comply with Intel's definition of non-commercial); the PathScale EkoPath 4 compiler recently became open source; the Oracle Solaris Studio compilers may be downloaded for free from the Oracle web site (subject to license terms). The use of these alternative compilers should help to detect problems not detected by GCC, although it may not uncover all portability issues since they also rely on the GNU implementation of the C++ standard library. It is also possible to set up alternate testing platforms inside a virtual machine, although the details of this are beyond the scope of this article.
Recognizing that most R package authors do not have the time to set up their own testing platforms, this article should help them to interpret the feedback from the CRAN tests on Solaris, which provide a rigorous test of conformity to the current C++ standard. Much of the advice in this article also applies to a C++ front-end or an external library to which an R package may be linked.
An important limitation of this article is the assumption that package authors are free to modify the source code.In fact, many R packages are wrappers around code written by a third party.Of course, all CRAN packages published under an open source license may be modified according to the licence conditions.However, some of the solutions proposed here, such as using functions from the R API, may not be suitable for third-party code as they require maintaining a patched copy.Nevertheless, it may still be useful to send feedback to the upstream maintainer.
Nasal-temporal asymmetric changes in retinal peripheral refractive error in myopic adolescents induced by overnight orthokeratology lenses
Objective To observe the changes in peripheral refraction in myopic adolescents after overnight orthokeratology and its influencing factors. Methods This was a prospective study among young myopic adolescents aged 8–14 years (n = 21). The peripheral refraction of the subjects was measured at 5, 10, 15, 20, 25, and 30° on the nasal and temporal sides of central fixation with a WAM-5500 open-field refractometer. The axial length, baseline spherical equivalent refraction, and other parameters were also measured. The data were collected at baseline and at 1, 3, and 12 months after wearing orthokeratology lenses. Results The relative peripheral refraction on the nasal and temporal sides from the center to 30° eccentricity revealed relative hyperopic defocus in all subjects at the baseline measurement. One month after wearing the orthokeratology lenses, the relative peripheral refraction changed to myopic defocus, the nasal-temporal relative peripheral refraction was asymmetric, and the observed difference was statistically significant. Positive correlations were found between the amount of change in nasal relative peripheral refraction and the baseline spherical equivalent refraction. The baseline nasal relative peripheral refraction was higher than that on the temporal side, whereas after orthokeratology the nasal value was lower than the temporal value. The changes at 30° on both sides were correlated with the axial elongation (rNasal = 0.565, rTemporal = 0.526, p < 0.05). Conclusion This study demonstrated that after orthokeratology, relative peripheral hyperopia in the myopic patients turned into relative peripheral myopia, and the nasal-temporal asymmetry changed significantly after orthokeratology, which was correlated with the baseline refractive state.
While there are no definite conclusions regarding the principles and mechanisms by which orthokeratology lenses control myopia, previous studies suggest that the most systematic and plausible explanation is the theory of peripheral refraction (10).
Peripheral refraction (PR) is the retinal refractive state at different eccentricities from central fixation, and the relative peripheral refraction (RPR) is the refractive state of the peripheral retina relative to central fixation (10). The PR can vary between myopic patients (11). The concept of peripheral refraction was first proposed by Hoogerheide and Rempt et al. (12,13) in 1971. According to the classification of RPR, it can generally be divided into three types: relative peripheral hyperopia (RPH), relative peripheral myopia (RPM), and relative peripheral emmetropia (RPEm). Previous animal experiments have demonstrated that different peripheral defocus areas around the retina induce different refractive states (14-17).
The design of the orthokeratology lens is exactly in line with the optical principle of peripheral defocus (18). Previous studies have found that after wearing the orthokeratology lens, the peripheral refractive state of myopic patients goes through a "myopic shift" (10,19). It also shows an asymmetric change between the nasal and temporal sides (20), and such refractive asymmetry might have a greater effect on myopia progression.
The principle and mechanism by which the orthokeratology lens controls myopia have not been accurately determined. It is currently accepted that the reverse geometric design of the orthokeratology lens causes RPM in the wearers. In the present study, we examined the influence of the orthokeratology lens on peripheral refraction and explored the nasal-temporal asymmetry changes of peripheral refraction after orthokeratology.
Study design
This prospectively designed, self-controlled observational study was conducted in accordance with the tenets of the Declaration of Helsinki and with the approval of the Ethics Committee of West China Hospital, Sichuan University (2017.43). This study was a part of our published study (21). All the subjects or their guardians signed written informed consent before being recruited into the study.
Inclusion and exclusion criteria
This study included myopic patients aged 8–14 years, with a degree of myopia between -1.00 D and -5.00 D and astigmatism ≤ 1.50 D. Good compliance and regular visits were also required. Patients with contact lens contraindications (such as dry eye, eyelid gland dysfunction, eyelid insufficiency, allergic rhinitis, etc.), a history of eye trauma or surgery, severe allergies to cycloplegic drugs (compound tropicamide eye drops) or contact lens care solution, strabismus, amblyopia, or other eye diseases, and general diseases (such as diabetes, Down's syndrome, rheumatoid arthritis, etc.) were excluded.
Orthokeratology lens fitting process
Referring to the orthokeratology lens fitting standard (22), all subjects in this study were fitted with the same brand of orthokeratology lens so as to reduce the differences between brands and minimize variations in the measurements due to different lens designs. The lenses were Euclid VST designs (Euclid Systems Corporation, USA), manufactured from oprifocon A (Boston Equalens II) with an oxygen permeability (DK) of 85 × 10-11 (cm2/s)[mLO2/(mL × mmHg)]. During the fitting process, the lenses were fitted to maintain a central position on the cornea, and the centration of the overnight orthokeratology lens was checked by corneal topography after the lenses were removed by the wearers during the follow-up period.
Peripheral refraction
The data from both eyes were highly correlated and only the data of the right eye were included in further analysis.
The PR of the right eye was measured with a WAM-5500 open-field auto-refractor. To exclude the influence of accommodation during the measurement, cycloplegia was induced with compound tropicamide eye drops instilled every 5 minutes for a total of 4 times. After the last instillation, the subjects were asked to close their eyes and rest for 20 minutes; a penlight was then used to evaluate pupil dilation before the measurement, and the accommodative amplitude was measured to confirm that the influence of accommodation had been eliminated.
The subjects, whose heads were stabilized by the frontal support and chin rest, were asked to stare at a measuring plate at 33 cm (Figure 1); the measuring plate was fixed to a rod used to hold the visual target. The optometrist then moved the Maltese cross along the scale (from central fixation to nasal 30° and temporal 30°, in 5° steps), measuring each eccentricity nine times and recording the result as the spherical equivalent (SE).
Relative Peripheral Refraction (RPR) is defined as the difference in refractive state between each eccentricity and the central retina. The RPR equation was as follows:

RPR X = SE X − SE c

where SE X represents the SE at eccentricity X from the center and SE c represents the SE at central fixation.
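For example (with illustrative values rather than measured data), if the SE at nasal 30° is −2.00 D and the SE at central fixation is −3.00 D, then RPR = −2.00 − (−3.00) = +1.00 D, i.e., relative peripheral hyperopia; a negative RPR would correspondingly indicate relative peripheral myopia.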
Statistical analysis

SPSS 22.0 and STATA 16.0 software were used for the statistical analyses. Normality was tested with the Shapiro-Wilk normality test. Continuous variables were expressed as means with standard deviation (SD), or medians with the interquartile range (IQR), as appropriate. Single-factor repeated-measures analysis of variance (ANOVA) was used to compare the measured data at each time point. The paired t-test and Wilcoxon signed-rank test were used to assess the difference in RPR at each eccentricity. Non-parametric data were analyzed by the Friedman test followed by Dunn's multiple comparisons test. Correlations were analyzed using Spearman's (non-normal data) or Pearson's (normal data) correlation. All values were rounded to three decimal digits. P < 0.05 represented a statistically significant difference.
Demographics
A total of 21 subjects who accepted the peripheral refraction measurement were enrolled in the present study. They were measured at baseline (BL) and at 1, 3, and 12 months after wearing the lenses. Only the data from the right eye were included in the RPR analysis to avoid bias. Among them, two subjects missed the last peripheral refraction measurement due to studying abroad and were therefore excluded from the data processing. The demographics of the subjects are shown in Table 1.
Peripheral refraction
The PR of these subjects was measured before and after orthokeratology (Figure 2). The RPR of the subjects is shown in Figure 3. During the follow-up period, the defocus area around the nasal and temporal periphery changed. Figure 3 shows that the RPR changed from relative hyperopic to relative myopic defocus. The significant differences in RPR at each eccentricity in the different follow-up periods were calculated (Table 2).
At the start of the follow-up period, the mean axial length of the participants' right eyes was 24.78 ± 0.93 mm, and after the 12-month follow-up it was 24.95 ± 0.88 mm. The mean axial elongation was 0.01 ± 0.04 mm at the 1st-month follow-up, 0.04 ± 0.07 mm at the 3rd-month follow-up and 0.17 ± 0.14 mm at the 12th-month follow-up (Figure 4), indicating that the value at the end of the follow-up (12th month) was significantly different from the baseline value (p = 0.0004). Table 2 demonstrates that the nasal-temporal asymmetry existed both before and after orthokeratology; however, it should be noted that the pattern of the difference changed after wearing the lens. Before orthokeratology, the nasal RPR at each eccentricity was larger than the temporal RPR. In contrast, after orthokeratology, the temporal RPR at the different eccentricities was larger than the nasal RPR. Moreover, the nasal-temporal asymmetry of RPR persisted up to 20° of eccentricity after orthokeratology, whereas beyond 20° significant changes were observed at N30 (p = 0.014) and T30 (p = 0.021), with no significant changes at the other positions (Figure 5).
After 1 month of orthokeratology, the amount of RPR change was not related to the SE 0 (Table 3). After 3 months of wearing the lens, a positive correlation was found between the RPR change and the SE 0 , and an asymmetric effect on the nasal and temporal sides was observed, with a stronger correlation on the nasal side. One year after wearing the lens, the number of eccentricities at which RPR changes were associated with the SE 0 decreased, but the association still showed nasal-temporal asymmetry, with a stronger correlation for the nasal RPR changes.
Discussion
Due to the unique design of the orthokeratology lens, when the lens covers the wearer's cornea, the reverse geometric design of the lens comes into direct contact with the front surface of the cornea, thus changing the corneal curvature (19, 23) and the RPR through several mechanisms (10). In addition to reshaping the corneal form, the orthokeratology lens may function under the principle of peripheral refraction to control myopia. This study measured the refractive state of orthokeratology lens wearers across the nasal and temporal gaze range. Our results revealed that the RPR of myopic patients changed after orthokeratology. We also highlighted the asymmetric changes of RPR on the nasal and temporal sides before and after orthokeratology and explored the reasons that might cause this asymmetry.

Figure 4. Axial length elongation of subjects during the follow-up period.
Since horizontal PR has been more significantly related to myopia than vertical PR (24, 25), only horizontal PR was measured in this study. Most subjects showed RPH before orthokeratology. Preclinical studies showed that peripheral retinal hyperopic defocus could promote myopia progression (26,27). In their clinical studies, Hoogerheide and Rempt et al. (12,13) first suggested that myopia progression in patients with RPH was faster than in those with RPM in the same age group, while young adults with positive parental myopia had increasing AL and RPH compared to the control group with negative parental myopia (28).
This study found that after orthokeratology, the RPR of wearers changed from RPH to RPM, and the difference was statistically significant. These results confirm that the design principle of the orthokeratology lens does affect the RPR of lens wearers (29). A previous study compared the RPR of myopic patients wearing a single-vision spectacle lens with that of orthokeratology wearers, revealing that the subjects with a single-vision lens did not change from RPH, whereas the orthokeratology group changed from RPH to RPM, which is consistent with our results (30,31). According to the results of this study, the initial nasal RPR was larger than that on the temporal side, whereas after wearing the orthokeratology lens, the nasal RPR was smaller than the temporal RPR and changed into RPM. Moreover, it seems that orthokeratology promoted symmetric RPR changes on the nasal and temporal sides of lens wearers (19). The specific cause of these RPR changes might be related to the mechanism of the myopia control effect exerted by the orthokeratology lens.
This study also found that the RPR changes at N30 and T30 after 3 months of lens wear were positively correlated with the AL. This conclusion is similar to the results of Gifford et al. (30,31), who confirmed that peripheral defocus could directly affect the change in AL, i.e., the progression of myopia (32,33). These studies suggested that the change in AL could be preliminarily predicted from the change in RPR. Based on the results of the one-year follow-up, it was found that the nasal RPR was positively related to the SE 0 , which is similar to the results of Charman et al. (34), and might be one of the reasons explaining the asymmetric changes on the nasal and temporal sides. These results can serve as a reference for designing individualized orthokeratology lenses for better myopia control efficacy.
Nevertheless, this study had some shortcomings. The small sample size may have introduced bias into the results, so further research on this issue should be conducted with a larger sample. Furthermore, the study was limited in terms of follow-up time, and a longer follow-up duration is needed as well.
Conclusion
Taken together, these results suggested that the correlations described above revealed a nasal-temporal asymmetry in RPR, where the SE 0 was more strongly correlated with the nasal RPR changes. This correlation was one of the reasons for the asymmetric changes in nasal and temporal RPR after orthokeratology, and it can serve as a reference for designing individualized orthokeratology lenses for better myopia control efficacy.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by the Ethics Committee of West China Hospital, Sichuan University. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin.
Assessing Rainfall and Temperature Changes in Semi-Arid Areas of Tanzania
This paper examines the variability of rainfall and temperature in Igunga and Kishapu Districts using time series data (1985 to 2016) from the Tanzania Meteorological Agency. The regression analysis results show rainfall variability of R = 0.096 in Igunga and R = 0.186 in Kishapu, which implies that about 0.96% and 1.86% of the changes in rainfall across the districts are associated with changes in weather variables. A more considerable change in the amount of rainfall was evident in Igunga than in Kishapu District. In both districts there was a shift in the months receiving the most rain. Generally, rainfall showed a decreasing trend in both districts. The paper also examined temperature trends in the two districts; the findings showed an increasing trend through October in both districts. From this point of view, higher temperatures can increase evapo-transpiration, which in turn can reduce moisture for the crops, adversely affect pasture productivity for livestock, and lead to a shortage of water for both crops and livestock. Annual rainfall variability trends, however, increased, indicating that annual variability was somewhat a common feature in the study districts. District efforts should therefore be directed towards the support of crop and livestock adjustments in order to buffer the impacts of rainfall and temperature variability during critical periods for the growing of crops and pastures.
Regarding droughts in Tanzania, studies show that the frequency of droughts has increased over the past few decades, especially in semi-arid areas such as Dodoma, Shinyanga, Singida, Tabora and some parts of Arusha and Iringa [2] [5] [6]. Rainfall variability data analysed for the period between 1974 and 2005 in the semi-arid Shinyanga Rural District in Tanzania showed no significant decrease over time. However, decreasing measured rainfall and increasing temperature were reported for the period between 1992 and 2007 in Manyoni, another semi-arid area in Tanzania [5] [7].
Climate studies indicate that mean temperatures and precipitation in the country have changed over time [8]- [13]. The Fifth Report of the Intergovernmental Panel on Climate Change (IPCC) on climate variability in Tanzania provides more evidence on the occurrence of the phenomenon than previous reports [14]. The average annual rainfall of Tanzania shows a very high level of variability over the past years [15]. Literature reveals a high degree of agreement that climate variability and change have already happened, and that they are global phenomena [12] [13] [16] [17]. The proponents of the phenomena are of the view that rainfall is decreasing, while temperature is increasing over time.
However, they fail to explain seasonal variability particularly within crop growing seasons over time. Some scholars are of the view that climate variability is not new in semi-arid regions; and that such variability in climate has been affecting smallholder farmers and pastoralists for many decades [18] [19] [20] [21] [22]. According to [6] for example, inter-annual variability of rainfall and temperature in Tanzania is common. Frequent dry spells have resulted in reduced crop yields and increased food shortages leading to food insecurity [5]. Furthermore, annual rainfall data analysis shows a decreasing trend at the rate of 3.3% per decade, more so in southern Tanzania, while the mean annual temperature has increased by 0.23˚C per decade during the period between 1960 and 2003 [23]. Both day time and night time temperatures show an increasing trend, particularly during January and February; but nighttime temperatures reveal an increasing trend at 19.8% per year relative to day time temperature, which increased at 13.6% per year between 1960 and 2003 [24].
Description of the Study Areas
Igunga and Kishapu districts were selected purposively for the study because they are situated in semi-arid regions. Igunga District is located in Tabora Region and lies between latitudes 3˚51'S and 4˚48'S of the Equator and longitudes 33˚22'E and 34˚8'E of Greenwich (Figure 1). Igunga District receives rainfall of up to 700 mm per annum [28]. The rainfall season spans the period from November to April. The southern and south-western parts of the district get more rain than the northern and north-eastern parts [28]. About three-fifths of the district's population cultivate cotton and sunflower, which are the main cash crops.
Kishapu District covers an area of 9226 km², and lies between longitudes 36˚30'E and 33˚30'E and latitudes 3˚45'S and 5˚00'S. The total population is 272,999 in 35,500 households, with an average household size of 8 people.
89.5% of the people live in rural areas [30], 75% own livestock and 39% practice a mixed crop and livestock farming system [31]. The mean annual rainfall in Kishapu lies between 600 mm and 800 mm, and surface temperature ranges from 16˚C in June to 30˚C in October. The area lies at an altitude of 1000-1200 m above sea level. The highest temperature is experienced in October, just before the onset of rainfall. A dry spell normally occurs between mid-January and February [31]. The rainfall regime in both districts is unimodal, starting in November and ending in April [32]. The major cash crops are cotton, sunflower, groundnuts, green gram, onions, pigeon peas and cowpeas. Livestock comprise cattle, goats, sheep and donkeys. The major food crops grown are sweet potatoes, sorghum and maize. Other economic activities are mining and sunflower oil processing.
Materials and Methods
Daily and monthly rainfall data for Igunga District from 1985 to 2016 (31 years) were obtained. Climatic data such as rainfall and temperature were analysed using Excel to generate tables and graphs. SPSS was employed to generate means and variances, skewness and kurtosis, which were used to assess the changes in climate [33].
Maximum temperatures are recorded during the day time and thus, play a critical role in controlling evapo-transpiration and drying up of water bodies [23].
On the other hand, minimum temperatures are obtained during the night. For rainfall variability, the analysis focused on annual and seasonal variability trends because variability can reveal dry and wet periods over time.
During data analysis, both variability and monthly means were computed.
The variability was computed as a deviation from a long-term (annual) mean.
The rainfall and temperature annual variability are presented in Tables 1-4.
The hypothesis that the study districts did not experience significant increasing trends in inter-annual rainfall variability for the period between 1985 and 2016 was tested at the 5% level of significance. To analyze the meteorological data, the R statistical package was used to perform simple regression analysis of the rainfall and temperature data. Several studies have also used this approach in the past to analyse evidence of climate change [33] [34].
The dependent variable [Y (j)] was the physical factor (mean rainfall, mean minimum temperature, mean maximum temperature) and the independent variable was time.
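As a rough illustration of the trend analysis described above (a minimal sketch with hypothetical values; the authors used Excel, SPSS and the R statistical package rather than the code below), annual anomalies relative to the long-term mean and the least-squares trend can be computed as follows:

#include <cstdio>
#include <cstddef>
#include <vector>

int main() {
    // Hypothetical annual rainfall series (mm); not the measured station data.
    std::vector<double> year = {1985, 1986, 1987, 1988, 1989};
    std::vector<double> rain = {650.0, 700.2, 610.5, 580.0, 640.3};
    const std::size_t n = rain.size();

    // Long-term mean, used to express each year as an anomaly (deviation).
    double mean = 0.0;
    for (double r : rain) mean += r;
    mean /= n;

    // Least-squares regression of the anomaly on time.
    double xbar = 0.0;
    for (double y : year) xbar += y;
    xbar /= n;

    double sxy = 0.0, sxx = 0.0, syy = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        double dx = year[i] - xbar;
        double dy = rain[i] - mean;          // anomaly for year i
        sxy += dx * dy;
        sxx += dx * dx;
        syy += dy * dy;
    }
    double slope = sxy / sxx;                 // trend in mm per year
    double r2    = (sxy * sxy) / (sxx * syy); // share of variability explained

    std::printf("trend = %.2f mm/year, R^2 = %.3f\n", slope, r2);
    return 0;
}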
Trends in Monthly Rainfall and Temperature in Kishapu and Igunga Districts
Rainfall is a major climate parameter with the highest degree of spatial and temporal variability. The analysis indicates that there were significant variations in rainfall between the months.
These results are in line with a study conducted by [36] analyzing global monthly mean precipitation. It is important to note that variability of rainfall patterns leads to a redistribution of rainfall. Further, precipitation had a decreasing trend over time, which led to an increase in extreme droughts and shortages of water from April to September. Therefore, an adaptation response to these changes would be appropriate and helpful to government efforts to avoid potential agricultural losses. Rainfall in the study districts has had a consistent falling pattern. Consistent rainfall patterns suggest that it rained at the time farmers and agro-pastoralists expected it to rain [23]. Nevertheless, rainfall variability, punctuating the general trend, can be beneficial when the increasing annual rainfall trend exceeds the range for semi-arid areas. The common situation for Igunga and Kishapu districts is that there was seasonal rainfall variability in both districts, and the crop growing season normally began in November and continued to the end of January.
In Kishapu District, on the other hand, temperature shows little variation, with increasing trends in both minimum and maximum temperature. Table 2 shows that October and December are the two months with the highest temperatures, implying that these were the hottest months throughout the period between 1985 and 2016. In Igunga District, the highest daytime temperatures occurred in October, November and December, while the lowest occurred in November and January, ranging from 24.0˚C to 33.6˚C (Table 4). Results showed that the mean maximum temperature for Igunga District in Tabora Region was 38.67˚C in February over the period between 1985 and 2016. This implies that the periods of highest temperature were also dry periods. It also implies that the increase in maximum temperature was accompanied by a decreasing amount of rainfall in Igunga District in particular (Table 1). Maximum temperature is normally recorded during the day time, and thus its increase most probably reduces soil moisture through evaporation and evapotranspiration, which in turn negatively affects crop and pasture development. This has also been reported in Singida District, Tanzania by [23].
The findings in Table 1 and Figure 2 show that maximum temperatures were increasing (Figure 3). However, according to the literature [9], semi-arid areas receive rainfall ranging between 200 and 800 mm per annum, and the observed decreasing rainfall trend is adversely affecting smallholder farmers and agro-pastoralists whose livelihoods largely depend on the rain-fed farming system. Seasonal variability of rainfall and temperature was similarly reported by [24] and [35] at the national level in Tanzania. As shown in Figure 3, the inter-annual variability in Kishapu revealed an R² of 0.078 for annual temperature. This translates into moderate temperature variability of 7.8% related to changes in time. For Igunga, the R² was 0.1724, which translates into strong temperature variability of 17.24% over time. Thus, change in time explained 7.8% of the temperature variability in Kishapu and 17.24% in Igunga. This suggests strong annual trends of temperature variability for the period under concern. In Table 5, the p-value was 0.007 (p > 0.005), implying that the change was not statistically significant at the 5% level of significance. The R² was 0.1195, translating into 11.95% of the inter-annual temperature variability being associated with change in time between 1985 and 2016.
Further, the correlation between temperature and time is significant, as shown by the Pearson correlation test at the 1% probability level (Table 5). The result indicates that there is no statistically significant trend in the average temperature over time in the study areas. The R-squared statistic also indicates a weak relationship between the variables average temperature and time (season).
Rainfall and Temperature Variation during Growing Season in Kishapu and Igunga
Annual crop yields for the major crops (maize, sorghum, cotton and sunflower, among others) depend on planting, weeding and harvesting, which are linked directly to rainfall events.
The period of planting is the time of greatest uncertainty and risk because planting decisions largely shape the other activities and may not easily be revised after a certain time.
Results in Table 1 are consistent with those reported by [23] in a study of climate variability in Shinyanga and Singida. Intra-seasonal factors, such as the timing of the onset of the first rains affecting the crop planting regimes [39], the distribution and length of rain during the growing season [40] and the effectiveness of rains in each precipitation event [39], are the real criteria that determine the effectiveness and success of farming. Furthermore, in Figure 5, the findings revealed that the linear rainfall trend in Kishapu was decreasing over time, like the Igunga rainfall anomaly trend. However, the R² of rainfall variability was 0.00051 and 0.0237 in October and November respectively, and the variability was higher at the Kishapu station than at the Igunga rainfall station. According to [23], climate variability and change occurred through increased rainfall unpredictability, increased frequency of drought, increased duration of dry spells and warming. Such climatic changes increased the mobility of farmers and agro-pastoralists in search of water and pasture for livestock and also brought changes in the crop varieties to be grown.
Conclusions and Recommendations
This paper examined trends in seasonal variability of rainfall and temperature in Igunga and Kishapu districts. Statistically, the temperature showed an increasing trend, while precipitation was characterized by large inter-annual variability and a decreased amount during the rainy season. Low levels of precipitation were found from June to September, while high levels of precipitation occurred in December.

These trends were confirmed by local perceptions obtained from the qualitative study.

Specifically, the study assessed trends in the monthly temperature and rainfall variability. Based on the results and discussions, the paper concludes that the meteorological data show increasing average temperatures and decreasing annual rainfall patterns for the past 31 years examined (1985 to 2016). The analysis indicates that the onset and end of rainfall during the growing period has become more erratic and unpredictable since 1985. Annual rainfall variability trends, however, increased minimally, indicating that annual variability was somewhat a common feature in the study districts. The analysis indicated a shift in the onset of the long rains from October/November to December/January, with a shortening of the rainfall period and an increased frequency of drought. As far as temperature was concerned, it had an increasing trend, and the maximum temperatures also showed considerable variability, a typical characteristic of semi-arid areas. Further, whereas annual variability showed rainfall variability in both districts, seasonal trends showed considerable rainfall and temperature variability within and between seasons. When analyzing climate variability, it is therefore important to consider both seasonal and inter-annual variability so as to have a clear understanding of the degree of climate variability and change.

Based on the results and conclusions, the paper recommends that local government authorities and other independent initiatives should direct their efforts towards the support of crop and livestock adjustments in order to cushion the impacts of rainfall and temperature variability during critical periods for the growing of crops and pastures. Capacity building for farmers and livestock keepers through training is still needed, especially on appropriate adaptation strategies to climate change. Such strategies can be applied to address risks caused by insufficient rainfall, higher daytime temperature, as well as rainfall and temperature variability, which have been demonstrated in this paper. Climate change policies should be actively incorporated in all developmental plans. For example, water harvesting can be promoted for irrigation to supplement rain-fed agriculture and for consumption by people and livestock in both districts, as well as setting up policies for animal breeding and crop varieties in the study areas to provide drought-tolerant breeds, pastures and crop varieties.
Tight Junction Protein Signaling and Cancer Biology
Tight junctions (TJs) are intercellular protein complexes that preserve tissue homeostasis and integrity through the control of paracellular permeability and cell polarity. Recent findings have revealed the functional role of TJ proteins outside TJs and beyond their classical cellular functions as selective gatekeepers. This is illustrated by the dysregulation in TJ protein expression levels in response to external and intracellular stimuli, notably during tumorigenesis. A large body of knowledge has uncovered the well-established functional role of TJ proteins in cancer pathogenesis. Mechanistically, TJ proteins act as bidirectional signaling hubs that connect the extracellular compartment to the intracellular compartment. By modulating key signaling pathways, TJ proteins are crucial players in the regulation of cell proliferation, migration, and differentiation, all of which being essential cancer hallmarks crucial for tumor growth and metastasis. TJ proteins also promote the acquisition of stem cell phenotypes in cancer cells. These findings highlight their contribution to carcinogenesis and therapeutic resistance. Moreover, recent preclinical and clinical studies have used TJ proteins as therapeutic targets or prognostic markers. This review summarizes the functional role of TJ proteins in cancer biology and their impact for novel strategies to prevent and treat cancer.
Two types of barrier functions are described to be mediated by TJs, namely the gate and the fence functions [1]. Indeed, the regulation of paracellular permeability is one of the major physiological functions of TJs, as this controls the transport of molecules across tissues, and between different body compartments [6]. In this context, selective permeability is tightly controlled by size and charge. The size-selective pathway allows the diffusion of macromolecules up to a size limit of ∼30-60 Å, in contrast to the charge-selective small-pore pathway with pore diameters of ∼4-8 Å [7]. On the other hand, TJ proteins are involved in the fence function, also known as the intramembrane diffusion barrier, which refers to the segregation of the apical and basolateral plasma membranes [8].
Beyond their classical functions, TJ proteins also modulate polarity, differentiation, growth and proliferation, as well as cell migration and motility, as these molecules act as arbitrators and transducers of cell-to-cell adhesion and signaling cascades [9]. Consequently, TJ protein alteration during pathological processes has been described to interfere with the proper functioning of the cellular machinery and homeostasis maintenance [10]. In cancer, changes in the expression and/or localization of these molecules are frequently reported ( Figure 1) [11]. However, many of these studies are observational, and well-founded mechanistic studies that link these multi-component complexes to disease pathogenesis are lacking [12]. Dissecting the role of TJ complexes in various aspects of oncogenic transformation could catalyze our conception of TJ proteins as bidirectional signaling hubs and advance our perception of their multifunctional nature as an inherent player of the evolutionarily conserved signaling cascades. In this context, this review aims to tackle the interrelation between TJ components and signal transduction in cancer pathophysiology from a molecular, rather than a cellular biology perspective. Considering the frequent dysregulation in malignant diseases, we further discuss TJ components as targets of anti-cancer treatments. anchorage-independent growth and Raf-1-mediated transformation of salivary gland epithelial cells [29]. In addition, occludin extracellular loop 2 regulates the localization of TGFÎ 2 , the TGF-β type I receptor responsible for inducing the dissolution of tight junctions and acquisition of a mesenchymal phenotype [30]. Taken together, an intimate link appears to bridge TJ proteins to the well-established signaling pathways in carcinogenesis. Indeed, various TJ components crosstalk with a myriad of signaling pathways by directly modulating these cascades or through intermediate proteins, such as kinases and phosphatases. In addition, TJ proteins conserve the ability to rapidly modulate their functional properties and permeability in response to oncogenic stimuli. These diverse regulatory mechanisms not only allow the fine tuning and transmission of external signals to the cell interior, but also position TJ protein dynamics as a key aspect in cancer initiation and progression.
Tight Junction Proteins as Central Mediators of Cancer Hallmarks
The following section thoroughly examines the functional crosstalk between a variety of TJ components and the myriad of signaling pathways that control cell dynamics, in terms of survival, migration, epithelial-to-mesenchymal transition and stemness (Figure 2). The role of TJ components in epigenetic cell regulation is also discussed.
Figure 2. (a) Tight junction proteins are identified as potential mediators of apoptosis/anoikis resistance. This is mainly mediated through the upregulation of caspase-3, PARP and Bcl-2, in parallel to the modulation of Src and Akt phosphorylation, coupled to β-catenin overexpression. (b) Tight junction proteins are also potential mediators of cancer stem-like phenotype acquisition, where they contribute to self-renewal and dedifferentiation, as well as therapeutic resistance and poor prognosis. In this setting, CLDN1 overexpression increases Notch signaling (b1), in parallel to the induction of MMP-9 expression and p-ERK signaling (b2), as well as through its non-canonical role in regulating Notch/PI3K/Wnt/β-catenin Ser552 signaling. (c) Beside stemness, tight junction proteins are crucial regulators of cell migration and plasticity, where they activate key signaling pathways related to epithelial-to-mesenchymal transition (EMT). Indeed, CLDN1 was described to promote mesenchymal transformation by upregulating Slug and Zeb1 through the activation of the c-Abl-Ras-Raf-1-ERK1/2 signaling axis (c1). CLDN6 was also described to promote EMT by increasing YAP1 nuclear translocation and enhancing its interaction with Snail1 (c2), as well as through the activation of the PI3K/Akt pathway, along with JAM-A. MarvelD3 has been suggested to regulate migration through the JNK pathway (c3), and CLDN9 was shown to promote invasive cellular behavior by activating the Tyk2/STAT3 signaling pathway (c4). TJ proteins were also described to promote collective cell migration (c5). ALDH: aldehyde dehydrogenase; CLDN: claudin; JAM: junction adhesion molecule; NCID: Notch intracellular domain; PARP: poly (ADP-ribose) polymerase; TNF-α: tumor necrosis factor-alpha.
Modulation of Proliferation and Apoptosis Resistance in Cancer Initiation and Progression
Standing as the most fundamental trait of cancer cells, sustained proliferative signaling is the first cancer hallmark among others to support tumor development and progression [31]. Emerging evidence points toward TJ proteins as essential regulators of cell proliferation through multiple mechanisms, including, but not limited to, microenvironment alteration, transcriptional regulation, and alteration in the cellular localization of cell cycle control proteins [32]. In this context, knockdown of the tight junction protein 1 (TJP1) gene expression significantly decreased bladder cancer cells' growth, via dysfunction of the miR-455-TJP1 axis, where the latter suppressed TJP1 expression by directly targeting its 3 -untranslated region [33]. Another microRNA, miR-497, was also described to inhibit cell proliferation upon decreased CLDN2 expression. Mechanistically, treatment with the HDAC inhibitors trichostatin A and sodium butyrate decreases the stability of CLDN2 mRNA through the elevation of miR-497 [34]. CLDN2 expression increased cellular proliferation, anchorage-independent growth and tumor growth in vivo, potentially via the epidermal growth factor receptor (EGFR) transactivation, a key regulator of colorectal carcinogenesis [35]. Furthermore, inhibition of cell proliferation in lung adenocarcinoma is partly related to the decrease in CLDN2 expression upon treatment with the DNA methyltransferase inhibitor azacitidine (AZA) [34]. This is explained by the decrease in NF-κB phosphorylation and binding to the promoter region of CLDN2 [36]. Besides, CLDN2 retains its ability to complex with ZO-1, ZONAB, and cyclin D1, sequestering them in the nucleus, which results in enhanced cellular proliferation in lung adenocarcinoma [37]. Moreover, lipidoid-formulated CLDN3 siRNA intratumoral and intraperitoneal injections significantly reduced cell proliferation in OVCAR-3 xenografts, and tumor burden in MISIIR/TAg transgenic mice, respectively [38]. In addition, CLDN4 overexpression increased MCF-7 cells' proliferation, with tumor size in nude mice transplanted with CLDN4-silenced MCF-7 cells being reduced [39].
Cell cycle regulators/effectors and apoptotic stimuli link cellular proliferation to apoptosis, with the dysregulation of this programmed cell death being one of the leading drivers of tumorigenesis [40]. For example, it was shown that CLDN1 knockdown couples cell cycle arrest to apoptosis due to the upregulation of β-catenin expression [41]. In this context, the functional role of TJ proteins has been described in apoptosis, with their disruption considered as an early event that initiates caspase activation and cell death through the interaction of occludin and CLDNs with the extrinsic apoptotic signaling pathway [42]. Indeed, CLDN1 exhibits anti-apoptotic effects under tamoxifen or TNF-α treatment in MCF-7 cells, with its knockdown increasing the expression of caspase-8 and cleaved poly (ADP-ribose) polymerase (PARP) [41,43]. An anti-apoptotic function of CLDN1 was also corroborated in nasopharyngeal carcinoma cell lines under serum deprivation or 5-fluorouracil treatment [44]. Furthermore, caspase-3 activation was increased following treatment with the apoptotic inducer staurosporine upon CLDN4 expression loss in ovarian tumor cells [45]. In line with these findings, CLDN4 overexpression reduced the rate of cell apoptosis [39]. On the other hand, resistance to anoikis, an apoptosis subtype, is a critical prerequisite for carcinoma progression [46]. CLDN1 expression confers resistance to anoikis in colon cancer cells. This is mainly dependent on the activation of Src, a tyrosine kinase identified to promote anoikis resistance, with which CLDN1 was described to form a multiprotein complex, along with ZO-1 [47]. This results in the modulation of Akt phosphorylation and increased Bcl-2 expression. Furthermore, CLDN1-mediated anoikis resistance and Src involvement were also described in gastric cancer [48]. A potential role for β-catenin has been described in CLDN1-regulated anoikis resistance in gastric cancer cells, with β-catenin overexpression also reactivating Akt and Src signaling [48]. Interestingly, the contribution of CLDNs to anoikis resistance, and their druggable therapeutic potential, are highlighted by the antitumor effect of the quinazoline-based doxazosin derivative DZ-50 [49]. Indeed, this agent interferes with tumor growth and metastasis via sensitizing cells to anoikis, by targeting key functional intercellular interactions, focal adhesions, and tight junctions in prostate cancer. In this context, treatment with DZ-50 resulted in decreased cell survival, migration and adhesion to extracellular matrix components, with one of the primary downregulated targets of DZ-50 being CLDN11 [49]. It is worth mentioning that a pro-apoptotic role of CLDNs and occludin has also been reported [42]. For example, the number of apoptotic cells was decreased upon intratumoral injection of CLDN3 siRNA into OVCAR-3 xenografts [38]. This differential functionality could be related to the experimental model or the pathophysiological state in which these proteins were studied. Indeed, similar pro- and anti-apoptotic roles were reported for other molecules, for instance nitric oxide, where its dual contrasting apoptotic functions were dependent on its concentration, flux and cell type [50].
Beside descriptive findings, the interrelation between tight junction dynamics and intracellular signaling pathways that control cell cycles and apoptosis has not been fully explored. This highlights the unmet need to study the functional links between TJ modulation and prominent pathways that regulate proliferation and apoptosis, to link extracellular signals to transcription factors in the nucleus. These mainly include the MAPK/Ras/Raf/ERK, PI3K/Akt, Janus kinase/signal transducer and activator of transcription (JAK/STAT), wingless-related integration site (Wnt), and TGF-β pathways [51]. However, this is complexified by the functional crosstalk between these transduction cascades that not only regulate cell death and proliferation, but also converge to modulate invasion, metastasis and cellular plasticity, as discussed in the succeeding section.
Members of the TJ Protein Family as Regulators of Cell Migration and Plasticity
Abnormal TJ protein expression is linked to changes in cell plasticity and differential induction or suppression of EMT, thus highlighting these proteins as major regulators of invasion and metastasis [52]. In this context, CLDN1 expression was linked to enhanced invasiveness and metastasis of multiple malignancies, such as gastric carcinoma [53] and colon cancer [23,54]. Moreover, CLDN1 was described to promote mesenchymal transformation in hepatocellular carcinoma (HCC) by upregulating Slug and Zeb1 [55], and to enhance the invasive ability of oral squamous cell carcinoma by promoting the cleavage of laminin-5 gamma2 chains via MMP-2 and membrane-type MMP-1 [56]. Indeed, it is well conceived that CLDNs enhance cell invasion via the activation of MMPs [57]. In particular, CLDN1 activation of MMP-2 is mediated through the stimulation of the protein kinase C (PKC) signaling pathway in a panel of melanoma cell lines [58]. CLDN6 was also shown to induce MMP-2 activation through CLDN1. The latter is described to interact with the extracellular proMMP-2 through its ECLs, resulting in its activation by MMP-14 in human adenocarcinoma gastric cancer cells [59]. CLDN2, on the other hand, was described to increase the mRNA level and enzymatic activity of MMP-9 through elevating Sp1 nuclear distribution in the human lung adenocarcinoma cell line A549 [60]. In turn, proteolytically active MMPs not only degrade the ECM, but also form new cell-matrix and cell-cell attachments and change the adhesive phenotype of tumor cells toward EMT [61]. Beside proteases, CLDN7 expression was described to regulate E-cadherin expression and invasion in esophageal squamous cell carcinoma cells [62]. In addition, CLDN7 is required to recruit EpCAM for TACE/presenilin2, resulting in the generation of the EpCAM-cleaved intracellular domain, EpIC. The latter is responsible of the induction of EMT-associated transcription factor expression [63]. This broadens the conceptual framework for the mechanisms by which TJ proteins modulate invasion.
On the other hand, although CLDN4 overexpression increased the migration of breast cancer cells [39], the invasiveness and metastatic potential of pancreatic cancer cells was shown to be decreased upon CLDN4 expression [64]. In addition, CLDN7 downregulation may promote invasion and metastasis of colorectal cancer [65], and was positively correlated with the depth of invasion, lymphatic vessel invasion and lymph node metastasis in esophageal squamous cell carcinoma [66]. This positive correlation was also described for venous invasion and liver metastasis in the context of colorectal cancer [67] and distant metastases in high-grade serous ovarian carcinoma patients [68]. MarvelD3 was transcriptionally downregulated during mesenchymal transition in pancreatic cancer cells [69], whereas its expression inhibits EMT, along with NF-κB pathway inactivation, a main regulator of EMT and cell metastasis [70]. In line with these findings, MarvelD3, a dynamic regulator of the MEKK1-c-Jun NH 2 -terminal kinase (JNK) pathway, was described to reduce pancreatic cancer cells' tumor formation in vivo and Caco-2 cells' proliferation and migration. Indeed, MarvelD3 recruits MEK kinase 1 (MEKK1), an MAPK kinase, leading to the downregulation of JNK phosphorylation. Subsequently, this inhibits JNK-mediated transcriptional mechanisms that regulate cell behavior, including migration [71]. This highlights the fact that the altered expression of TJ proteins in various cancer types and tissues might differentially modulate cell migration, invasion and metastasis, with a potential disparity in transduction pathway activation. Despite this, it is well reported that these TJ proteins act as a pivot for intracellular signaling pathways that modulate metastasis in several malignancies [52], as detailed in the following section.
TGF-β-dependent pathway signaling. Among the pathways established to induce EMT, canonical SMAD-dependent TGF-β signaling is described to be a major driver [72]. In fact, TGF-β-induced cell migration is linked to the induction of CLDN1 expression in ovarian cancer cells [73]. Additionally, the RNA-binding motif protein 38 (RBM38), a pivotal mediator of TGF-β-induced EMT, positively regulates ZO-1 transcription via direct binding to AU/U-rich elements in its mRNA 3 -UTR in breast cancer [74]. On the other hand, SMAD2 suppresses CLDN6 expression through DNMT1-mediated methylation of CLDN6 promoter, thus promoting cell migration and invasion in breast cancer [75]. Beside direct modulation, CLDNs can contribute to the release of active TGF-β, thus enhancing EMT. For example, CLDN1 activates the membrane-type 1 matrix metalloproteinase, as mentioned previously [56], which is responsible for the proteolytical release of TGF-β from the subendothelial matrix [76]. Similarly, TGF-β1, its receptor, and receptor-mediated signaling are partly activated by MMP-2, also induced by CLDN1 [77].
Ras-Raf-MEK-ERK and PI3K/Akt signaling. TGF-β-induced EMT can also occur through non-SMAD pathways, for instance Ras-Raf-MEK-ERK and PI3K/Akt pathways [72]. The c-Abl-Ras-Raf-1-ERK1/2 signaling axis was shown to be activated upon CLDN1 expression, with the subsequent upregulation of Slug and Zeb1 and EMT induction in Chang cells [55]. In this context, PKCα regulated CLDN1 expression via Snail- and MAPK/ERK-dependent pathways during EMT in human pancreatic cancer, shedding light on the potential therapeutic value of PKCα inhibitors [78]. In line with these findings, Snail, but not Zeb1 nor Twist1, was also highlighted for its CLDN6-mediated invasive abilities in gastric cancer. Indeed, CLDN6 increased YAP1 nuclear translocation, which enhanced the interaction between YAP1 and Snail1 to promote EMT [79]. Furthermore, Snail enhanced the migration of squamous cell carcinoma by inducing the expression of CLDN11 through the tyrosine-mediated phosphorylation of the latter, which activated Src and suppressed RhoA activity [80]. CLDN6 also enhanced endometrial carcinoma cell migration via the PI3K/AKT/mTOR signaling pathway [81]. Junctional adhesion molecule-A (JAM-A) led to EMT via the activation of the PI3K/Akt pathway in human nasopharyngeal carcinoma [82]. On the other hand, expression of CLDN1 inhibited the migration potential of human osteosarcoma cells through the inhibition of the Ras/Raf/MEK/ERK signaling pathway [83].
Furthermore, CLDN3 and CLDN4 impeded EMT in ovarian carcinoma through the activation of the PI3K/Akt pathway [84], in line with CLDN7-mediated inhibition of cell migration and invasion through the ERK/MAPK signaling pathway in human lung cancer cells [85]. In parallel, CLDN18 suppressed human lung adenocarcinoma cell motility by inhibiting the PI3K/PDK1/Akt signaling pathway [86]. In addition, it has been shown that occludin downregulation in the context of Ras-Raf-driven epithelial transformation might play an essential role in mediating the loss of the structure and function of epithelial TJs through the MEK-ERK signaling pathway [29]. This differential regulation of cell behavior highlights the context-dependent role of TJ proteins as inducers or inhibitors of EMT in carcinogenesis.
Wnt/β-catenin/T-cell factor-lymphoid enhancer factor (TCF-LEF) signaling. Dysregulation of Wnt/β-catenin signaling can trigger the induction of EMT, which could lead to metastasis. This is mainly mediated by β-catenin nuclear translocation and binding to TCF-LEF factors, which activates the transcription of target genes with a pro-invasive expression profile [87]. Smad4 is a central intracellular signal transduction component of the TGF-β family that was described to mediate invasion-suppressive effects in colon cancer via the modulation of β-catenin/TCF-LEF activity, resulting in the repression of CLDN1 transcription [88]. This is further confirmed by the positive correlation of CLDN1 expression levels with β-catenin levels in gastric cancer [89]. Furthermore, CLDN3-modulated EMT is mediated through the regulation of the Wnt/β-catenin signaling pathway in lung squamous cell carcinoma [90], a role also shared by CLDN4 [91]. CLDN7 expression is significantly correlated with lymph node metastasis in salivary adenoid cystic carcinoma, where its expression regulates metastasis also through the modulation of Wnt/β-catenin signaling [92].
STAT signaling. The activity of a multitude of master EMT transcription factors that function to stimulate rapid transitions between epithelial and mesenchymal phenotypes is regulated by STAT3 [93]. CLDN9 was shown to promote the invasive behavior of hepatocytes in vitro, by activating the Tyk2/STAT3 signaling pathway [94]. On the other hand, Tyk2/STAT1 signaling was described in CLDN12-induced EMT in lung squamous cell carcinoma [95].
Besides signaling pathways, the localization of tight junction proteins is also altered during EMT. Enhanced non-junctional CLDN1 expression, e.g., in the nucleus or cytoplasm, was detected in colon carcinomas and metastatic lesions, in contrast to cell membrane-restricted staining in normal colonic mucosa [23]. It is worth mentioning that besides single tumor cell metastasis, collective cell migration is also described as a fundamental process where migrating clusters maintain cell-to-cell junctions and exhibit a higher invasive capacity and therapeutic resistance compared to single cell migration [96]. In this context, the aberrant expression of CLDN1 was shown to support collective cell migration [97]. In addition, CLDN11 was described to prompt the formation of circulating tumor cell clusters through the activation of a Snail-CLDN11 axis in head and neck cancers [80]. In line with these findings, the overexpression of CLDN11 and occludin was described to enhance the collective migration of peritumoral cancer-associated fibroblasts via TGF-β secretion [98,99].
Whether CLDNs retain the junctions between cancer cells or form cell adhesion complexes to enhance the metastatic efficiency remains unclear. Further studies are needed to decipher the additional aspects regarding the mechanisms by which TJ proteins modulate the modes of EMT-mediated cancer migration. This can be further clarified by coupling the study of the altered signaling pathways to the localization of the dysregulated TJ proteins. Importantly, and in light of the heterogeneous roles of TJ components at different stages of the metastatic cascade, context-dependent and individual assessment of these proteins' roles could reveal novel therapeutic targets. As many of the migration and EMT studies cited above have been conducted in conventional cell culture systems, the adoption of experimental models that better reflect physiological or pathological characteristics remains a crucial unmet need.
Functional Role of Tight Junction Proteins in Cancer Cell Stemness
Since their first identification in 1994, cancer stem cells (CSCs) have emerged as a critical cancer cell subpopulation endowed with tumor-initiating properties, self-renewal abilities and multi-lineage differentiation, providing a cellular basis for tumor heterogeneity and therapeutic resistance [100].
A growing body of evidence points toward the important role of tight junction proteins, in particular CLDNs, in cancer stem-like cell biology [101]. In this setting, claudin functions are regulated at various levels and by distinct mechanisms. cDNA microarray analysis identified CLDN1 as one of the most significantly upregulated genes in ovarian cancer-initiating cells [102]. Moreover, CLDN1 overexpression was shown to induce dedifferentiation in primary colon adenocarcinoma [23]. CLDN2 promoted the self-renewal of colon cancer cells in vitro, as well as colorectal cancer self-renewal in vivo, along with increasing the population of ALDH-high stem-like cells and favoring phenotypic transitions from ALDH-low toward ALDH-high subpopulations [99]. Furthermore, CLDN3 was uncovered as a positive regulator of cancer stemness in non-squamous non-small cell lung carcinoma, where stemness suppression and chemoresistance reversal were observed upon downregulation of CLDN3 transcriptional activity [103]. In contrast, CLDN1 depletion increased the invasive and CSC-like properties of hepatocellular carcinoma cell lines [104]. In line with these findings, CLDN7 deficiency was shown to confer stemness properties and to promote tumor-initiating cell features in colorectal cancer stem cells [22].
CSC regulation is complex and multiple intracellular signaling pathways and extracellular factors have been shown to be implicated. Of those, the Hedgehog (Hh) and Notch pathways are highlighted as key signals in this specific cell phenotype [105].
Hedgehog pathway. As an evolutionarily conserved pathway that is essential for cell fate determination, the aberrant activation of the Hh pathway serves as a crucial asset for CSC function and maintenance during tumorigenesis [106]. In this context, the expression of CLDN3, CLDN5, occludin, and JAM-A was increased in response to the activation of Hedgehog signaling. In contrast, treatment with cyclopamine, an Hh pathway inhibitor, decreased the expression of these proteins [107]. Cyclopamine also downregulated CLDN4 and occludin expression in colon cancer stem cells [108]. Importantly, CLDN1 is pinpointed as a direct transcriptional target of Hh pathway activation in breast cancer, as evidenced by the correlation between membranous CLDN1 expression and Hh paracrine pathway activation [109].
Notch pathway. Besides the Hh pathway, CLDN1 was identified as one of the dynamic regulators of Notch signaling [25]. CLDN1 overexpression increased Notch and Wnt signaling at the transcriptomic level, as evidenced by an increase in Hes1 and a decrease in Math1 expression in a mouse colon cancer model [110]. Indeed, CLDN1 upregulation activates Notch signaling in parallel to the induction of MMP-9 expression and p-ERK signaling, thus interfering with cellular differentiation, and enhancing susceptibility to mucosal inflammation and hyperplasia [25]. Beyond these mechanisms, upregulated CLDN1 expression was shown to promote Notch signaling through its noncanonical role in regulating Notch/PI3K/Wnt/β-catenin (Ser552) signaling, which underlies the induction of colitis-associated cancer [26]. Interestingly, this CLDN1/Notch axis could be therapeutically targeted by CLDN1-specific monoclonal antibodies, with the latter resulting in the inhibition of Notch cleavage in HCC cell-based and CDX animal models [111]. Moreover, the Notch signaling pathway was identified as one of the pathways regulated by CLDN5 in the context of lung cancer brain metastasis [112]. In addition, upregulated Notch expression in holoclones was associated with CLDN7 expression in colon adenocarcinoma [113].
Miscellaneous pathways. Other pathways are also implicated in CSC biology [114]. For example, the CLDN2-dependent regulation of stem-like cell self-renewal is mediated through the activation of YAP and downstream repression of miR-222-3p [99]. Moreover, indirect protein interaction was reported between CLDN7 and SOX9, a vital player in CSC self-renewal and a master regulator of several stem cell markers [22]. The human growth hormone (hGH)-STAT3-CLDN1 axis was described to be responsible for invasive and CSC-like properties in HCC [104]. Furthermore, CLDN1 regulation during ovarian cancer-initiating cell proliferation and invasion was shown to be mediated by miR-155, where the endogenous mature form of the latter may inhibit cancer-initiating cell growth via reducing CLDN1 expression by targeting its mRNA on the 3'-UTR [115].
Besides the potential role of TJ proteins in the regulation of CSC biology, a subsequent correlation with poor prognosis, therapeutic resistance and relapse could be speculated, all of which are key characteristics of CSCs [100]. Indeed, multiple tight junction proteins were highlighted as potential biomarkers for the prognostic outcome of cancer patients. For example, elevated JAM-A expression is significantly correlated with poor prognosis in breast cancer patients [116]. Partitioning defective protein 3 (Par3) and ZO-1 clustering on the cell membrane are indicators of poor prognosis in lung squamous cell carcinoma [117]. Moreover, CLDN1 is correlated with a poor prognosis of oral squamous cell carcinoma [118] and lung adenocarcinoma [119]. The poor prognosis of gastric and breast cancer patients is associated with CLDN4 overexpression, in line with the poor prognostic value of CLDN7 in gastric cancer [120][121][122].
On the other hand, the development of therapeutic resistance is a major challenge facing cancer therapy, in which CSCs play an important role [100]. Various mechanisms are employed by TJ proteins to mediate chemoresistance, including their effects on apoptosis and autophagy, as well as on drug transporters. In this setting, cisplatin resistance in non-small cell lung cancer was shown to be promoted by CLDN1-induced activation of autophagy via the activation of ULK1 phosphorylation [123]. Doxorubicin resistance was also reported in lung adenocarcinoma cells, where CLDN1 is speculated to inhibit the penetration of anticancer drugs into the target area [124]. CLDN4 knockdown increased cellular accumulation and sensitivity to cisplatin, pointing towards the potential involvement of CLDN4 in platinum resistance in ovarian cancer [125], in line with the enhanced sensitivity toward carboplatin and paclitaxel upon CLDN1 knockdown in ovarian cancer cells [126]. This could be partly explained by the interactions of some CLDNs with the microtubule network, notably tubulin, resulting in the re-shaping of its structure and polymerization toward a reduced apoptotic response to the microtubule-targeting paclitaxel [127]. Alternatively, relapse-free survival (RFS) was significantly shorter with high versus low CLDN2 or CLDN5 expression in breast cancer [128,129], with high CLDN4 expression also being associated with worse RFS [121]. Furthermore, a substantial association between high CLDN2 expression in cancer-associated fibroblasts and shorter survival in 5-fluorouracil- and oxaliplatin-treated metastatic colorectal cancer patients has been reported [130]. In addition, cytoplasmic CLDN3 and CLDN7 expression was associated with poor RFS in triple-negative breast cancer [131]. In the context of HCC, CLDN10 expression was underlined as a molecular marker of disease recurrence after curative hepatectomy [132]. Nevertheless, various studies have reported a negative correlation between tight junction protein expression and poor prognosis, therapeutic resistance, or disease relapse [101]. Furthermore, contradictory findings were also reported for the same CLDN molecule. For example, in renal cell carcinoma, CLDN1 expression was correlated with shortened disease-specific patient survival; however, an opposite finding was described for papillary renal cell carcinoma [133]. This emphasizes the heterogeneous nature of tumors and the potential differential expression pattern of tight junction proteins during distinct stages of tumor development, as well as cellular differentiation. This requires a careful assessment of these molecules, as the interplay between cancer stem cell enrichment and TJ proteins can reveal potential clinical perspectives of the latter as clinicopathologic parameters or prognostic factors.
TJ Proteins and Epigenetic Regulation in Cancer
In a recent update published in 2022, Hanahan proposed "nonmutational epigenetic reprogramming" as a new distinctive enabling characteristic that expedites the acquisition of cancer hallmark capabilities during tumor development and progression [134], highlighting the importance of these modifications in oncogenic transformation. In this setting, various epigenetic mechanisms are reported to regulate tight junction protein expression in favor of malignant transformation.
Although playing an essential role in biologic processes, aberrant promoter methylation is extensively associated with carcinogenesis [135]. CpG island hypermethylation of the occludin promoter enhances the tumorigenic, invasive, and metastatic properties of cancer cells [16]. JAM-3 is frequently downregulated in colorectal cancer through DNA methylation [136], with the latter also silencing CLDN1 [137]. CLDN3 promoter hypermethylation is described in HCC and advanced gastric adenocarcinoma [138,139], as is hypermethylation of the CLDN11 promoter in melanomagenesis [140]. On the other hand, CLDN4 upregulation during early gastric tumorigenesis is strongly associated with DNA hypomethylation, along with decreased repressive H3K27me3 and H4K20me3 histone marks and increased active H3K4me3 and H4Ac histone marks [141]. CLDN5 is also identified as an aberrant methylation target in pancreatic carcinoma [142], as are CLDN6 in breast and esophageal squamous cell carcinoma [143,144] and CLDN7 in breast ductal [145] and colorectal carcinoma [146]. Interestingly, differential CLDN4 expression in bladder cancer, with overexpression in differentiated carcinomas compared to downregulation in invasive/high-grade tumors, was associated with a low versus high level of CLDN4 methylation, respectively [147]. This is also noted with CLDN1, with an increased promoter methylation-expression pattern in recurrent ovarian cancer compared to the primary cancer [126]. In addition, CLDN1 promoter CpG island methylation was relatively frequent in estrogen receptor-positive breast cancer, but not in estrogen receptor-negative samples [148]. This could point towards a complex and differential epigenetically mediated expression pattern of TJ proteins in different tumor stages and grades, highlighting their potential as promising prognostic markers. In fact, CLDN1 has been described as a strong prognostic indicator of disease recurrence and poor patient survival in stage II colon cancer [149]. Furthermore, the combined low expression of CLDN3, -4, -7 and -8 in metaplastic and basal-like breast cancer is proposed to be a strong predictor of disease recurrence [150].
Epigenetic modulation of tight junction proteins is best demonstrated through treatment with epigenetic modulators/inhibitors. Indeed, AZA, a DNA methylation inhibitor, was found to downregulate CLDN2 expression [34]. Interestingly, in this context, epigenetic modulation could intersect with intracellular signaling pathways, in addition to regulating gene transcription. For instance, it was reported that CLDN2 is downregulated through the inhibition of Akt and NF-κB phosphorylation by AZA [34]. Indeed, interactions between the epigenetic machinery and various signaling pathways, including MAPK, Notch, Wnt, JAK/STAT, NF-κB and JNK pathways, have been described [151]. It could be of interest to link epigenetic modifications to signaling pathways in order to broaden our understanding of the complex, yet intersecting, mechanisms by which a cell induces transcriptional changes in response to external and internal signals during carcinogenesis. In this context, special attention should be paid to the dynamic nature of the tumor microenvironment, as it is correlated with the epigenetic reprogramming and the activation of tumor-promoting signaling cascades by harboring various hormones and growth factors. For example, estrogen stimulation upregulates CLDN1 expression in cervical adenocarcinoma via G protein-coupled receptor 30 through ERK and/or Akt signaling [152].
Taken together, signaling mediated by members of the TJ protein family plays a key role in the pathogenesis of solid tumors and is associated with tumor initiation, progression, and metastasis. These functional effects correlate with differential subcellular delocalization and expression patterns. However, the complexity and plasticity associated with the functions of TJ proteins in the mediation of tumorigenesis go beyond signaling pathway perturbations. This includes the loss of membrane polarity via tight junction abnormalities, increased paracellular permeability and junctional remodeling. The resulting dysregulation of this machinery can have deleterious effects not only on cellular homeostasis, but also on the interactions with the extracellular matrix. Coupling the perturbations in cell-to-cell adhesion to adhesion-independent signal transduction perturbations will broaden our understanding of how TJ molecules contribute to cancer pathogenesis.
Members of the Tight Junction Protein Family as Targets for Cancer Treatment
Considering the critical role of tight junctions in carcinogenesis, these proteins are currently perceived as potential therapeutic targets, with multiple approaches being employed, ranging from monoclonal antibodies (mAbs) and targeting molecules, to therapeutic gene delivery [13].
Antibodies that target the TJ components have been recently described, with their pharmaceutical activities being explored in experimental and clinical settings. Importantly, this therapeutic targeting is not limited to junctional proteins, but has also been applied to exposed TJ proteins located outside the TJs. Overexpression of non-junctionally exposed CLDN1 has been described at the basolateral membrane of human hepatocytes during advanced liver fibrosis and hepatocellular carcinoma, where CLDN1 mediates key regulatory cell functions, such as differentiation, proliferation, and migration, by recruiting signaling proteins in response to extracellular stimuli [153]. Non-junctionally exposed CLDN1 serves as a cell entry factor of HCV, a major cause of liver cancer worldwide [154]. Interestingly, treatment with humanized monoclonal antibodies (mAbs) that selectively target the extracellular loop 1 of non-junctionally exposed CLDN1 suppressed liver cancer growth and EMT in patient-derived ex vivo models and reprogrammed the tumor microenvironment in patient-derived HCC spheroids, an effect also confirmed in in vivo proof-of-concept CDX and PDX models [155]. Inhibition of cancer growth and invasion was mainly mediated through interference with oncogenic signaling pathways, notably the Notch cascade [111]. In addition, these highly specific mAbs demonstrated significant and robust antitumoral effects in vivo across cell line-derived and patient-derived xenograft models for intra- and extrahepatic cholangiocarcinoma [156]. Safety studies on non-human primates showed no detectable adverse events even at high steady-state concentrations, hence providing a preclinical proof-of-concept for CLDN1-specific mAbs for liver cancer prevention and treatment [157]. Collectively, these data provide an opportunity for the clinical development of CLDN1-specific antibodies for liver cancer. Given the high expression of CLDN1 in other solid tumors [110,158,159], CLDN1-targeting approaches may be used to treat a broad range of solid tumors. On the other hand, treatment with the human-rat chimeric IgG1 1A2 CLDN2-targeting antibody attenuated fibrosarcoma tumor growth without remarkable side effects [160]. The anti-CLDN4 extracellular domain antibody 4D3 synergistically enhanced the antitumoral effects of 5-fluorouracil or the anti-EGFR antibody C225 (cetuximab) in colorectal cancer [161]. KM3900, a monoclonal antibody that recognizes the extracellular loop 2 of CLDN4, induced antibody-dependent cellular cytotoxicity and complement-dependent cytotoxicity in vitro, while inhibiting pancreatic and ovarian tumor growth in SCID mice in vivo [162]. Similar anti-tumor activity was also noted upon treatment with KM3907, a dual-targeting monoclonal antibody against the extracellular loop 1 of CLDN3 and CLDN4 [163]. The anti-CLDN6 antibody IMAB027, also known as ASP1650, has been studied in women with recurrent advanced ovarian cancer [164], and in men with incurable platinum-refractory germ cell tumors [165]. The CLDN18.2-targeting antibodies zolbetuximab (IMAB362; claudiximab) and NBL-015 were also studied in gastroesophageal cancer [166] and in patients with advanced solid tumors [167], respectively. Notably, NBL-015 was granted orphan-drug designation (ODD) by the U.S. Food and Drug Administration (FDA) for the treatment of pancreatic and gastric cancers, including cancer of the gastroesophageal junction [167].
The ODD status has also been granted to I-Mab's TJ-CD4B, a first clinical-stage bispecific antibody that binds to CLDN18.2 and the co-stimulatory molecule 4-1BB on T cells to exert a tumor-killing effect in the setting of gastric cancer [168]. Of note, the anti-JAM-C mAb H225 abolished mantle cell lymphoma cell engraftment in a xenograft model [169]. An injection of the anti-JAM-A mAb 6F4 significantly inhibited the growth of epidermoid carcinoma xenograft models of MCF-7 and A431 cells [170]. Data that summarize the current results and status of anti-CLDN antibodies in clinical trials are reviewed in Table 1. Moreover, engineered Clostridium perfringens enterotoxin (CPE)-related molecules, including toxin-conjugated CPE fragments, have demonstrated antitumor effects. For instance, an injection of a CLDN4-targeting molecule, consisting of the fusion of the C-terminal fragment of CPE (C-CPE) and the protein synthesis inhibitory factor (PSIF) derived from Pseudomonas aeruginosa exotoxins, reduced tumor growth in vivo [174]. Another fusion molecule that targets CLDN4 in ovarian cancer cells was also described, with CPE being fused to TNF at its NH2-terminal end [175]. Targeted gene therapy of CLDN3- and/or CLDN4-overexpressing colon cancer cell lines was also described through the use of an optimized CPE-expressing vector that functions as a targeted suicide gene therapy, resulting in rapid and effective tumor cell killing in vitro and in vivo [176]. Nevertheless, CPE immunogenicity and potential toxicity might limit its clinical applications [177]. This could be overcome with local administration, or by adopting alternative technologies, such as the development of monoclonal antibodies that target these TJ proteins, as mentioned above. Other treatment modalities for cancer immunotherapy include the incorporation of a panel of engineered CLDN6 variants into the membrane of retrovirus-derived virus-like particles (VLPs), eliciting complement-dependent cytotoxicity in solid tumors [178]. The latter was also noted upon the administration of a combination of the measles virus and the CLDN6 tumor vaccine [179].
Of note, the antitumoral effect of the anti-JAM-C antibody was mediated through the inhibition of ERK1/2 phosphorylation [169]. Greater attention needs to be paid to the signaling pathways implicated in order to thoroughly understand the molecular mechanisms of action that underlie the effectiveness of these therapeutic tools. In particular, this could allow the potential repositioning of some of these therapeutic antibodies in the setting of incurable or resistant malignancies.
Conclusions and Perspectives
A large body of research in the last decade has shown that TJ proteins are not only statically expressed in tight junctions to contribute to barrier function, but are dynamically involved in a wide array of cellular processes that regulate proliferation, migration, plasticity, and differentiation, all of which are central to cancer initiation and progression. The differential expression of TJ proteins in cancer, combined with gain- and loss-of-function studies, has unveiled their important functional role in carcinogenesis in a tissue- and context-dependent manner, as well as during diverse stages of cancer progression, including invasion, metastasis, or relapse. Pre-clinical and clinical studies that target several members of the TJ protein family have reported the targetable properties of these molecules, as well as their safety and efficacy. Furthermore, recent clinical studies using monoclonal antibodies have demonstrated that TJ proteins are a valuable target to improve the outcome of solid tumors.

Acknowledgments: The illustrations in the figures were created with BioRender.com, which was accessed on 15 July 2022.
Conflicts of Interest:
Inserm, the University of Strasbourg, Strasbourg University Hospitals, have filed patents and patent application using CLDN1-specific antibodies for the prevention and treatment of HCC (led by TFB). TFB is a founder, shareholder and advisor of Alentis Therapeutics, developing monoclonal antibodies for the treatment of fibrotic diseases and cancer.
NLPeer: A Unified Resource for the Computational Study of Peer Review
Peer review constitutes a core component of scholarly publishing; yet it demands substantial expertise and training, and is susceptible to errors and biases. Various applications of NLP for peer reviewing assistance aim to support reviewers in this complex process, but the lack of clearly licensed datasets and multi-domain corpora prevents the systematic study of NLP for peer review. To remedy this, we introduce NLPeer, the first ethically sourced multi-domain corpus of more than 5k papers and 11k review reports from five different venues. In addition to the new datasets of paper drafts, camera-ready versions and peer reviews from the NLP community, we establish a unified data representation and augment previous peer review datasets to include parsed and structured paper representations, rich metadata and versioning information. We complement our resource with implementations and analysis of three reviewing assistance tasks, including a novel guided skimming task. Our work paves the path towards systematic, multi-faceted, evidence-based study of peer review in NLP and beyond. The data and code are publicly available.
Introduction
Research publication is the primary unit of scientific communication. To ensure publication quality and to prioritise research outputs, most scientific communities rely on peer review (Johnson et al., 2018) - a distributed procedure where independent referees determine if a manuscript adheres to the standards of the field. Despite its utility and wide application, peer review is an effortful activity that requires expertise and is prone to bias (Tomkins et al., 2017; Lee et al., 2013; Stelmakh et al., 2021). An active line of research in NLP for peer review strives to address these challenges by supporting the underlying editorial process (e.g. Price and Flach, 2017; Kang et al., 2018; Shah, 2019), decision making (e.g. Shen et al., 2022; Dycke et al., 2021; Ghosal et al., 2019), review writing (Yuan et al., 2022), and by studying review discourse (e.g. Kennard et al., 2022; Kuznetsov et al., 2022; Hua et al., 2019; Cheng et al., 2020; Ghosal et al., 2022b).

Figure 1: NLPEER unites openly licensed datasets from different research communities, reviewing systems and time periods, including three previously unreleased text collections: ARR-22, COLING-20 and F1000-22.
Despite the methodological advancements, several factors prevent NLP research for peer review at large. The computational study of peer review lacks a (1) solid data foundation: reviewing data is rarely public and comes with legal and ethical challenges; existing sources of peer reviewing data and the derivative datasets are not licensed, which legally prevents reuse and redistribution (Dycke et al., 2022). Peer reviewing practices vary across research communities (Walker and Rocha da Silva, 2015;Bornmann, 2011) -yet the vast majority of NLP research in peer review so far focused on a few machine learning conferences that make their data available through the OpenReview.net platform (e.g. Kennard et al., 2022;Shen et al., 2022). A (2) multi-domain perspective on peer review is thus missing, and the transferability of findings between different communities and reviewing workflows remains unclear. Finally, a (3) unified data model for representing peer reviewing data is lacking: most existing datasets of peer reviews adhere to task-specific data models and formats, making it hard to develop and evaluate approaches for peer review support across datasets and domains.
To address these issues, we introduce NLPEER. We apply a state-of-the-art workflow (Dycke et al., 2022) to gather ethically and legally compliant reviewing data from natural language processing (NLP) and computational linguistics (CL) communities. We complement it with multi-domain reviewing data from the F1000 Research platform and historical data from the openly licensed portion of the PeerRead (Kang et al., 2018) corpus. The resulting resource (Figure 1) is the most comprehensive collection of clearly licensed, open peer reviewing datasets available to NLP to date.
NLPEER includes peer reviews, paper drafts and revisions from diverse research fields and reviewing systems, over the time span from as early as 2012 until 2022. This -for the first time -enables systematic computational study of peer review across domains, communities, reviewing systems and time. The paper revisions make NLPEER well-suited for the study of collaborative text work.
To facilitate the analysis, we unify the datasets under a common data model that preserves document structure, non-textual elements and is well suited for cross-document analysis. To explore the new possibilities opened by our resource, we conduct cross-domain experiments on review score prediction (to encourage consistent review scores), pragmatic labeling (to encourage balanced reviews) and the novel guided skimming for peer review task (to help guide review focus), along with easy-to-extend implementations. Our results indicate substantial variation in performance of NLP assistance between venues and research communities, point at synergies between different approaches to review structure analysis, and pave the path towards exploiting cross-document links between peer reviews and research papers for language model benchmarking.
In summary, this work contributes (1) the first unified, openly licensed, multi-domain collection of datasets for the computational study of peer review, including (2) two novel datasets of peer reviews from the NLP and CL communities, and complemented by a (3) descriptive analysis of the resulting data and (4) extensive experiments in three applied NLP tasks for peer reviewing assistance.
Peer Reviewing Terminology
During peer review, authors submit their paper to the editors, often via a peer reviewing platform. The manuscript is distributed among reviewers who produce reviewing reports - or reviews - evaluating the submission. As a result, the submission might be accepted, rejected or further adjusted, producing a revision. Some reviewing systems allow additional exchange, e.g. author responses and meta-reviews. However, here we focus on papers, reviews and revisions, which constitute the core of peer reviewing data.
There exist different implementations of peer review, including blind review (where reviewer and/or author identities are hidden to promote objectivity) and open review (where identities are known). Moreover, reviewing standards and practices vary by research community, venue and publication type. Review forms and templates are one important varying factor; in the following, we differentiate between unstructured (only one main text) and structured (several predefined sections) review forms. We refer to the particular implementation of peer review at a certain venue as its reviewing system. Along with the natural domain shift based on research field, the reviewing system contributes to the composition of the peer reviewing data it produces.
Existing Data
Peer review is a hard, time-consuming, subjective task prone to bias, and an active line of work in NLP aims to mitigate these issues. NLP for peer review crucially depends on the availability of peer reviewing data - yet open data is scarce. The majority of existing NLP studies on peer review (e.g. Kang et al., 2018; Yuan et al., 2022; Hua et al., 2019; Kennard et al., 2022; Cheng et al., 2020; Ghosal et al., 2022a; Shen et al., 2022) draw their data from a few machine learning conferences on the OpenReview.net platform, which - at the time of writing - does not attach explicit licenses to the publicly available materials, making reuse of the derivative data problematic. In addition, over-focusing on a few selected conferences in machine learning limits the utility of NLP approaches to peer review for other areas of science and reviewing systems, and leaves the question of cross-domain applicability open. While the recent F1000RD corpus (Kuznetsov et al., 2022) addresses some of these issues, it lacks data from the NLP and CS communities, and does not contain blind reviewing data, although single- and double-blind review are arguably standard in most research fields (Johnson et al., 2018).
Ethical, copyright-and confidentiality-aware collection of peer reviewing data (Dycke et al., 2022), and transformation of this data into unified, research-ready datasets, require major effort. The purpose of NLPEER is to provide the NLP community a head-start in ethically sound, cross-domain and cross-temporal study of peer review.
NLP Approaches to Aiding Peer Review
Recent years have seen a surge in NLP approaches to aid peer review; ranging from early works studying reviewing scores (Kang et al., 2018) to more recent approaches attempting to align author responses to review reports automatically (Kennard et al., 2022). The goal of our work is to provide diverse and clearly licensed source data to support the future studies in NLP for peer review assistance.
In addition, we explore the potential for crossdomain NLP-based peer reviewing assistance on three tasks detailed in Section 5. Review score prediction has been first introduced in (Kang et al., 2018) and further explored in (Ghosal et al., 2019). While following a similar setting, our study contributes to this line of research by exploring the task of score prediction across domains and research communities and provides new insights on the factors that impact the transferability of the task between reviewing systems. Pragmatic labeling has been previously explored in (Hua et al., 2019;Kuznetsov et al., 2022;Kennard et al., 2022) and is usually cast as a discourse labeling task on free-form peer review text. Yet, many reviewing systems enforce the same discourse structure by employing structured peer review forms. Complementing the prior efforts, we explore the potential synergies between the two approaches. The guided skimming for peer review task is novel and builds upon the recent work in cross-document modeling for peer review by Kuznetsov et al. (2022) and other related works (Qin and Zhang, 2022;Ghosal et al., 2022a).
Datasets
NLPEER consists of five datasets: two datasets from previous work, two entirely new datasets from the NLP domain, as well as an up-to-date crawl of the F1000Research platform.
Prior datasets Parts of the PeerRead data (Kang et al., 2018) have been created with explicit consent for publication and processing by both reviewers and authors, and we include them in our resource. ACL-17 and CONLL-16 contain peer reviews and papers from the NLP domain, stem from a double-blind reviewing process, and use unstructured review forms with a range of numerical scores, e.g. substance and soundness. The data is licensed under CC-BY.
F1000-22 is collected from F1000Research - a publishing platform with an open post-publication reviewing workflow. Unlike other datasets in NLPEER, F1000-22 covers a wide range of research communities, from scientific policy research to medicine and public health. The reviewing process at F1000Research is fully open, with reviewer and author identities known throughout the process, contributing to the diversity of NLPEER. F1000Research uses unstructured peer reviewing forms coupled with a single 3-point overall score (approve, reject, approve-with-reservations). The paper and review data are distributed under a CC-BY license, which we preserve.

Table 1: For the datasets in NLPEER, we report the mean and standard deviation of per-paper and per-review statistics. We report % of accepted papers; for F1000-22 this is the % of version one drafts with unanimous scores of "accept", as acceptance conceptually does not exist here. Total statistics are summed and averaged (*), respectively.
COLING-20 (New) was collected via a donation-based workflow at the 28th International Conference on Computational Linguistics, in the NLP and computational linguistics domain. The data stems from a double-blind reviewing process; review forms include free-form report texts and multiple numerical scores, e.g. relevance and substance. We release this data under the CC-BY-NC-SA 4.0 license.
ARR-22 (New) was collected via the donation-based workflow proposed by Dycke et al. (2022) at ACL Rolling Review (ARR) - a centralized system of the Association for Computational Linguistics. NLPEER includes peer reviewing data for papers later accepted at two major NLP venues - ACL 2022 (the 60th Annual Meeting of the ACL) and NAACL 2022 (the 2022 Annual Meeting of the NAACL) - covering submissions to ARR from September 2021 to January 2022. The reviewing process at ARR is double-blind and uses standardized structured review forms that include strengths and weaknesses sections, overall and reproducibility scores, etc. We release this data under the CC-BY-NC-SA 4.0 license.
Unification
The diverse source datasets in NLPEER were cast into a unified data model (Figure 2). Each paper is represented by the submission version and a revised camera-ready version, and is associated with one or more review reports. In the case of F1000-22, all revisions and their reviews are present. To unify the papers, we converted all drafts and revisions in NLPEER into intertextual graphs (ITG) - a recently proposed general document representation that preserves document structure, cross-document links and layout information (Kuznetsov et al., 2022). We extended the existing ITG parser and combined it with GROBID (GRO, 2008-2023) to process PDF documents and preserve line number information whenever available. Papers were supplemented with the PDF, XML and TEI source whenever available. Reviews were converted into a standardized format that accommodates structured and free-text reviews and arbitrary sets of scores. As paper revisions were not collected for some of the NLPEER datasets, we have complemented existing data with camera-ready versions obtained via the ACL Anthology. Papers, reviews and datasets are accompanied with metadata, e.g. paper track information and licenses for individual dataset items. Further dataset creation details are provided in Appendix A.
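To make the unified representation more tangible, the following is a minimal sketch of how a paper with its reviews and revisions could be held in memory. The field names and types are illustrative assumptions and do not reproduce the actual NLPEER schema or the ITG format.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ReviewReport:
    """One review; `sections` holds structured fields (e.g. "strengths")
    or a single "main" entry for unstructured review forms."""
    review_id: str
    sections: Dict[str, str]
    scores: Dict[str, float]                      # e.g. {"overall": 3.5}
    meta: Dict[str, str] = field(default_factory=dict)

@dataclass
class PaperRecord:
    """A submission with its parsed revisions and associated reviews."""
    paper_id: str
    versions: List[str]                           # paths to parsed document files, v1 first
    reviews: List[ReviewReport]
    meta: Dict[str, str] = field(default_factory=dict)

# toy usage
review = ReviewReport("r1",
                      sections={"strengths": "...", "weaknesses": "..."},
                      scores={"overall": 3.0, "reproducibility": 4.0})
paper = PaperRecord("p1", versions=["p1_v1.json", "p1_v2.json"], reviews=[review])
```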
Ethics, Licensing and Personal Data
All datasets included in NLPEER are distributed under an open Creative Commons license and were collected based on explicit consent, or an open license attached to the source data. The ARR-22 data collection process allowed reviewers to explicitly request attribution, and this information is included in the dataset. F1000Research uses open reviewer and author identities throughout the reviewing process; this information is preserved in F1000-22. Finally, the authors of camera-ready publications included in NLPEER are attributed. In all other cases, review reports and paper drafts throughout NLPEER are stripped of personal metadata; in addition, the reviews of all datasets apart from F1000-22 have been manually verified by at least one expert from our team to not contain personal information. Due to the practices of scientific publishing in the selected communities, NLPEER only includes texts written in English.
Statistics
The datasets in NLPEER originate from different domains and peer reviewing systems, each with its own policies, reviewing guidelines, and community norms. While a comprehensive comparison of publishing and reviewing practices lies beyond the scope of this work, this section provides a brief overview of the key statistics for NLPEER datasets. Table 1 reports the textual statistics of NLPEER, comprising more than 4M peer review tokens in total, with more than 400K review tokens in the NLP/CL domain. The resource is diverse, and we observe high variability in review and paper composition among the datasets. For example, papers in F1000-22 come from a wide range of domains and cover a wide range of article types from case reports to position papers, reflected in the low number of sentences per paper (158) with high variance (±90.3), compared to the other, NLP-based datasets with roughly 200 sentences per paper and lower variance. Yet, we note that review lengths exhibit smaller variance across the datasets. Every paper in NLPEER is associated with at least one review and one revision, making it suitable for the study of cross-document relationships.
Scores and Acceptance
The data collection workflow impacts the proportion of accepted papers in the dataset: from 100% in ARR-22 which uses a strict confidentiality-aware collection procedure, to 34% in F1000-22, where manuscripts are made available prior to peer reviewing and acceptance. Yet, acceptance per se is not an accurate proxy of review stance: a paper that is eventually accepted can receive critical reviews. To investigate, we turn to reviewing scores. Each of the reviewing systems in NLPEER requires reviewers to assign a numerical rating to the papers. However, scoring scales and semantics differ across datasets: the NLP and CL conferences employ fine-grained scales (5-and 9-point), while F1000 Research uses a very coarse scale (3-point). Figure 3 shows the normalized distribution of overall scores for each dataset in NLPEER. We observe that scores near the arguably most interesting region around the borderline to acceptance are well-represented, with a skew towards the positive end of the review score scale otherwise. For the donation-based datasets (all except F1000-22), this is likely due to participation bias (Dycke et al., 2022); for F1000-22, it is a result of the post-publication, revision-oriented reviewing workflow. This fundamental difference between the peer reviewing systems in NLPEER makes it an interesting and challenging target for the computational study of reviewing scores across reviewing systems and research communities.
Domain Structure The vocabulary of a text collection is an important proxy for describing its language variety (Plank, 2016), and higher vocabulary overlap indicates shared terminology and topical alignment between datasets. To investigate the domain structure of NLPEER, we measure the vocabulary overlap of review texts based on the Jaccard metric for the top 10% most frequent lemmas excluding stopwords, similar to Zhang et al. (2021). As Figure 4 demonstrates, reviews from the NLP/CL communities (ARR-22, ACL-17, COLING-20) are most similar (0.37-0.53), while F1000-22 is most similar to ARR-22, with a notably lower score than the within-community comparisons (∆ ≈ 0.25). This illustrates that despite domain differences, the review reports do share general wording (e.g. "model", "method", "author") that is independent from lower-frequency field-specific terminology (e.g. "cross-entropy", "feed-forward", "morpheme"). NLPEER includes datasets with linguistically diverse review reports while maintaining a domain-independent base vocabulary characteristic for the genre of peer reviews; the paper vocabulary overlap follows a similar trend (see Figure 7 in the Appendix). This makes the investigation of cross-domain NLP for reviewing assistance a promising research avenue.
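As a rough illustration of this measurement, the snippet below computes the Jaccard overlap of the top-10% most frequent lemmas between two review collections. It assumes the texts have already been tokenized and lemmatized; the stopword list and toy inputs are placeholders rather than the actual NLPEER preprocessing.

```python
from collections import Counter
from typing import Iterable, List, Set

def top_lemmas(docs: Iterable[List[str]], stopwords: Set[str], frac: float = 0.1) -> Set[str]:
    """Return the top `frac` most frequent lemmas across all documents, excluding stopwords."""
    counts = Counter(lemma for doc in docs for lemma in doc if lemma.lower() not in stopwords)
    k = max(1, int(len(counts) * frac))
    return {lemma for lemma, _ in counts.most_common(k)}

def jaccard(a: Set[str], b: Set[str]) -> float:
    """Jaccard similarity between two vocabularies."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

# toy usage: two "datasets" of lemmatized review texts
stop = {"the", "a", "of", "and", "is"}
arr_reviews = [["model", "method", "author", "cross-entropy"]]
f1000_reviews = [["model", "method", "patient", "cohort"]]
print(jaccard(top_lemmas(arr_reviews, stop), top_lemmas(f1000_reviews, stop)))
```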
Reviewing Assistance with NLPEER
NLPEER is a unique resource for the study of computational assistance during peer review authoring.
To demonstrate its versatility, we define three tasks - review score prediction, pragmatic labeling and guided skimming - that are anchored in realistic assistance scenarios and targeted towards helping junior reviewers to improve their reviews. Figure 5 illustrates the tasks. For readability, we use s to denote sentences, par for paragraphs, and sec for sections, for reviews (e.g. s^R) and papers (e.g. s^P), respectively. We report the main results here, and refer to Appendix B for details and to the Limitations section for an ethical discussion of the risks and opportunities of these tasks.
General Setup
Baseline Models Our experiments aim to assess the difficulty of each assistance task across the subsets of NLPEER. Hence, we base our experiments on well-established large language models (LLMs) from the general and scientific domain - RoBERTa and SciBERT.

Fine-tuning and Evaluation For each task, we split the data into a training (70%), development (10%) and test set (20%), and fine-tune the pre-trained LLM on the training split using a task-specific prediction head. To account for small dataset sizes, we fine-tune each model with small learning rates (in the range [1 × 10^-5, 3 × 10^-5]) and a linear warm-up schedule, and allow training for up to 20 epochs, following existing recommendations (Mosbach et al., 2021). We repeat the experiments ten times with different random seeds and report the median and standard deviation of each performance metric across the runs. All experiments were run on a single NVIDIA A100 GPU, summing to roughly seven days of computing time altogether.
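A minimal sketch of the per-seed training configuration described above, expressed with Hugging Face TrainingArguments; the batch size, warm-up fraction and output paths are assumptions not stated in the text.

```python
from transformers import TrainingArguments

def make_args(seed: int) -> TrainingArguments:
    # Values from the setup above: a small learning rate in [1e-5, 3e-5],
    # a linear warm-up schedule, and up to 20 epochs. Batch size and the
    # warm-up fraction are illustrative assumptions.
    return TrainingArguments(
        output_dir=f"runs/seed_{seed}",
        learning_rate=2e-5,
        num_train_epochs=20,
        warmup_ratio=0.1,
        lr_scheduler_type="linear",
        per_device_train_batch_size=16,
        seed=seed,
    )

# ten repetitions with different random seeds; the median and standard
# deviation of each metric are then reported across the runs
args_per_run = [make_args(seed) for seed in range(10)]
```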
Review Score Prediction
Reviewer rating behavior is heterogeneous; the correspondence between review ratings and review texts often lacks consistency (Wang and Shah, 2019). The differences in rating assignment might exist both on the individual level (i.e. due to lack of experience) and on the community level (i.e. when reviewing for a large multi-track conference). Here, a review score prediction model can suggest a score to the reviewer that is typical for the given report and sub-community. This suggestion can serve as feedback to revise inconsistent scores.
Task. Following prior work (Kang et al., 2018; Ghosal et al., 2019; Stappen et al., 2020), we cast review score prediction (RSP) as a regression task where, given the review text R and the paper abstract a^P, the model should predict the overall review score q^R mapped to the respective scale. We provide the paper abstract as an input to the model to allow it to contextualize the review text; this way, the model can resolve coreferences and weigh review statements in relation to the paper. We measure performance by the mean root squared error (MRSE). As one key challenge of review scoring is the mapping of an overall assessment to a discrete score scale, we also measure the classification performance of the regression model in terms of the F1-score by splitting the real-valued outputs into equally sized intervals according to the respective review score scale. We highlight that this task framing does not account for specific score semantics, but in exchange permits direct model transfer and comparison across domains. We leave the in-detail study of RSP as a classification task, for instance using in-context learning (Brown et al., 2020) on score semantic labels, to future work.
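As a sketch of the classification view of RSP, the snippet below discretizes normalized gold and predicted scores into equally sized intervals and scores them with macro-F1. The number of intervals per venue scale is passed in, and the example values are made up.

```python
import numpy as np
from sklearn.metrics import f1_score

def to_bins(scores: np.ndarray, n_points: int) -> np.ndarray:
    """Map normalized scores in [0, 1] to one of `n_points` equally sized
    intervals (e.g. n_points=5 for a 5-point review scale)."""
    edges = np.linspace(0.0, 1.0, n_points + 1)
    return np.clip(np.digitize(scores, edges[1:-1]), 0, n_points - 1)

pred = np.array([0.10, 0.55, 0.90])   # regression outputs, already in [0, 1]
gold = np.array([0.00, 0.50, 1.00])   # normalized gold scores
print(f1_score(to_bins(gold, 5), to_bins(pred, 5), average="macro"))
```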
Setup. For each dataset in NLPEER, we consider all reviews of the initial paper draft; this means that for F1000-22 we include only the reviews of the first paper version. As the input, we concatenate the review report sections sec^R_1, sec^R_2, ... ∈ R and the paper abstract a^P, separated by a special token. To account for the limitations of the LLMs, we truncate the resulting text to 512 tokens. For training, we normalize the scores q^R to the interval [0, 1] considering the maximum and minimum scores of the respective rating scale. The same paper can receive multiple reviews; we ensure that all reviews of the same paper belong to the same data split. For representative sampling, we ensure that the distribution of reviews per paper is similar across splits (see Appendix B).
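The input construction and score normalization described above could look roughly as follows; the separator token and the whitespace-based truncation stand in for the actual model tokenizer and are assumptions.

```python
def build_input(review_sections, abstract, sep="</s>", max_tokens=512):
    """Concatenate review sections and the paper abstract, separated by a
    special token, then truncate to the model's input limit (whitespace
    tokens are used here as a stand-in for the model tokenizer)."""
    text = f" {sep} ".join(list(review_sections) + [abstract])
    return " ".join(text.split()[:max_tokens])

def normalize_score(score, scale_min, scale_max):
    """Map an overall review score onto [0, 1] given the venue's rating scale."""
    return (score - scale_min) / (scale_max - scale_min)

x = build_input(["Strengths: clear writing.", "Weaknesses: small evaluation."],
                "We present a corpus of peer reviews ...")
y = normalize_score(4, scale_min=1, scale_max=5)   # e.g. a 5-point scale
```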
Results. Table 2 summarizes score prediction results for the best-performing LLM by dataset. Neural models substantially outperform the mean score baseline for ARR-22 and F1000-22, yet for ACL-17, CONLL-16 and COLING-20 they perform on par or worse. We note that the latter datasets also have the lowest number of samples. To investigate a potential connection, we trained a RoBERTa model on 30% of the ARR-22 training set, resulting in a median 0.44 MRSE and 0.24 F1-macro score on the test set - on par with the mean baseline, and comparable to the similarly sized ACL-17. This suggests that the dataset size has the strongest impact on score prediction performance, encouraging future reviewing data collection efforts, as well as the study of few-shot transfer between high- and low-resource peer reviewing systems and domains.
Pragmatic Labeling
The core function of peer review is to assess and suggest improvements for the work at hand: a good review summarizes the work, lists its strengths and weaknesses, makes requests and asks questions. Although some venues employ structured review forms to encourage review writing along these dimensions, issues of imbalanced feedback (e.g. focus only on weaknesses) (Hua et al., 2019), inconsistent uses of review sections, and lack of guidance for free-form reviews persist. Here, a pragmatic labeling model can provide feedback to reviewers to potentially revise, balance or rearrange their review report. Following up on prior studies on argumentative and pragmatic labeling schemata for free-form peer reviews (Hua et al., 2019;Kuznetsov et al., 2022, etc.), we use NLPEER to explore the connection of pragmatic labels and structured review forms for the purpose of this assistance scenario.
Task We cast pragmatic labeling as a sentence classification task, where given a review sentence s^R_i, the model should predict its pragmatic label c_i ∈ C. For this experiment we use data from two distinct reviewing systems: F1000Research employs free-form full-text reviews, and a subset of F1000-22 has been manually labeled with sentence-level pragmatic labels in the F1000RD corpus (Kuznetsov et al., 2022). ACL Rolling Review, on the other hand, uses structured review forms split into sections, which we align to the F1000RD classes: Strength, Weakness, Request (Todo in F1000RD) and Neutral (Recap, Other and Structure in F1000RD); a sketch of this alignment is shown below. See Appendix B for more details.
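The alignment could be expressed as a simple lookup, sketched here; the ARR-22 review section names used as keys are assumptions for illustration, while the F1000RD class grouping follows the mapping stated above.

```python
# Illustrative ARR-22 review form sections mapped to pragmatic labels.
ARR_SECTION_TO_LABEL = {
    "paper_summary": "Neutral",
    "summary_of_strengths": "Strength",
    "summary_of_weaknesses": "Weakness",
    "comments_suggestions_and_typos": "Request",
}

# F1000RD annotation classes collapsed onto the shared label set.
F1000RD_TO_LABEL = {
    "Strength": "Strength",
    "Weakness": "Weakness",
    "Todo": "Request",
    "Recap": "Neutral",
    "Other": "Neutral",
    "Structure": "Neutral",
}

def label_arr_sentence(section_name: str) -> str:
    """Label an ARR review sentence by the section it appears in."""
    return ARR_SECTION_TO_LABEL.get(section_name, "Neutral")
```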
Setup For F1000RD we consider all 5k labeled sentences mapped to the respective classes. For ARR-22, we take review sentences longer than five characters (to filter out splitting errors), and label them by their respective review section, resulting in around 14k labeled sentences. For each dataset, we split the instances at random disregarding their provenance to maximize the diversity across splits.
Results Table 3 presents the pragmatic labeling results in- and cross-dataset. Expectedly, neural models outperform the majority baseline in-dataset by a large margin. Yet, cross-dataset application also yields non-trivial results substantially above the in-dataset majority baseline, despite the domain shift between the ARR-22 and F1000RD data. This has important implications, as it suggests that data from structured reviewing forms (entered by reviewers as part of the reviewing process) can be used to train free-text pragmatic labeling models, thereby significantly reducing annotation costs. Nevertheless, the gap to the in-distribution supervised model remains, constituting a promising target for follow-up work, which would need to disentangle the effects of domain, task and reviewing system shift on pragmatic labeling performance.
Guided Skimming for Peer Review
Review writing typically requires multiple passes over the paper to assess its contents. Different paper types (e.g. dataset or method papers) require different reviewing strategies (Rogers and Augenstein, 2020); hence, the regions that require most scrutiny and rigor during reading vary across papers. Suggesting passages most relevant to the required reviewing style could encourage higher quality of reviewing and serve as a point of reference to junior reviewers. We model this scenario via the novel guided skimming for peer review task, in line with Fok et al. (2023) who integrate passage recommendations into reading environments, reporting improved reading performance.
Task We model the task as follows: given a paper, the model should rank its paragraphs par^P by relevance to the critical reading process. The training data for this task is derived from explicit links, e.g. mentions of line numbers or sections in the reviews, which can be reliably extracted from reports in a rule-based fashion (see Appendix B.3 for details) and used to draw cross-document links between review report sentences s^R and the paragraphs of the papers they discuss par^P (Kuznetsov et al., 2022). Paper paragraphs with incoming explicit links are then considered "review-worthy", and the task is to rank such paragraphs above others for previously unseen papers with no available review reports. While the resulting task is a simplification of the actual skimming process during peer review, it is a first step towards exploring review-paper links for modeling actual reviewer focus. We encourage future work to follow up on this line of inquiry.
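A rule-based extractor of this kind might rely on regular expressions such as the ones below; these patterns are purely illustrative and are not the extractor used by Kuznetsov et al. (2022).

```python
import re

# Illustrative patterns for line-number and section mentions in review text.
LINE_REF = re.compile(r"\b(?:l(?:ine)?s?\.?\s*)(\d{1,4})(?:\s*[-–]\s*(\d{1,4}))?", re.I)
SEC_REF = re.compile(r"\bsection\s+(\d+(?:\.\d+)*)", re.I)

def explicit_refs(review_sentence: str):
    """Extract explicit line-number ranges and section references from a review sentence."""
    lines = [(int(m.group(1)), int(m.group(2) or m.group(1)))
             for m in LINE_REF.finditer(review_sentence)]
    sections = SEC_REF.findall(review_sentence)
    return {"lines": lines, "sections": sections}

print(explicit_refs("L123-125: the claim in Section 3.2 is not supported."))
# {'lines': [(123, 125)], 'sections': ['3.2']}
```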
Setup We use the ARR-22 and ACL-17 datasets, as they are of sufficient size and offer line numbers in the paper drafts. We extend and apply the explicit link extractor proposed by Kuznetsov et al. (2022), resulting in a total of 229 papers with 743 relevant passages and an average of 3.24 ± 3.2 linked passages per paper for ARR-22, and 87 papers, 308 passages and 3.54 ± 3.13 linked paragraphs per paper for ACL-17. We fine-tune LLMs using a binary classification objective, batching linked and unlinked paragraphs of the same paper together, and rank by the output softmax score of the positive class. We compare to a random baseline. Datasets are split by paper while ensuring a similar distribution of passages per paper across splits (see Appendix B).
Results Figure 6 summarizes the Precision and Recall at k for the best-performing SciBERT model and the random baseline on ACL-17. Both recall and precision exceed the random baseline by a large margin and for all considered k. At around k = 3, roughly 50% of the relevant passages are retrieved, while at the same rank around 21% of the retrieved paragraphs are actually linked by the reviewers. The mean reciprocal rank (MRR) measures the average position of the first relevant result within the rankings. SciBERT achieves an MRR of 0.41 ± 0.05 on ACL-17 and 0.34 ± 0.03 on ARR-22, outperforming the random baseline by 0.23 and 0.18, respectively. Overall, the LLMs perform substantially above random despite discarding all context information of a paragraph (see Appendix C for a detailed analysis). While guided skimming is a non-trivial task, the above-random performance of the given LLM baselines shows promise for further research in this direction, which could also include an in-depth study considering the context of paragraphs - e.g. their position in the logical structure of the paper.
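For reference, the ranking metrics reported above can be computed as in the following sketch; the paragraph identifiers and the toy example are made up.

```python
from typing import List, Sequence

def precision_recall_at_k(ranked: Sequence[str], relevant: set, k: int):
    """Precision and recall of the top-k ranked paragraphs for one paper."""
    hits = [p for p in ranked[:k] if p in relevant]
    precision = len(hits) / k
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

def mrr(rankings: List[Sequence[str]], relevant_sets: List[set]) -> float:
    """Mean reciprocal rank of the first reviewer-linked paragraph per paper."""
    reciprocal_ranks = []
    for ranked, relevant in zip(rankings, relevant_sets):
        rank = next((i + 1 for i, p in enumerate(ranked) if p in relevant), None)
        reciprocal_ranks.append(1.0 / rank if rank else 0.0)
    return sum(reciprocal_ranks) / len(reciprocal_ranks)

ranked = ["par3", "par1", "par7", "par2"]   # model ranking for one paper
relevant = {"par1", "par2"}                 # paragraphs linked by reviewers
print(precision_recall_at_k(ranked, relevant, k=3), mrr([ranked], [relevant]))
```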
Further Applications
NLPEER includes rich representations and metadata for a large number of inter-linked peer review and manuscript texts. This enables new NLP studies for peer reviewing assistance and beyond. In this section we highlight and critically reflect on further applications of NLPEER in general NLP research and in practice. Peer review reports are expert-written texts that typically reflect deep understanding of the underlying paper. Hereby, they can serve as a valuable resource for distant or direct supervision of general-purpose machine learning models. For instance, paper summaries in review reports can be used as a basis for a challenging document summarization task; aspect score disagreements between reviews may serve as a weak supervision signal for aligning arguments for and against the paper across reviews.
On the other hand, NLPEER and peer reviewing data in general might be exploited to train models with dual uses and practical risks. One such example is automatic review generation, where given a paper the model should generate a review report. While a resulting model may be used benevolently as a paper-writing aid or an analytical device for the study of peer review, using it to avoid the reviewing effort or replace the reviewer bears a wide range of risks. As it is implausible that current state-of-the-art NLP models could produce a non-generic, meaningful review of a novel paper (Yuan et al., 2022), such applications could compromise the academic quality assurance process. Due to the central role of peer review in research and publishing, we emphasize that future work should carefully reflect on the real-world impact of any models developed based on NLPEER.
Conclusion and Intended Use
We have presented NLPEER, the first clearly licensed large-scale reference corpus for the study of NLP for peer review. NLPEER opens many new opportunities for the empirical study of scholarly communication. For NLP, it allows developing new annotated datasets based on clearly licensed, richly formatted, unified corpora that span multiple research domains, reviewing systems and time periods. It can be used as a testing ground for domain transfer (Gururangan et al., 2022; Chronopoulou et al., 2022), and enables the study of cross-document relationships between papers, paper revisions and peer reviews (Kuznetsov et al., 2022). From the meta-scientific perspective, it allows comparing peer reviewing practices across research communities, and can provide crucial insights into how researchers review and revise scientific texts. Our task implementations and results can guide the development of NLP assistance systems and can be used for systematically comparing pre-trained language models in the context of peer reviewing applications. Finally, our resource can serve as a blueprint for future aggregate resources for the study of peer review in NLP.
for their support during the implementation of the data collection at ARR. We express our gratitude to Nuria Bel for her feedback during the first iterations of our data collection initiative at COLING-2020, and to Richard Gerber for helping us with early technical challenges in SoftConf. Last but not least, we would like to thank our reviewers for their valuable suggestions, as well as the community members who have engaged in a lively debate on this initiative and provided us with both encouragement and useful feedback.
This research work has been funded by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE. It is co-funded by the European Union (ERC, InterText, 101054961). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them. Finally, parts of this work have been co-funded by the German Research Foundation (DFG) as part of the PEER project (grant GU 798/28-1). This work is part of the InterText initiative (https://intertext.ukp-lab.de/).
Limitations
While we hope that our approach to data collection can serve as a benchmark for future NLP studies beyond peer review, we deem it equally important to explicitly outline the potential risks and limitations of NLPEER and NLP for peer review in general. Our discussion below encourages future research in ethics and applied NLP for peer review; many of our considerations are not specific to peer review and are equally relevant to the applications of NLP in general.
From the data perspective, we deem it important to clearly state what NLPEER is not meant for. Our data collection campaigns for ARR-22 and COLING-20 included an explicit disclaimer on the risks of author profiling on the peer reviewing data; we stress that such applications violate the intended use of NLPEER. Furthermore, NLPEER enables a wide range of new NLP assistance tasks for peer review. Yet, we encourage future studies of NLP for peer review to reflect carefully on the potential risks and benefits of new task definitions atop peer reviewing data in general and NLPEER specifically. For instance, the full automation of peer review, i.e. the generation of review reports given a paper, bears risks and dual uses.
Considering diversity in NLP datasets, we stress that even NLPEER only covers a fraction of peer reviewing across all fields of science, and more data needs to be collected to enable fully representative NLP-based study of peer review. Due to the genre standards of scientific publishing our dataset only covers papers and reviews in English language. Multilingual scholarly document processing is overall poorly represented in NLP, and constitutes a promising avenue for future research. While our resource contains data from a wide range of domains, research in arts and humanities is under-represented due to the poor data availability. The trend towards open science and the adoption of responsible data collection practices (Dycke et al., 2022) might bring reviewing data from previously unexplored domains and languages into NLP. We stress that any direct comparison based on our corpus would need to take into account reviewing practices and guidelines adopted by the respective communities. Specifically, potential biases resulting from the donation-based collection for ARR-22 and COLING-20 should be taken into account.
From the task side, we highlight that implementations and resulting models presented here are meant to exemplify the proposed tasks, determine their technical feasibility, and serve as a starting point for developing future NLP for peer review assistance systems. As such, the provided implementations have limitations: for example, sentence-level pragmatic labels derived from structure-based ARR forms might contain noise since ARR forms group text on section level; guided skimming does not make use of implicit links, and explicit links are mostly based on line numbers and quotes, limiting the recall. Since we did not perform extensive hyperparameter search and tuning of the models, our results should not be interpreted as a claim towards superiority of a particular model, approach or reviewing system.
We highlight that high intrinsic task performance does not necessarily translate into the extrinsic utility of NLP support in real-world reviewing environments. We thus deem it crucial to study the factors that affect the success of NLP assistance for peer reviewing. This includes the study of the human-machine interaction dynamics and its desiderata; for example, review score recommendations should be accompanied by explanations. We encourage extensive research on risks of biases and errors in NLP assistance models; for instance, a review score prediction model might learn undesirable biases against certain types of papers. Review writing assistance implemented in a real reviewing system should always be accompanied by carefully designed guidelines and policies.
Finally, we invite the community to reflect on the potential societal consequences of the individual NLP assistance tasks, even if NLP models accomplish them well. To provide an example, our newly introduced guided skimming task assists during the effortful and time-intensive, yet crucial step of reading the paper under review. Although the guided skimming for peer review task models an intermediate step during reading and is intended to serve as an additional point of reference during the iterated skimming steps of peer review, such a technology might encourage reviewers to read only the paragraphs suggested by the model. We argue that this risk of "lazy reading" is independent of the technology at hand; a reviewer who is institutionally incentivized to perform reviews as quickly as possible may read a paper superficially and settle for heuristics in their assessment (Rogers and Augenstein, 2020) regardless of assistance. A greater risk, however, may be imposed by potential biases and errors of a guided skimming model, which could distract less experienced reviewers. While recent work on skimming assistance in scholarly articles (Fok et al., 2023) suggests a mature and reflected interaction of users with highlight recommendations and possible errors, this needs a specific investigation for the use case of peer review. On the other hand, a critical reading model may serve as a useful point of reference that guides reviewers to apply more scrutiny to the parts of the paper appropriate for the specific paper type, which ultimately may improve reviewing quality. We assess that the opportunities provided by the introduced review assistance tasks outweigh the potential risks in general, yet highlight that a targeted study is necessary to substantiate this assessment.
A.1 Overview
Revision Matching Our data model assumes that each paper is associated with at least one revision, yet some of the existing peer review datasets only provide the submitted drafts. To remedy this, we augment ACL-17, CONLL-16 and COLING-20 with camera-ready versions by matching the accepted paper draft titles and abstracts against the ACL Anthology.
For each of the papers in the mentioned datasets, we extracted the title and abstract either from the provided meta-data or the PDF. We then considered all papers of the respective conference in the ACL Anthology and retrieved the top five entries according to the sentence BLEU of the title and abstract with the paper at hand. Exact matches were included without manual verification; for the others, the authors checked if the papers plausibly align. When in doubt, we opted to not include the matched camera-ready version.
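A minimal sketch of this matching heuristic is given below, assuming NLTK's sentence-level BLEU and simple title/abstract dictionaries; the field names and smoothing choice are illustrative assumptions, not the released code.

```python
# Sketch: retrieve the top-5 anthology entries by sentence BLEU over
# title + abstract (field names are illustrative).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

smooth = SmoothingFunction().method1

def top_candidates(draft, anthology_entries, k=5):
    """draft / entries are dicts with 'title' and 'abstract' strings."""
    hypothesis = (draft["title"] + " " + draft["abstract"]).lower().split()
    scored = []
    for entry in anthology_entries:
        reference = (entry["title"] + " " + entry["abstract"]).lower().split()
        score = sentence_bleu([reference], hypothesis, smoothing_function=smooth)
        scored.append((score, entry))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [entry for _, entry in scored[:k]]
# Exact matches are accepted automatically; the rest are verified manually.
```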
Paper Parsing For all datasets except F1000-22, the raw paper inputs are PDFs, which are processed using GROBID (GRO, 2008-2023) and translated into the ITG data model (Kuznetsov et al., 2022) that captures the structural information (i.e. sections, subsections, etc.), linking information (i.e. citations and references) and layout information (i.e. lines and pages) of a document. For the datasets with line numbers in paper drafts (ACL-17, CONLL-16, ARR-22), we first remove line numbers for parsing and then match the line and page information heuristically to the processed papers. Although manual spot checks suggest a high quality of GROBID and ITG parsing, errors are inevitable and we provide papers both in raw and parsed form. To process F1000 Research XML papers, we extend and modify the existing parser provided in the ITG library.
Review Parsing The reviews are converted into a standardized format that supports structured and unstructured reviews, with and without scores. This means that for ARR-22, COLING-20, ACL-17, and CONLL-16 we include the rich set of review scores, and for ARR-22 the diverse review sections (strengths, weaknesses, etc.) remain intact.
Metadata and Versions NLPEER includes all available revisions of a paper. For F1000-22, each paper may have multiple revisions. Other datasets include at least the paper draft and if the paper was eventually published a camera-ready version of the paper. We associate rich metadata with each paper version, including the extracted title, abstract, authors (for accepted papers), and, if available, information on the paper type.
Personal data check To ensure that the published reviews pose no risks related to personal information, each peer review text in NLPEER, including the data adopted from prior work, was additionally validated by at least one NLP expert from our group. The initial analysis identified 25 potentially problematic cases, of which 10 were deemed relevant upon a second check. Most of these cases contained either anonymous OpenReview.net identifiers within the review texts or included notes to the area chairs within the review main body. Despite the low risk of cross-linking this information, we opted to discard these potentially privacy-relevant sentences from the respective reviews.
A.2 COLING-2020 Collection
The data for the COLING-20 dataset was collected during the 28th International Conference on Computational Linguistics. Independently of the actual reviewing process, we asked authors and reviewers to donate their anonymized drafts and reviews, respectively.
Workflow After reviewing was completed, we reached out to authors and reviewers, asking them to consent to the research use of their data and to grant a CC0 license to their texts, with a silence period of two years after COLING-20 acceptance decisions (October 2020). Reviewers and authors were informed about the risks of author profiling based on their provided textual artifacts. In total, roughly 1500 anonymous reviews and 150 drafts were donated in this way. To adhere to the principles proposed by Dycke et al. (2022), we only include reviews and paper drafts for which both authors and reviewers agreed to donation, thereby avoiding any possibility of leaking confidential research ideas from the papers.
A.3 ARR-2022 Collection
ACL Rolling Review (ARR) is the unified and continuous reviewing system of the Association for Computational Linguistics (ACL). Papers are submitted at regular intervals (the cycles) and receive reviews and a meta-review. In case of a positive meta-review, the paper becomes eligible for submission to any of the ACL conferences, including the Annual Meeting of the ACL and Empirical Methods in Natural Language Processing, where program chairs make the final acceptance decision. Otherwise, the paper is revised, resubmitted and typically reviewed by the same set of reviewers in a later cycle.
Workflow We followed the workflow with the exact same license transfer agreements proposed by Dycke et al. (2022) to collect peer reviewing data of the cycles September 2021 through January 2022, covering papers later accepted at the Annual Meeting of the ACL (ACL 2022) and the Annual Meeting of the North American Chapter of the ACL (NAACL 2022). Independently of the reviewing process, reviewers were offered the option to donate all peer reviews of each cycle in bulk anytime during the reviewing period. After acceptance decisions for the conferences were released, we reached out to the authors of accepted papers roughly one month before the actual conference took place. Authors and reviewers were informed about the risks of author profiling and review release. The collected dataset contains reviews for the final draft of a paper, but none of the previous revisions. Some reviews included in the dataset are therefore revisions of previous reviews or may contain references to those.
A.4 F1000-22 Creation
F1000Research is an open post-publication reviewing platform covering articles from various fields, from clinical medicine to scientific policy research to R package development, as well as different article types, including case studies, literature reviews, research articles and code documentation. Publications are published prior to acceptance, and then can be approved, rejected or approved-with-reservations by one or several invited reviewers. Publications can have multiple versions; each version is accompanied by open peer reviewing reports, author responses and amendment notes. All data on F1000Research is provided under an open license (CC-BY) in an easy-to-process JATS XML format.
Workflow F1000 Research provides an official API for collecting peer reviews and articles. We retrieved the index of articles and reviews in July 2022 and subsequently downloaded all articles with reviews, for all versions, in both XML and PDF format. We discarded 34 articles with invalid file formatting and roughly 2000 articles that lacked reviews for the first version, as these indicate stale submissions. We extract meta-data, reviews and author responses from the article JATS XML files.
A.5 Extended Datasets Analysis
We complement the domain overlap analysis between reviews by the same analysis on paper abstracts. The vocabulary overlap in Figure 7 is generated under the same configuration (Jaccard metric on the top 10% of the lemmas), but computed on paper abstracts for all datasets. We see that the vocabulary overlap in abstracts is very similar to the overlap in review texts. However, the absolute similarity values are overall lower. This supports the observation that reviews have a wider cross-domain shared vocabulary, while papers apparently employ a more specialized register.
across datasets in NLPEER. We therefore fine-tune well-established large language models (LLMs) on each of the tasks; while the training objectives and approaches vary, we omit extensive hyperparameter tuning of the LLMs and instead focus on a base set of hyperparameters close to the recommended ones. During a pilot study we observed relatively unstable training, which rendered extensive random hyperparameter search infeasible for the scope of this work. Table 4 depicts the different fine-tuning parameters per LLM used for all experiments unless stated otherwise. We use the huggingface transformers implementation of RoBERTa with roughly 125 million parameters, BioBERT with around 110 million parameters, and SciBERT with roughly 110 million parameters. We follow recommendations (Mosbach et al., 2021) for fine-tuning LLMs on comparatively small datasets. For all of the models and datasets, we allow up to 20 epochs of training, employ a linear warmup schedule (for 6% of the training steps) with non-bias weight decay of 0.1, and have 10 repeated measures on different random seeds to account for different random initializations of the task-specific classifier heads. We implement the training and testing pipeline in pytorch lightning using huggingface transformers. For each run, we select the model with the best performance on the validation set at the end of each epoch. We implement an early stopping mechanism that stops fine-tuning if no improvement is observed after at most 8 epochs.
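The sketch below summarizes this training configuration in PyTorch Lightning terms; the monitored metric name and module structure are illustrative assumptions, not the released pipeline.

```python
# Sketch of the fine-tuning setup described above (hyperparameter values from
# the text; names other than library classes are illustrative placeholders).
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping, ModelCheckpoint

MAX_EPOCHS = 20
WARMUP_FRACTION = 0.06   # linear warmup over 6% of training steps
WEIGHT_DECAY = 0.1       # applied to non-bias parameters only
PATIENCE = 8             # stop if validation does not improve for 8 epochs
SEEDS = range(10)        # repeated runs with different random seeds

def make_trainer():
    return pl.Trainer(
        max_epochs=MAX_EPOCHS,
        callbacks=[
            EarlyStopping(monitor="val_score", mode="max", patience=PATIENCE),
            ModelCheckpoint(monitor="val_score", mode="max", save_top_k=1),
        ],
    )
# The LightningModule (not shown) configures AdamW with the weight decay above
# and a linear warmup schedule, e.g. via transformers.get_linear_schedule_with_warmup.
```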
Repeated Measures
For each task and language model, we apply the fine-tuning procedure described above using the test and development sets. We repeat fine-tuning, including model selection, 10 times in total for each model and task. In each fine-tuning run we vary the random seed influencing the order of batches and the randomly initialized weights of the model. We report the used random seeds within the code provided along with the submission.
Stratified Splitting For the tasks review score prediction and guided skimming for peer review, we split the datasets with a special stratification criterion. For review score prediction, we require that the distribution of reviews per paper is similar across splits. For guided skimming, we make sure that the splits have a similar distribution of relevant paragraphs per paper. To achieve this, we employ sklearn's stratified, binary split function while mapping the considered numerical stratification criterion to a discrete space by assigning the real numbers to buckets. To obtain a three-way split, we perform repeated binary splits.
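A minimal sketch of such a stratified three-way split, assuming scikit-learn's train_test_split with a bucketized criterion, is shown below; the bucket edges and split sizes are illustrative assumptions.

```python
# Sketch: stratified paper-level splits over a continuous criterion
# (e.g. reviews per paper), discretized into buckets before stratification.
import numpy as np
from sklearn.model_selection import train_test_split

def three_way_split(paper_ids, criterion, buckets=(1, 2, 3, 5), seed=0):
    """criterion: per-paper counts (e.g. #reviews); bucket edges are illustrative."""
    strata = np.digitize(criterion, bins=buckets)
    # first binary split: train vs. rest
    train_ids, rest_ids, _, rest_strata = train_test_split(
        paper_ids, strata, test_size=0.3, stratify=strata, random_state=seed)
    # second binary split: dev vs. test
    dev_ids, test_ids = train_test_split(
        rest_ids, test_size=0.5, stratify=rest_strata, random_state=seed)
    return train_ids, dev_ids, test_ids
```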
B.2 Pragmatic Labeling
For pragmatic labeling we map the semi-structured review form of ARR and the labels of the F1000RD dataset (Kuznetsov et al., 2022) to the same set of labels. The mapping of labels is summarized in Table 5. We highlight that, unlike the manually curated labels of F1000-RD, the sentences of ARR-22 are just extracted from the review forms, which do not enforce full consistency with the respective section, implying a certain level of noise in the labels. For instance, some reviewers do mention strengths in the summary section of a review. Hence, our experiments are also targeted towards determining if this scalable approach to acquiring a supervision signal is feasible; a further, detailed analysis of the quality of labels is a promising future direction of research.
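The following sketch illustrates the idea of deriving sentence-level labels from review-form sections; the section keys and label strings here are illustrative placeholders, and the authoritative mapping is the one given in Table 5.

```python
# Illustrative mapping in the spirit of Table 5 (the exact section names and
# label set are defined in the appendix; these literals are assumptions).
ARR_SECTION_TO_LABEL = {
    "paper_summary": "neutral",
    "summary_of_strengths": "strength",
    "summary_of_weaknesses": "weakness",
    "comments_suggestions_and_typos": "request",
}

def label_review_sentences(review):
    """Assign every sentence the pragmatic label of its review-form section.

    `review` is assumed to be a dict mapping section name -> list of sentences.
    """
    labeled = []
    for section, sentences in review.items():
        label = ARR_SECTION_TO_LABEL.get(section)
        if label is None:
            continue  # skip sections without a mapped label
        labeled.extend((sentence, label) for sentence in sentences)
    return labeled
```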
B.3 Guided Skimming for Peer Review
We formulate guided skimming as a ranking task on the paragraphs of a paper considering their relevance to the writing of a peer review. To approach this task, we exploit explicit links from the review reports to the paper indicating that reviewers discuss a specific paragraph. Kuznetsov et al. (2022) propose a regular-expression-based algorithm to detect anchors (i.e. explicit mentions of structural elements in the paper) in review reports. The authors report an F1 score of 0.77 (with a precision of 0.81) for anchor identification and 0.64 (with a precision of 0.66) for their approach at matching explicit anchors to regions in the paper compared to human annotations. We conclude that the extraction of explicit links works reasonably well to be used as a proxy for reviewers' focus regions in the paper. Manual checks support this observation, in particular we see that explicit links seem to be identified with high precision, but comparatively low recall. Consequently, the resulting labels do contain certain levels of noise and reflect only a subset of the regions of the paper that are actually discussed in the reviews. We also highlight that implicit links in reviews, i.e. discussions of paper aspects that do not use explicit identifiers for paper regions, are not considered, as they are not readily available at scale. Table 6 shows the regular expressions used for the extraction of explicit anchors in reviews. We extend the existing set of rules by line numbers and formulas as supported in ARR-22 and ACL-17. For matching these, we rely on the layout information extracted heuristically from the PDFs allowing a reliable mapping of paragraphs to line ranges. We aggregate the explicit links of all reviews for a paper to derive the set of focused paragraphs. We omit the frequency of links to paragraphs and consider only binary labels (linked or not-linked), to simplify the task and avoid making additional assumptions. While we in principle allow for any kind of explicit link (to paragraphs, sections, etc.) during extraction, we only include those that can be mapped to one specific paragraph leaving only quotes and line references in practice.
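As an illustration of the line-number rules, the simplified sketch below extracts explicit line anchors from review sentences; the actual rule set in Table 6 and the released code is more extensive, and the regular expression here is an assumption for illustration only.

```python
# Illustrative anchor pattern for line-number mentions (simplified; the full
# rule set covers further anchor types such as sections, quotes and formulas).
import re

LINE_ANCHOR = re.compile(r"\b(?:line|lines|l\.)\s*(\d+)(?:\s*[-–]\s*(\d+))?", re.I)

def extract_line_anchors(review_sentence):
    """Return (start, end) line ranges explicitly mentioned in a review sentence."""
    anchors = []
    for match in LINE_ANCHOR.finditer(review_sentence):
        start = int(match.group(1))
        end = int(match.group(2)) if match.group(2) else start
        anchors.append((start, end))
    return anchors

# A paragraph whose line range (from the PDF layout) overlaps any extracted
# anchor is marked as "linked" for the skimming task.
extract_line_anchors("The claim in lines 213-220 is not supported.")  # [(213, 220)]
```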
Relevant Paragraph Distribution within Papers
In total, the papers of ARR-22 have 12636 paragraphs, of which 1100 (roughly 9%) are linked. In the ACL-17 dataset, 600 of 4642 paragraphs (13%) are referenced by reviewers explicitly. We investigate the types of explicit links in each dataset. For ARR-22, 64% of the relevant paragraphs are extracted from line mentions in the reviews, while for ACL-17 these amount to 57%. Figures 8 and 9 show the histograms of text lengths for the relevant (i.e. linked) and not relevant (i.e. not linked) paragraphs of the papers. Overall, the length of relevant paragraphs tends to be higher than that of non-relevant paragraphs. We suspect that this phenomenon is encouraged by two interacting factors: first, paper parses are not perfect, especially when splitting PDFs into structural elements, leading to differently sized paragraphs in addition to naturally occurring size variations. Second, in combination with the first factor, longer paragraphs are simply more likely to be linked, because they have more content that can be discussed. This encourages a further line of work based on length-normalized paragraphs; we also investigate paragraph length as a spurious feature for the skimming task in the following. Further, we investigate the position of relevant paragraphs within the document hierarchy (sections, subsections, etc.) of the papers. For ARR-22 around half of the paragraphs originate from the level of sections rather than sub-sections, while for ACL-17 roughly two thirds lie on section level. This suggests no substantial bias towards a level in the document hierarchy.
Finally, we analyse the distribution of linked paragraphs within a set of canonical sections typical for NLP papers, including e.g. introduction, method, and results. Figures 10 and 11 show the frequency of links to each of these canonical section types. We map non-canonical section titles to the "other" category. In both datasets the number of explicit links pointing to the introduction is the highest when ignoring the section type "other". For ARR-22 this phenomenon is less pronounced, while the results section is slightly more often referenced than in ACL-17. Overall, there appear to exist natural areas of focus for reviewers as approximated by explicit links, but paragraphs of many different section types are in fact covered in both datasets. As our baseline models do not take structural information into account, the observed slight skew towards certain sections is unlikely to be the primary explanation for the non-trivial performance.
Table 6: Selection of the regular expressions used for identifying explicit anchors in review reports. The horizontal line separates the rules proposed by Kuznetsov et al. (2022) from the extended rules introduced in this work. The full set of rules is provided in the code associated with this work. For each type of explicit link, an example is provided (e.g. "There is an error in equation 3" for an equation anchor).
C Detailed Results
C.1 Review Score Prediction
We report the complete and more detailed results of the review score prediction models including all tested large language models (RoBERTa, BioBERT, SciBERT) and the mean score baseline in Table 7.
Additional Metrics All metrics are computed on the actual review score scale; hence model outputs are mapped back from the normalized scale. In addition to the mean root squared error (MRSE) and the F1-macro on discretized output predictions, we report the R² metric to measure the degree of fit of the regression models, as well as an additional score diversity criterion. For this criterion, we compute the overall score distribution for a model across all samples of the test set and compare it with the true, human score distribution (using KL-divergence) to measure whether the diversity of model scores aligns with human reviewers. A model that learns reviewers' rating behavior well should also predict scores from a similar range as humans do. Models that over-focus on or ignore individual scores would receive high values on this criterion; hence, lower is better in this case.
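A minimal sketch of this diversity criterion is given below, assuming discretized scores and one possible direction of the KL divergence; the rating scale and smoothing constant are illustrative assumptions.

```python
# Sketch: KL divergence between the model's and the human score distributions
# over the discrete rating scale (direction and smoothing are assumptions).
import numpy as np
from scipy.stats import entropy

def score_distribution(scores, scale, eps=1e-9):
    """Smoothed histogram of (discretized) scores over the rating scale."""
    counts = np.array([np.sum(np.isclose(scores, s)) for s in scale], dtype=float) + eps
    return counts / counts.sum()

def score_kl(model_scores, human_scores, scale):
    p_model = score_distribution(model_scores, scale)
    p_human = score_distribution(human_scores, scale)
    return entropy(p_model, p_human)  # lower = model score diversity closer to humans

# e.g. an overall-score scale from 1 to 5 in half-point steps (illustrative):
# score_kl(predicted, gold, scale=np.arange(1.0, 5.5, 0.5))
```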
KL-Divergence Interpretation
We see that the mean score baseline consistently shows the highest values, meaning the lowest similarity to the human score distribution. While the best model according to MRSE tends to have one of the lowest values, this is not consistently true (e.g. RoBERTa on F1000-22). For COLING-20 and CONLL-16 the models perform on par with the mean baseline, suggesting that they converged to predicting scores very close to the mean. Overall, even the best models still show notably different rating behavior from humans across all samples and datasets.
C.2 Pragmatic Labeling
We extend the results presented in the main body of the paper by a concise error analysis for the within and out-of dataset experiments. In the following report the confusion matrices for the best performing LLM (RoBERTa) of the model that achieved the reported median performance. Table 7: MRSE is measured on non-normalized scores (i.e. absolute values cannot be compared across datasets); here, lower is better. F1-macro computed on discretized scores, where higher is better. KL-Divergence is computed relative to the human score distribution across all samples of each dataset; hence, a lower value indicates a more similar distribution compared to humans. We report the median score over ten runs and provide the standard deviation.
Within-dataset Errors
and vice versa. We highlight that in our experiments the class neutral subsumes the more fine-grained neutral labels of F1000-RD like summary or structure, which might be one factor contributing to harder delineation.
In Table 9 we report the confusion matrix for RoBERTa on the ARR-22 dataset. Generally, we see a similar pattern as for F1000-RD: the neutral class is most often confused with the others. As the review forms are not strictly enforced, it is likely that strengths and weaknesses are already reported in the summary section, which corresponds to the neutral label, encouraging this confusion. Likewise, several neutral, factual sentences exist in the strengths and weaknesses sections. This shows one limitation of using structured review forms as a proxy for review sentence pragmatics. Interestingly, we see that requests and weaknesses are very commonly confused. While the noisy supervision labels might again be a contributing factor, this aligns with the highest human disagreement reported for these two classes by Kuznetsov et al. (2022).
Out-of-dataset Errors
We inspect the errors of the best performing models trained on ARR-22 and tested on F1000-RD, and vice versa. Table 10 reports the confusion matrix for the first case. We see that, unlike for the model trained in-domain on F1000-RD, weaknesses are frequently confused with requests and vice versa. This confirms that the requests and weaknesses sections in the ARR review forms lead to the most ambiguity in the supervision labels, as already hypothesized in the previous paragraph. As for the in-domain trained model, the neutral class is the hardest to predict for the model. Overall, the transfer performance lies in a promising range, suggesting that further efforts on few-shot and cross-domain transfer of models are good future directions of research.
The transfer of a model trained on F1000-RD to ARR-22 aligns with previous observations for the within-dataset evaluation on ARR-22: most confusion is observed for the neutral class. Additionally, the model predicts the class request very frequently for sentences belonging to weaknesses. This supports the hypothesis that many sentences in the weakness section of ARR are actually requests, as the model trained on F1000-RD is based on human gold annotations.
C.3 Guided Skimming for Peer Review
In addition to the at-k-measures and aggregate metrics of the skimming performance, as reported in the main body of this work, we provide the full results of all models and additional metrics in this section.
Paragraph Length Baseline As reported in B.3, the length distribution of linked and non-linked paragraphs might be a useful spurious feature for ranking the paragraphs by relevance to the guided skimming process. While the truncation of model inputs of the used large language models to 512 tokens makes it unlikely that the models are at risk of exploiting sequence length to achieve non-trivial performance, future approaches using, for instance, the full paper text or full paragraphs might do so. Hence, in the following we also report the performance of the baseline that ranks paragraphs by their number of characters. Table 12 reports the performance of the LLMs and the baselines on ACL-17. We consider the mean reciprocal rank (MRR) and the area under the receiver operating characteristic curve (AUROC) as overall measures of the ranking quality in addition to the at-k-measures. For ACL-17, SciBERT shows the best performance according to MRR, AUROC and Precision@3. While the model performs substantially above the random baseline, the margin towards the length baseline is small, especially for the AUROC. Overall, the ranking produced by SciBERT leads to a higher precision in the top ranks, but seemingly performs on par with the length baseline for the overall ranking performance. This shows the difficulty of the task at hand, but at the same time suggests that there is a useful training signal in the paragraph texts beyond their mere length.
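For clarity, the paragraph-length baseline and its AUROC evaluation can be sketched as follows; this is a simplified illustration, not the released implementation.

```python
# Sketch of the paragraph-length baseline: score each paragraph by its
# character count and evaluate the ranking with AUROC over the binary labels.
import numpy as np
from sklearn.metrics import roc_auc_score

def length_baseline_auroc(paragraphs, linked_labels):
    """paragraphs: list of strings; linked_labels: 0/1 per paragraph (1 = reviewer-linked)."""
    scores = np.array([len(p) for p in paragraphs], dtype=float)
    return roc_auc_score(linked_labels, scores)
```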
Ranking Measures
The results on ARR-22, as shown in Table 13, are very similar. Although all models perform above random, the paragraph length baseline is hard to beat. Here, we can observe some improvements in terms of recall and precision in the top ranks, but the overall ranking performance lies below the length baseline.
Table 11: Confusion matrix of RoBERTa trained on F1000-RD and transferred to ARR-22. The entry in row i and column j denotes that the label j was predicted by the model for a sample of true class i.
Conclusion The length of the paragraphs seems to be a relevant feature that might be exploited as a spurious decision criterion by the models. However, especially in the top ranks for ACL-17 (see Table 12), the LLMs seem to pick up information beyond mere paragraph length. We suspect that structural and contextual information would be beneficial for this hard task and would increase the margin towards the paragraph length baseline. Additionally, more elaborate training regimes considering list-wise losses are interesting future directions. To eliminate the risk of learning paragraph length as a spurious pattern, the normalization of paragraph lengths by different segmentation techniques is promising.
A2. Did you discuss any potential risks of your work?
Yes, in the Conclusions and Intended Use, as well as the Applications and Limitations sections.

A3. Do the abstract and introduction summarize the paper's main claims?
Section 1.

A4. Have you used AI writing assistants when working on this paper?
Grammar correction and spell checking.

B. Did you use or create scientific artifacts?
Section 3.

B1. Did you cite the creators of artifacts you used?
Yes. Moreover, we attribute the individual content whenever possible or requested by the text authors.

B2. Did you discuss the license or terms for use and / or distribution of any artifacts?
Section 3.

B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?
Section 6.

B4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?
Section 3.3.

B5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?
Yes, to the extent possible, given the anonymity of parts of the data. Section 3.

B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.
Yes, Table 1 gives an overview over all data and Appendix B details the different experimental data configurations.

C. Did you run computational experiments?
Section 5.

C1. Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?
Appendix B.

The Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance.
|
2022-11-15T06:42:49.586Z
|
2022-11-12T00:00:00.000
|
{
"year": 2023,
"sha1": "54a316ecfb97352e55c5e85c06ccf7d013c4993b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "64d0f9341dca2faeb8aad7ba2658e690ee45ef30",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
40570255
|
pes2o/s2orc
|
v3-fos-license
|
Incidence, Contributing Factors and Outcomes of Antepartum Hemorrhage in Jimma University Specialized Hospital, Southwest Ethiopia
Background: Antepartum haemorrhage complicates three to five percent of pregnancies contributing to perinatal and maternal morbidity and mortality. Timely access to quality obstetric services is the major determinant of both maternal and newborn outcomes after antepartum haemorrhage. In Ethiopia, the magnitude and consequences of antepartum haemorrhage are not well studied. The objective of this study was to determine the incidence, factors associated with and maternal and perinatal outcomes of antepartum haemorrhage in Jimma University Specialized Hospital. Methods: A hospital-based prospective cohort study was conducted in Jimma University Specialized Hospital, from January 1 to December 31, 2013. Data were collected by reviewing medical records and interviewing mothers. Cumulative incidence of antepartum hemorrhage among mothers who gave birth and odds of adverse outcomes among mothers with and without antepartum hemorrhage were calculated. Odds ratio was calculated to estimate the effect of antepartum hemorrhage on maternal and new born adverse outcomes. Results: Between January and December 2013, 3854 women gave birth in JUSH. The incidence of antepartum hemorrhage was 5.1% (n=195) in 2013. The major causes of antepartum hemorrhage were abruptio placentae and placenta previa occurring in 127(65.1%) and 52(26.7%) of cases, respectively. Six (3.1%) of the patients with antepartum hemorrhage died. Of the 206 babies born, 63 (30.6%) were stillborn and additional 13 (6.3%) newborns died during the first seven days of life making perinatal mortality rate of 36.9%. Conclusion: Antepartum hemorrhage is a common complication of pregnancy and cause of maternal and perinatal mortality in Jimma University Specialized Hospital. The risk of adverse outcomes is very high compared to other countries. Efforts to improve access and quality of comprehensive emergency obstetric care services are required.
Introduction
Adverse pregnancy outcomes including maternal and perinatal morbidity and mortality constitute major public health problems in the developing world. The risk of dying from maternal causes is 100 times higher for women in developing countries compared to those in developed countries. Developing countries contribute to 99% of the world's maternal deaths (1,2). In 2013, sixty percent of Sub-Saharan African countries had a Maternal Mortality Ratio (MMR) of above 400 maternal deaths per 100,000 live births (3). Ethiopia has one of the world's highest MMRs at 676 maternal deaths per 100,000 live births in 2011 (4), showing no reduction from its 2005 level of 673 maternal deaths per 100,000 live births (5). Despite reductions observed during the last decade, perinatal mortality also remained high compared to other developing and developed countries (6). For the period 2006 to 2011, the average perinatal mortality rate in Ethiopia was 46 perinatal deaths per 1,000 pregnancies of seven or more months of gestation (4). Obstetric haemorrhage remains one of the major causes of maternal deaths (7-10), and one of the primary obstetric causes of perinatal mortality (11-13). Antepartum hemorrhage (APH), bleeding from the genital tract of a pregnant mother with a viable fetus before the onset of labour, complicates 3.5% of pregnancies and it constitutes one of the reasons for emergency hospital visits among pregnant women (14,15). Abruptio placentae and placenta previa are the major causes of APH (16,17). Even though a number of obstetric and non-obstetric situations are identified as risk factors for APH (17-21), it still remains a predominantly unpredictable condition (14). Access to quality basic and emergency obstetric care services remains the major plausible explanation for disparities in the risk of maternal and perinatal mortality and morbidity from APH in different parts of the world (22). Once it occurs, hemorrhage is likely to be fatal to the mother or her baby in situations where actions cannot be taken immediately to stop further bleeding, replace excessive blood loss, and prevent fetal complications (15,23,24). Evidence shows that variability in the burden of APH is primarily a result of variations in outcomes instead of variations in incidence, suggesting the vital role that improved obstetric care can play in addressing the issue. Studies from Africa have shown comparable prevalence of causes of APH compared to those from Europe and the United States; however, maternal and newborn outcomes of APH varied vastly between developing and developed countries (25). Information regarding the magnitude, causes and consequences of APH is limited in Ethiopia. A study from Hawassa University Hospital, a referral hospital in Southern Ethiopia, conducted on placenta previa and abruptio placentae patients who visited the hospital between January 2006 and December 2012 reported a perinatal mortality rate of 50%. This research was indicative of the high incidence of adverse pregnancy outcomes after APH (26). The aim of this study was, therefore, to determine the magnitude, contributing factors and outcomes of women presenting with APH in Jimma University Specialized Hospital (JUSH).
Materials and Methods
Study Setting: The study was conducted from January 1, 2013 to December 31, 2013 in JUSH, a teaching hospital located in Jimma town of Oromia Regional State, Ethiopia. Located 357 km from Addis Ababa, JUSH is the only specialized referral hospital in the South Western region of Ethiopia. The hospital has a predominantly rural catchment population of 15 million people for tertiary level care. In 2004 Ethiopian fiscal year (2011/2012 GC) the hospital has provided service to 12,266 emergency cases, 136,332 outpatient clients and 18,478 admitted patients. The labor ward has given services to 3,775 deliveries among which 927(24.6%) were cesarean deliveries. JUSH is also serving as a clinical postgraduate specialty teaching hospital for different specialties including Obstetrics and Gynecology and Pediatrics & Child Health, since 2005. The Department of Obstetrics and Gynecology has a labor ward with six beds in first stage room, four delivery couches in the second stage room, three beds in recovery unit and forty beds in maternity ward, along with two operation rooms. The ward is staffed with eight obstetrics and gynecology specialists, 25 midwives, 16 clinical nurses, and 33 residents of different years (levels) of study.
Study Design: A hospital-based prospective cohort study design was employed. All women who were admitted to the labor/maternity ward during the study period were followed until discharge from the hospital or seven days after giving birth, whichever came first. Women presenting with uterine rupture were excluded because of difficulties to ascertain the diagnosis of antepartum hemorrhage.
Measures: Cumulative incidence of APH was determined by using the number of cases identified during the one year study period and the aggregate number of mothers who were admitted to the maternity/labor ward of JUSH. Patient characteristics, maternal mortality, and perinatal mortality were measured among APH cases.
Data Collection: Data were collected by reviewing medical records and interviewing mothers. A pretested structured questionnaire was used to collect data regarding patient characteristics, causes of APH, maternal outcomes and newborn outcomes for each APH case. Data on APH cases was collected by eight second year obstetrics and gynecology residents who were trained on how to complete the data collection questionnaire during patient follow-up period. Newborn outcome for neonates referred to neonatology unit was obtained by reviewing patient records and registration books in the neonatology unit. Aggregate data on total number of mothers who gave birth in the hospital during the study period was obtained by reviewing registration books of the labor/maternity ward. The data collection process was supervised by one of the principal investigators, a resident in the department during the data collection period.
Data Management and Analysis: Data were checked for completeness, cleaned and entered into SPSS version 16.0 on a daily basis. The final dataset was analyzed with the same software using descriptive statistics, and binary logistic regression was also used.
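For illustration only, the sketch below re-computes the cumulative incidence from the counts reported in the Results section and states the generic odds-ratio formula; the actual analysis was performed in SPSS, and no counts beyond those reported in the text are assumed.

```python
# Illustrative re-computation of the headline figures (not the original SPSS analysis).
aph_cases, total_births = 195, 3854
cumulative_incidence = 100 * aph_cases / total_births   # about 5.1% of mothers who gave birth

def odds_ratio(a, b, c, d):
    """OR for a 2x2 table: a/b = events/non-events among exposed, c/d among unexposed."""
    return (a / b) / (c / d)
```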
Ethics: Before data collection, Ethical Review Committee of the College of Public Health and Medical Sciences, Jimma University has approved this study. Written consent was obtained from all APH patients included in the study.
Results
Between January 1 and December 31, 2013, a total of 3,854 women gave birth in JUSH. One hundred ninety-five of them were diagnosed with APH, giving a cumulative incidence of 5.1% in 2013. The distribution of socio-demographic characteristics among women with APH broadly reflects the population composition of reproductive age women in the catchment area; the majority of the mothers included in this study were Oromos (81%), Muslims (69.2%), housewives (57.4%), married (96.9%), and illiterate (45.1%). The average age of the women was 26.6 years with a standard deviation of 5.9 years. Two thirds of the mothers were between 21 and 34 years of age (Table 1).
Causes of APH
Abruptio placentae and placenta previa were the major causes of APH, established as the final diagnosis in 127 (65.1%) and 52 (26.7%) of APH patients, respectively. Other causes, including leech infestation and unknown causes, accounted for 16 (8.2%) of the cases. The incidence of abruptio placentae and placenta previa was 3.3% and 1.4%, respectively, among mothers who gave birth in JUSH in 2013.
Among patients with placenta previa, 41 (78.9%) had placenta previa totalis. Placenta previa partialis and placenta previa marginalis posterior were the second most common types of placenta previa, each accounting for 5 (9.6%) of placenta previa patients. There was only one (1.9%) mother with low lying placenta, and no mother was diagnosed with placenta previa marginalis anterior. Of the 127 patients with abruptio placentae, just over half (52%) had grade 1 or grade 0 according to Sher's grading criteria (Grade I (retrospective): not recognized clinically before delivery, small retroplacental haematoma discovered on the maternal surface of the placenta after delivery, no APH; Grade II: mild vaginal bleeding, uterine tenderness and tetany, no fetal distress, no maternal shock; Grade III: severe vaginal bleeding, uterine tenderness and tetany, fetal distress then death, maternal shock, subdivided according to DIC as IIIa: without DIC, IIIb: with DIC), diagnosed in 34 (26.8%) and 32 (25.2%) of patients, respectively (Figure 1 and Figure 2).
Access to and Quality of Services
The majority, 158 (81%), of the mothers with APH had at least one prenatal care visit to a health facility. Fifteen (28.8%) of patients with placenta previa and 19 (15%) of those with abruptio placentae had no prenatal visits. One hundred sixty-three (83.6%) of the patients were referred from another health facility. At the time of presentation, 138 (70.8%) of them had vaginal bleeding while the rest had only concealed bleeding (concealed bleeding is when a patient's bleeding is not recognized clinically before delivery but the patient is diagnosed with abruptio placentae after delivery of the baby and placenta, with a small retroplacental hematoma discovered on the maternal surface of the placenta). For patients with revealed vaginal bleeding, the median time of presentation from start of vaginal bleeding to time of assessment by a physician was 12 hours (IQR: 5 to 24 hours). On presentation, 75 (38.5%) of the mothers had excessive vaginal bleeding (as described by treating physicians) and 48 (24.6%) had deranged vital signs (hypotension: blood pressure less than 90/60 millimeters of mercury; tachycardia: pulse rate more than 100 beats per minute). Data on whether an intravenous (IV) line was secured or not on presentation were available for 117 mothers with revealed vaginal bleeding, all of whom were referred from another health facility. Among these mothers, an IV line was secured on referral for only 29 (24.8%) (Table 2). Cesarean delivery (CD) was the common mode of delivery, used in 106 (54.4%) of APH patients. It was employed in 49 (94.2%) mothers with placenta previa, of which 39 (79.6%) were done as emergency. Fifty-four (42.5%) of mothers with abruptio placentae delivered by CD; 21 (16.5%) delivered with instrumental delivery, forceps being the commonest. Non-reassuring fetal heart beat pattern (bradycardia) was the major indication for CD in mothers with abruptio placentae, documented in 36 (66.7%) of cases (Table 2). The major reason for referral to the NICU was preterm delivery, 27 (57.4%). The major cause of neonatal death in the NICU was respiratory failure, accounting for 9 (69.2%) of the total NICU deaths during the first seven days of life among referred neonates.
Binary logistic regression with contributing factors to APH, address, maternal age, and number of fetuses as predictors showed that only address was a significant predictor of perinatal mortality. Newborns born to mothers with APH from Jimma town (where the study hospital is located) had 0.47 times the odds of dying during their perinatal life (OR = 0.47; 95% CI: 0.236 to 0.956) compared to those from outside of Jimma.
Maternal Outcomes
Six (3.1%) of the patients with APH died during the peripartum period, four of them because of hypovolemic shock secondary to bleeding. The other two deaths were due to respiratory failure. One (1.7%) maternal death occurred among APH cases from Jimma compared to 5 (3.7%) among cases from outside of Jimma. A number of complications were diagnosed among surviving patients too. Postpartum hemorrhage and anemia (more than 10% fall in hematocrit) were the commonest postpartum complications, diagnosed in 73 (37.4%) and 74 (38.0%) of the cases, respectively. Hysterectomy was done for six (3.1%) patients with uncontrolled postpartum hemorrhage. Thirteen (6.7%) of the patients developed endomyometritis during the postpartum period. Sixty-five (34.4%) of the patients were discharged within two days of admission; 121 (62.1%) stayed for three to seven days; and three (1.5%) were hospitalized for longer than a week (Table 4).
Discussion
The incidence of APH in JUSH was 5.1%, 3.3% for abruptio placentae and 1.4% for placenta previa. The incidence rates observed in JUSH were higher than reports from other studies conducted elsewhere (14,15,21,27-29). This higher figure, however, does not directly reflect a higher incidence of APH among mothers in the hospital's catchment, as most uncomplicated deliveries in Ethiopia occur at home; according to the 2011 EDHS, the institutional delivery rate was only 9.9% (4). Moreover, JUSH is a referral hospital to which complicated cases in its catchment are more likely to be referred.
High risks of perinatal mortality (36.9%) and maternal mortality (3.1%) were observed among APH patients in our study hospital. In 2013, JUSH reported a total of 371 stillbirths and 31 maternal deaths (30). APH patients accounted for 63 (17.0%) and 6 (9.7%) of these reported perinatal and maternal deaths, respectively. Fatality rates associated with APH, both for mothers and their newborns, are tremendously higher compared to other developing and developed countries (28,31-33). A retrospective study of 100 patients with APH in France, for example, found no maternal deaths and a perinatal death rate of only 19% among APH patients, in a situation where 67% of deliveries occurred prematurely (31). Our finding is in congruence with another study from Southern Ethiopia that reported a very high perinatal mortality rate among newborns of patients with placenta previa and abruptio placentae (26). This suggests that adverse outcomes among APH patients are a national problem requiring attention from the health sector.
Our study suggested that poor access to comprehensive obstetric care was a major contributor for the observed high fatality rates. Risks of perinatal and maternal mortality were higher for patients who came from outside of Jimma compared to patients from Jimma indicating the potential role of delay in receiving care.
The study also showed that quality of care was a problem even after a patient comes into contact with the health system. Only a quarter of patients referred to JUSH had an IV line secured on referral even though they had vaginal bleeding. Blood transfusion is a lifesaving component of comprehensive emergency obstetric care for mothers with obstetric hemorrhage (34). In our study, however, we found that only 18.5% of patients were transfused with at least one unit of blood even though it was indicated for more than half of the patients. This highly conservative use of blood transfusion could possibly explain the high maternal mortality in our hospital, where PPH was the cause of death for two thirds of the maternal deaths.
Prematurity was the major neonatal problem among newborns to mothers with APH in our study. Quality of neonatal resuscitation and other NICU services is therefore a critical component of the continuum of care to prevent early neonatal death. This study didn't assess the processes of care in the NICU; however, the high mortality rate among neonates admitted to NICU compared to findings from other studies (25,28,31,33) is indicative of the need for improvement in quality of NICU services in the study hospital.
A major limitation of this study is associated with our inability to obtain information on how mothers who did not make it to the study hospital were treated, to obtain death certificates of mothers who died before they arrived at the study hospital, and to follow discharged mothers and newborns until day seven after delivery. As a result, perinatal and maternal mortality might have been underestimated if additional deaths occurred at home among discharged newborns. However, even with such a limitation, which has the potential to underestimate mortality, the risk of perinatal and maternal mortality among APH cases in our study was much higher than findings from other studies. Therefore, the limitation will not have an impact on the validity of our conclusion.
In conclusion, APH, primarily caused by abruptio placentae and placenta previa, is a common complication of pregnancy and cause of maternal and perinatal mortality. The risk of adverse maternal and newborn outcomes including maternal mortality, perinatal mortality and low birth weight is higher among APH cases in JUSH compared to reports from other countries. Efforts to improve geographical access, referral services and quality of comprehensive emergency obstetric care are needed to improve maternal and newborn outcomes in Ethiopia.
|
2019-03-07T14:02:10.951Z
|
2015-01-01T00:00:00.000
|
{
"year": 2015,
"sha1": "9b84e581c8af09905db3f28c29eaca3389298f33",
"oa_license": "CCBY",
"oa_url": "http://www.hrpub.org/download/20150620/UJPH3-17603820.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "90348eebe48d6529056e86323dfd986e8905af71",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
149624868
|
pes2o/s2orc
|
v3-fos-license
|
The outcome of stapedotomy in adult patients with clinical otosclerosis in Erbil
* Department of Otolaryngology, College of Medicine, Hawler Medical University, Erbil, Iraq.
Background and objective: Otosclerosis is a primary disease of the temporal bone that leads to stapes fixation. Hearing loss and tinnitus are the main symptoms. Treatment includes surgery, medical treatment, and sound amplification therapy alone or in combination. This study aimed to evaluate the functional outcomes of patients with a clinical diagnosis of otosclerosis undergoing primary stapes surgery in Erbil city.
Methods: A retrospective descriptive study. A total of 32 patients with clinical otosclerosis underwent unilateral stapedotomy in the specialized center between September 2011 and September 2013. These included 20 females and 12 males, aged 21 to 48 years; their mean age (±SD) was 31.9 (±10.91) years.
Results: The average preoperative and postoperative air conduction threshold was 51.13 and 23.91 dB, respectively. The mean preoperative and postoperative bone conduction threshold was 21.53 and 16.21 dB, respectively. The average preoperative and postoperative air-bone gap was 29.03 and 8.51 dB, respectively. All 32 ears (100%) had a residual air-bone gap <10 dB.
Conclusion: Stapes surgery showed significant functional hearing outcomes in this study. The very significant reduction in the air-bone gap is a good indicator of the success of the surgery.
Introduction
Otosclerosis is defined as a continuous process of bone remodeling in which there is an alteration in bone metabolism of the otic capsule in the form of bone resorption and re-deposition. Unlike other similar bone diseases, it does not occur outside of the temporal bone. The formation of centers of newly constructed bone usually occurs in the area of the oval window and annular ligament, leading to stapes fixation.1 Otosclerosis was first described by Valsalva in 1735 as ankylosis of the stapes to the margins of the oval window.2 It is well known that otosclerosis has clinical and histological forms. The clinical form of otosclerosis refers to the presence of symptoms like hearing loss and tinnitus, while in the histological form the disease is present without symptoms. Histological otosclerosis has been shown to be about ten times more common than clinical otosclerosis.3 The overall incidence of otosclerosis reveals variability in distribution according to race, gender, geographic location, familial incidence, pregnancy, and age. The disease occurs more frequently in the Caucasian (white) race than in other races.4,5 It is less common in Asians and rare in Africans. There has been an increasing incidence of otosclerosis in Japan.6 The otosclerosis process usually affects young adults and people between 15 and 45 years of age.7 The incidence of otosclerosis described in the literature ranges between 0.3 and 2%. Souza et al.8 indicated that clinical otosclerosis is present in 0.5% to 1.0% of the population. In 2001, Declau et al.9 stated that clinical otosclerosis has a prevalence of 0.3% to 0.4% among the white ethnic population. A recent Jordanian study10 reported that the incidence of clinical otosclerosis in the general population is about 2%. The majority of patients with otosclerosis have the disease in both ears; bilateral symptoms have been reported in 70% to 85% of cases.8 Many studies have indicated that the clinical form of otosclerosis is twice as common in women as in men.11 However, when it comes to the histological form of the disease, the ratio between women and men is 1:1.4 Current research suggests that heredity, genetic malformations, viral infection, trauma, endocrine disorders, and autoimmune diseases play a role in the etiology of otosclerosis. However, none of the hypotheses is accepted as a unique etiopathogenetic theory.12 Most authors agree that this disease is transmitted by autosomal dominant inheritance with varying degrees of penetrance of the gene responsible for the development of the disease.13,14 Recent studies indicate the existence of nine different chromosomes containing genes responsible for the development of otosclerosis.15 Furthermore, genetic investigations have identified seven loci (OTSC1-5, OTSC7, and OTSC8), although none of the corresponding genes have been found.16,17 It is well known that pregnancy may trigger the onset of otosclerosis or worsen it. Measles virus infection is another risk factor implicated in the development of the disease.10 The involvement of the stapes footplate in patients with otosclerosis causes conductive or mixed hearing loss. Depending on the extent of the process, conductive hearing loss can range from 30 dB to 50 dB.18 However, sensorineural hearing loss eventually occurs, and its cause has not yet been determined. One of the theories behind the sensorineural hearing loss in otosclerosis is the invasion of the spiral ligament by the disease.19 In 1919, Wittmaack suggested that sensorineural hearing loss occurs as a consequence of toxic or inflammatory material deposited within the cochlea.8 The definitive diagnosis of otosclerosis is always made by surgical exploration, which confirms the immobility of the stapes, and is then verified histopathologically. Clinical symptoms of otosclerosis include progressive hearing loss and tinnitus; in rare cases, dizziness may occur as well. Historically, otosclerosis has been treated both medically and surgically. Among the factors that may inhibit the disease process are fluorides, cytokine inhibitors, and bisphosphonates; however, medical intervention has not yet been shown to prevent or slow the disease.20 Amplification with hearing aids or assistive devices has been indicated. Surgical correction of the conductive hearing loss is highly effective. One of the most important developments in the surgical treatment of conductive hearing loss caused by otosclerosis was the first stapedectomy, performed by John J. Shea, Jr. in May 1956.21 Since the surgery by Dr. John J. Shea, numerous techniques have been introduced in an effort to achieve optimal improvement in the hearing loss. Stapedotomy was later established as the gold standard procedure because the limited opening of the vestibule was found to significantly reduce the risk of inner ear damage; it was first performed by Professor Henri Andre Martin.10 The main aim of stapes surgery today remains elimination of the air-bone gap (ABG), or a significant reduction to within 10 dB. Among large series of stapedotomies, reported ABG closure (to <10 dB) varied from 94% (n = 2368) to 75% (n = 861).22,23 Patient characteristics, surgical experience, and intraoperative findings may be considered potential prognostic factors affecting postoperative audiometric results. This study aimed to evaluate the effectiveness of stapedotomy in improving hearing in patients with conductive hearing loss due to otosclerosis.
Patients
A prospective quasi-experimental study of functional hearing results was performed before and after surgical treatment of otosclerosis using the stapedotomy method at a specialized center in Erbil city, Kurdistan Region. The study included patients who were surgically treated from September 2011 through September 2013. All patients were treated by the same surgeon, using the same operative technique (stapedotomy). During that period, 32 patients underwent surgical treatment (20 females and 12 males), aged 21 to 48 years; their mean age (±SD) was 31.9 (±10.91) years.
Statistical analysis
Data were analyzed using the statistical package for the social sciences (SPSS, version 19). The paired sample t-test was used to compare means before and after the procedure. A P value of ≤0.05 was considered statistically significant.
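As a minimal illustration of the comparison described above, the following Python sketch applies a paired-sample t-test to pre- and postoperative values; the numbers are invented placeholders for demonstration and are not the study data.

```python
# Minimal sketch of the paired-sample t-test used to compare pre- and
# postoperative thresholds. The values below are invented placeholders,
# not the study data.
from scipy import stats

preop_abg = [32, 28, 35, 25, 30, 27, 33, 29]   # hypothetical pre-op air-bone gaps (dB)
postop_abg = [10, 7, 12, 6, 9, 8, 11, 7]       # hypothetical post-op air-bone gaps (dB)

t_stat, p_value = stats.ttest_rel(preop_abg, postop_abg)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value <= 0.05:
    print("Difference is statistically significant at the 0.05 level")
```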
Surgical procedure
The operation was performed via an endaural approach under general anesthesia. The tympanomeatal flap was elevated, and bone from the posterior scutum was removed with a curette or drill to expose the oval window and the stapes. The mobility of the stapes was checked by mobilizing the malleus handle with a needle. After separating the incudostapedial joint with a joint knife and cutting the stapedial tendon with scissors, the posterior crus of the stapes was divided with scissors. The anterior crus of the stapes was subsequently down-fractured with a microhook and removed. The prosthesis was sized by measuring the distance from the footplate to the long process of the incus. A fenestra of 0.6 mm in diameter was made at the junction of the posterior one-third and anterior two-thirds of the footplate with a micro-perforator (microdrill). A Teflon piston prosthesis was used in all cases. The prosthesis was positioned after adjustment of its length.
Audiometric assessment
The diagnostic algorithm included the following procedures: anamnesis, clinical examination, tonal audiometry, tympanometry, and stapedial reflex testing. All functional diagnostic procedures were conducted at the specialized audiology center using a tonal audiometer (Interacoustics AA-222 Audiotraveller) and tympanometry. Hearing was measured in a soundproof booth with patients wearing calibrated headphones (TDH39) for conventional-frequency audiometry. First, pure tones at 0.25, 0.5, 1, 2, 3, 4, 6, and 8 kHz were presented to one ear at a time, and thresholds for bone-conducted sound were measured by placing a calibrated vibrator on the mastoid process while presenting tones at 0.5, 1, 2, and 4 kHz. The hearing threshold was identified using the modified Hughson-Westlake method, as recommended by the International Standards Organization. These measures were performed before surgery and 6-8 weeks afterwards. Tonal audiometry results obtained before and after surgery were compared. Bone conduction, air conduction, and air-bone gap (AB gap) thresholds at 500 Hz, 1000 Hz, 2000 Hz, and 4000 Hz were compared. Subsequently, we compared the average hearing threshold across these frequencies (PTA, pure tone average) for bone conduction, air conduction, and the air-bone gap before and after surgery. PTA was calculated over all four frequencies by adding the threshold values expressed in dB at the aforementioned frequencies and dividing by four to obtain the value used in further calculations.
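The four-frequency pure tone average and the air-bone gap described above can be computed directly from the measured thresholds; the short sketch below illustrates the arithmetic with placeholder values rather than patient data.

```python
# Sketch of the pure tone average (PTA) and air-bone gap calculation
# over the four speech frequencies. Threshold values are placeholders.
FREQS = [500, 1000, 2000, 4000]  # Hz

def pta(thresholds_db):
    """Average of the thresholds (dB) at the four frequencies."""
    return sum(thresholds_db[f] for f in FREQS) / len(FREQS)

air = {500: 55, 1000: 50, 2000: 50, 4000: 50}    # hypothetical air conduction thresholds (dB)
bone = {500: 20, 1000: 20, 2000: 25, 4000: 20}   # hypothetical bone conduction thresholds (dB)

air_pta = pta(air)
bone_pta = pta(bone)
ab_gap = air_pta - bone_pta   # air-bone gap expressed as the difference of the averages
print(f"Air PTA = {air_pta:.2f} dB, Bone PTA = {bone_pta:.2f} dB, AB gap = {ab_gap:.2f} dB")
```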
Results
A total of 32 patients (12 male and 20 female) between 21-48 years with a mean (± SD) age of 31.9 (±10.91) years underwent the operation.All patients had bilateral disease, but operations were performed only in one ear (the worse ear).The mean (± SD) follow-up period was 28.14 (± 10) weeks and patients with less than three weeks follow-up were excluded from the study.
Hearing results
The air conduction thresholds, bone conduction thresholds, and air-bone gaps (AB gaps) of all patients were evaluated at four frequencies (500, 1000, 2000, and 4000 Hz). The preoperative and postoperative hearing statuses of the patients are summarized in Table 1.
Bone Conduction Threshold
The difference in bone conduction at 0.5 kHz, 1 kHz and 2 kHz before and after surgery was statistically significant (P ≤0.001).The difference between bone conduction at 4 kHz before and after surgery was not statistically significant (P = 0.402).
Air Conduction Threshold
The differences in air conduction at all frequencies (0.5 kHz, 1 kHz, 2 kHz, 4 kHz) before and after surgery were highly statistically significant (P ≤0.001).
Air-Bone Gap (AB gap)
The differences in air-bone gaps at all frequencies (0.5 kHz, 1 kHz, 2 kHz, 4 kHz) before and after surgery were highly statistically significant (P ≤0.001).
Average
The mean thresholds of all patients were averaged over the 500, 1000, 2000, and 4000 Hz frequencies. The average preoperative bone conduction threshold was 21.53 dB, which was reduced to 16.21 dB postoperatively. The difference in bone conduction thresholds was statistically significant, P = 0.009 (before X = 21.53; after X = 16.21). The difference in air conduction PTA across all four frequencies before and after surgery was highly statistically significant, P ≤0.001 (before X = 51.13; after X = 23.91).
The difference in air-bone gap PTA across all four frequencies before and after surgery was highly statistically significant, P ≤0.001 (before X = 29.03; after X = 8.51) (Table 2).

Discussion
Various surgical techniques have been used to treat otosclerosis, but stapedotomy remains the method of choice.18 In our study, all patients were surgically treated by small-fenestra stapedotomy of the stapes footplate. The main goal of surgical techniques has been to improve patients' hearing function and to eliminate the accompanying symptoms of the disease, such as tinnitus and dizziness. Most studies dealing with the outcomes of surgical treatment of otosclerosis depend on audiological testing before and after surgery, presuming that audiological measurements reflect the patient's subjective experience regarding the treatment outcome.
Bone conduction
In the current study, the average preoperative bone conduction value at 500 Hz was 18.75 dB, compared with 15.31 dB postoperatively; this difference was statistically significant. In addition, the change in the average bone conduction value before and after surgery also reached statistical significance at 1000 Hz and 2000 Hz. However, at 4000 Hz the mean values measured before and after surgery were 17.19 dB and 17.66 dB, respectively, showing no statistically significant difference. Thus, in our study there was a statistically significant difference in the mean pre- and postoperative bone conduction values at 500, 1000, and 2000 Hz, but the difference was not statistically significant at 4000 Hz. Similar to our data, several studies have noted that in otosclerotic patients BC thresholds are better in the postoperative than in the preoperative period; however, the degree of BC improvement differs between studies. Awengen et al.24 noticed an improvement in BC after stapedectomy at 500, 1,000, and 2,000 Hz but a deterioration at 4,000 Hz. Arnoldner et al.25 showed some BC improvement in conventional or laser-assisted surgery, as did Aarnisalo et al.26,29 It is important to know that the BC threshold not only depends on the direct transmission of the vibration to the inner ear fluids through the skull, but is also related to the relative movement of the footplate in the oval window due to the different inertia of the ossicular chain and the otic capsule.30,31 The mechanical process by which the energy of the sound waves entering the external canal and middle ear is utilized is known as the Carhart phenomenon.32 In patients with otosclerosis, this energy is not utilized properly, as there is a reduction of ossicular chain fluctuations caused by stapes fixation. This eventually leads to difficulties in the transmission of stimuli to the inner ear, mostly at a frequency of 2000 Hz.32 Thus, the major drop in bone conduction is observed at this frequency.
Air conduction
The average air conduction at 500 Hz before surgery was 55.78 dB, and the average air conduction at this frequency improved after surgery. In another study,33 the authors reported an improvement of the air-bone gap of 30.8 dB at 500 Hz, whereas the scores at 1000 Hz and 2000 Hz were 25.5 dB and 14.3 dB, respectively. However, at 4000 Hz an increase of only 2.9 dB was demonstrated, and this was not considered statistically significant. Contrary to the aforementioned study, our results at 4000 Hz showed statistical significance, as the improvement of the average air-bone gap was 13.34 dB.
Average (all frequencies)
The improvement in the PTA value of air conduction before and after surgery was statistically significant. The improvement in the average air-bone gap before and after surgery was 20.52 dB.
Conclusion
This study concludes that the significant reduction in the air-bone gap, regardless of the change in bone conduction threshold, is a good indicator of the success of the stapedotomy surgery in patients with otosclerosis.
Competing interests
The authors declare that they have no competing interests.
The goals of otosclerosis surgery are the closure of the air-bone gap and producing the capability of hearing without amplification. Success is defined as closure of the air-bone gap to less than 10 dB.18 In our study, closure of the air-bone gap to less than 10 dB was obtained in all of our patients, as the average air-bone gap after surgery was 8.51 dB. In 2006, Vincent et al.23 performed a prospective study over a period of 14 years; the air-bone gap was ≤10 dB in 94.2% of cases. In 2013, Oeken et al.34 published a study of 256 cases of stapedotomy in which the postoperative air-bone gap was ≤10 dB in 86%. In 2013, Ataide et al.35 observed the same result in 75.8% of patients undergoing stapedotomy. In their study of otosclerosis, Sargent et al.11 reported closure of the air-bone gap to within 10 dB in 90% of patients who had undergone otosclerosis surgery. In the research of Rauka and Halik36 in 2005, which compared the PTA air-bone gap before and after surgery, a gap of 10 dB or less was observed in 85.19% of patients. Therefore, the audiometric results obtained in the present study are consistent with those in the literature. It has been argued, however, that the relation between the average air-bone gap values before and after surgery cannot be considered a reliable indicator of the success of the surgical procedure, especially if there is a decrease in bone conduction during the postoperative period.8 In the aforementioned study, no significant decrease in bone conduction was observed after surgery; thus, the average values for the air-bone gap were considered statistically significant.
Unlike the above study, our research showed a significant reduction of the bone conduction after surgery, except at 4000 Hz. Also, as indicated in the earlier section, a reduction in bone conduction thresholds after surgery was also reported by Awengen et al.,24 Arnoldner et al.,25 Aarnisalo et al.,26 and Moscillo et al.27
Figure 1: Case examples of pure tone audiograms before and after surgery (three case examples shown).
Table 1: Preoperative and postoperative audiometric results among 32 patients.
Table 2: The pure tone averages of bone conduction, air conduction, and air-bone gap before and after surgery.
|
2019-05-12T13:27:46.435Z
|
2019-04-23T00:00:00.000
|
{
"year": 2019,
"sha1": "5809bbffeb4a8a115f3d48047a3afe967cb150f8",
"oa_license": "CCBYNCSA",
"oa_url": "https://zjms.hmu.edu.krd/index.php/zjms/article/download/632/570",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5809bbffeb4a8a115f3d48047a3afe967cb150f8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
119080338
|
pes2o/s2orc
|
v3-fos-license
|
Where is the Higgs?
I discuss the theoretical uncertainties in the indirect Higgs mass determination. I present the probability density function for the Higgs mass obtained by combining the information from precision measurements with the results from the direct search experiments carried out at LEP. The probability that the Higgs weighs less than 116 GeV comes out to be around 35%, while the 95% upper limit is located around 210-230 GeV.
Introduction
The last months of the year 2000 saw great excitement in the physics community because of possible evidence at LEP for a Higgs boson with a mass M_H ≈ 115 GeV. Unfortunately, the shutdown of LEP will not allow additional information on the Higgs to be collected in the near future. Thus, it seems a good moment to try to review what we (do not) know about the Higgs.
On one side we have the impressive amount of data collected at LEP, SLC, and the Tevatron that allows us to probe the quantum structure of the Standard Model (SM), thereby providing indirect information about the Higgs mass. On the other we have the result of the direct searches performed at LEP, with the excess of events reported in the combined analysis of the four collaborations. The virtual Higgs effects are usually analyzed through a χ² fit to the various precision observables that allows a 95% Confidence Level (C.L.) upper bound to be derived. The outcome of the searches used to be reported as a combined 95% C.L. lower bound, while today it is more appropriately described by the likelihood of the experiments. Given these two pieces of information, it is often said that one of the greatest achievements of LEP has been to pin down the Higgs mass between 113 GeV (from the 95% C.L. lower bound of the direct searches) and 170 GeV (from the 95% C.L. upper bound of the global fit to electroweak data). What I would like to discuss in this talk is how confident we are in the fact that 113 ≤ M_H ≤ 170 GeV, addressing two aspects of this problem: i) how solid is the 170 GeV 95% C.L. upper bound? ii) What is the right way to combine the information from the precision measurements with that from the searches to answer the simple question: "what is the probability that the Higgs mass is between, for example, 113 and 170 GeV?"

2 Theoretical uncertainties in the indirect Higgs mass determination

As is well known, the vector boson self-energies are sensitive to virtual Higgs effects, and therefore the value of the Higgs mass affects precision measurements like, for example, the ones made at the peak of the Z. However, the Higgs dependence of the relevant electroweak corrections is very mild, just logarithmic. To appreciate how much this logarithmic behavior makes it hard to constrain the Higgs, let me consider a simple example. Let us take the effective electroweak mixing angle, sin²θ^lept_eff ≡ s²_eff, which is the most important quantity in the determination of M_H, and write it as in Eq. (1). In Eq. (1) I identify the l.h.s. with the experimental result s²_eff = (s²_eff)₀ ± σ(s²_eff), while on the r.h.s. the δc_i represent the theoretical uncertainty in the corresponding coefficients, connected to the fact that we have computed the c_i in perturbation theory through a certain order in the perturbative series and therefore do not know their exact values because of higher order contributions. From Eq. (1) one obtains Eq. (2), where y₀ is the value corresponding to δc₁ = δc₂ = σ = 0. To see the effect of Δ_th in extracting M_H, I put σ = 0 and take the values of Eq. (3), where s² ∼ 0.23 and c² = 1 − s². In Eq. (3) I estimate c₂ through the leading Higgs behavior of the correction Δr relevant for s²_eff [1], while for Δ_th I take the value estimated in the 1995 CERN report on 'Precision calculations for the Z resonance' [2], which was supposed to represent the uncertainty due to next-to-leading two-loop electroweak effects. Inserting the values of Eq. (3) into Eq. (2) yields y ∼ 1.29 y₀. We see that a theoretical uncertainty coming from unknown two-loop contributions (which are supposed to be not even the dominant part) makes an error in the indirect determination of the Higgs mass of 29%! Eq. (2) tells us an important thing, namely that the error on y depends on its central value (y₀). This implies that shrinking the uncertainty in ln y does not always reduce the uncertainty on y.
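A rough numerical check of this argument takes only a few lines of code: if the observable depends on the Higgs mass only through a term c₂ ln y, a shift δ in the prediction translates into a multiplicative factor exp(δ/c₂) on y. The coefficient and shift used below are illustrative placeholders, not the values of Eq. (3).

```python
# Illustration of how a small theoretical shift in a logarithmic observable
# propagates into a multiplicative error on y = M_H / M_Z.
# c2 and delta_th below are placeholder numbers, not the values of Eq. (3).
import math

c2 = 5.0e-4        # assumed coefficient of the ln(y) term in s^2_eff
delta_th = 1.3e-4  # assumed theoretical shift of the prediction

factor = math.exp(delta_th / c2)   # y -> y0 * exp(delta_th / c2)
print(f"y shifts by a factor {factor:.2f} (i.e. about {100*(factor-1):.0f}% on M_H)")
```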
Such a reduction holds, however, only if the central value does not change, and this is not always the case (as we will see later). This consideration can be put in a more formal way by noticing that if the logarithm of a quantity, A ≡ ln y, is normally distributed with mean μ_A and standard deviation σ_A, then the quantity itself is distributed according to a lognormal (see, e.g., [3] for the properties of this distribution) whose standard deviation, σ(y) = exp(μ_A + σ_A²/2) [exp(σ_A²) − 1]^(1/2), is a combination of the expected value and standard deviation of its logarithm, and therefore compensating effects can happen.
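A quick numerical check of this compensation, using the standard lognormal relations, is sketched below: the spread of y depends on both the mean and the width of ln y, so shrinking the width does not by itself guarantee a smaller spread if the central value moves. The numbers are arbitrary and chosen only to exhibit the effect.

```python
# Standard deviation of a lognormal variable y when A = ln(y) ~ N(mu, sigma):
#   sd(y) = exp(mu + sigma^2/2) * sqrt(exp(sigma^2) - 1)
# Numbers below are arbitrary, chosen only to show the compensation effect.
import math

def lognormal_sd(mu, sigma):
    return math.exp(mu + 0.5 * sigma**2) * math.sqrt(math.exp(sigma**2) - 1.0)

print(lognormal_sd(mu=0.0, sigma=0.5))  # wider ln(y), lower central value
print(lognormal_sd(mu=0.4, sigma=0.4))  # narrower ln(y), higher central value: sd(y) is larger
```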
The above example clearly tells us that to extract accurate indirect information on the Higgs one needs both very precise experiments and a very good control of the theory side. This brings in the issue of what error we can associate to our theoretical predictions. They are affected by uncertainties coming from two different sources: one that is called parametric and it is connected to the error in the experimental inputs used in our predictions. The second one is called intrinsic and it is related to the fact that our knowledge of the perturbative series is always limited, usually to the first few terms. Concerning parametric uncertainties, α(0), G µ and M Z are very well measured, while the top mass, M t , and the strong coupling constant, α s , are not so precisely known. However, the scale of the weak interactions is given by the mass of the intermediate vector bosons, so what actually matters in our predictions is not α(0) but α(M Z ). The latter contains the hadronic contribution to the photon vacuum polarization, (∆α) h , that cannot be evaluated in perturbation theory. Fortunately, one can use a dispersion relation to relate it to the experimental data on the cross section for e + e − annihilation into hadrons. In the recent years this subject has received a lot of attention with the appearance of several new evaluations that have followed two main streams. On one path there are the most phenomenological (ph.) analyses that rely on the use of all the available experimental data on the hadron production in e + e − annihilation and on perturbative QCD (pQCD) for the high energy tail (E ≥ 40 GeV) of the dispersion integral [4,5]. On the other the so called "theory driven" (t.d.) analyses that advocate the use of pQCD down to energy scale of the order of tau mass, supplemented by the use of the experimental data in regions, like, for example, the threshold for the charmed mesons, where pQCD is not applicable [6,7,8]. I am not going to discuss the differences in the various analyses (see Fred Jegerlehner's talk [9]) but I would like to point out few facts: i) all results are compatible with each other. The choice of one value instead of another is just a matter of taste (or friendship). ii) The t.d. results have a smaller error with respect to the ph. ones but a lower central value. iii) I can take two perfectly compatible numbers, one from the ph. analyses like (∆α) h = 0.02804 ± 0.00065 [4] and one from the t.d. ones like (∆α) h = 0.02761 ± 0.00022 [8] and get a difference in the 95% C.L. upper bound for the Higgs mass O(50 GeV). Just to mention, it is the t.d. value, that has a smaller error, that gives the higher upper bound.
I would like to emphasize that this uncertainty has nothing to do with the blue band in the famous ∆χ 2 vs. M H LEPEWWG plot. There, the blue band represents, for a chosen value of (∆α) h , the intrinsic uncertainty. There is no way to rigorously define the intrinsic uncertainty. The best that can be done is to try to estimate it by comparing the output of the two codes TOPAZ0 [10] and ZFITTER [11], that now include up to two-loop next-toleading terms, when they are run enforcing the several built in options for resumming known effects. These options are supposed to mimic the size of unknown higher order terms and the numerical spread in the outputs of the two codes can be taken just as an order of magnitude of the unknown higher order contributions.
3 Higgs mass inference from precision measurements

I am now going to discuss what we can learn about the Higgs from precision measurements. In the spirit of the question raised in the Introduction, I am not going to perform the standard χ² analysis, although clearly what I present will be related to it; instead, following a Bayesian approach, I am going to construct f(m_H | ind.), the p.d.f. of the Higgs mass conditioned by this indirect information under the assumption of the validity of the SM. I do this employing the three observables s²_eff, M_W, and Γ_ℓ. These quantities are the most sensitive to the Higgs mass and are also very accurately measured. The most convenient way to approach the problem is to make use of the simple parameterization proposed in Ref. [12] and updated in Ref. [13], where s²_eff, M_W, and Γ_ℓ are written as functions of M_H, M_t, α_s, and the hadronic contribution to the running of the electromagnetic coupling, as in Eqs. (4)-(6). In the above equations, A_1, Y, and Z are the experimental determinations of the corresponding quantities in Eqs. (4), (5), and (6), respectively, all three variables being described by Gaussian p.d.f.'s. The A_1, Y, and Z determinations are clearly correlated; therefore one has to build a covariance matrix. This can easily be done because formulae (4)-(6) are linear in the common terms X ≡ {M_t, α_s(M_Z), (Δα)_h}. The likelihood of these indirect measurements Θ ≡ {A_1, Y, Z} is then a three-dimensional correlated normal with the corresponding covariance matrix.

4 Including the constraint from the direct search

Given f(m_H | ind.), a natural question to ask is how this p.d.f. should be modified in order to take into account the knowledge that the Higgs boson has not been observed (or that there is some indication for the Higgs) at LEP.1
To answer this question let me discuss first an ideal case. I consider a search for Higgs production in association with a particle of negligible width in an experimental situation of "infinite" luminosity, perfect efficiency and no background whose outcome was no candidate. In this situation we are sure that all mass values below a sharp kinematical limit M K are excluded. This implies that: a) the p.d.f. for M H must vanish below M K ; b) above M K the relative probabilities cannot change, because there is no sensitivity in this region, and then the experimental results cannot give information over there. For example, if M K is 110 GeV, then f (200 GeV)/f (120 GeV) must remain constant before and after the new piece of information is included.
In this ideal case we have then where the integral at denominator is just a normalization coefficient. More formally, this result can be obtained making explicit use of the Bayes' theorem. Applied to our problem, the theorem can be expressed as follows (apart from a normalization constant): where f (dir | m H ) is the so called likelihood. In the idealized example we are considering now, f (dir | m H ) can be expressed in terms of the probability of observing zero candidates in an experiment sensitive up to a M K mass for a given value m H , or In fact, we would expect an "infinite" number of events if M H were below the kinematical limit. Therefore the probability of observing nothing should be zero. Instead, for M H above M K , the condition of vanishing production cross section and no background can only yield no candidates. Consider now a real life situation. In this case the transition between Higgs mass values which are impossible to those which are possible is not so sharp. In fact because of physical reasons (such as threshold effects and background) and experimental reasons (such as luminosity and efficiency) we cannot be really sure about excluding values close to the kinematical limit, nevertheless the ones very far from M K are ruled out. Furthermore, the kinematical limit is in general not sharp; at LEP, for example, the large total width of the Z • plays an important role. Thus, in a real life situation we expect the ideal step function likelihood of Eq. (12) to be replaced by a smooth curve which goes to zero for low masses. Concerning, instead, the region of no experimental sensitivity, M H ∼ > M K ef f , the likelihood is expected to go to a value independent on the Higgs mass that however is different from that of the ideal case, i.e. 1, because of the presence of the background.
In order to combine the various pieces of information easily it is convenient to replace the likelihood by a function that goes to 1 where the experimental sensitivity is lost [17,18]. Because constant factors do not play any role in the Bayes' theorem this can be achieved by dividing the likelihood by its value calculated for very large Higgs mass values where no signal is expected, i.e. the case of pure background. This likelihood ratio, R, can be seen as the counterpart, in the case of a real experiment, of the step function of Eq. (12). Therefore, the Higgs mass p.d.f. that takes into account both direct search and precision measurement results can be written as In Eq. (13) R, namely the information from the direct searches, acts as a shape distortion function of f (m H | ind.). As long as R(m H ) is 1, the shape (and therefore the relative probabilities in that region) remains unchanged, while R(m H ) → 0 indicates regions where the p.d.f. should vanish. One should notice that R(m H ) can also assume values larger than 1 for Higgs mass values below the kinematical limit. This situation corresponds to a number of observed candidate events larger than the expected background. In this case the role played by R(m H ) is to stretch f (m H | ind.) below the effective kinematical limit and this might even prompt a claim for a discovery if R becomes sufficiently large for the probability of M H in that region to get very close to 1.
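A schematic implementation of the combination in Eq. (13) is sketched below: the indirect p.d.f. is multiplied bin by bin by the ratio R and renormalized. Both the indirect p.d.f. and R are toy placeholder shapes here, not the actual electroweak fit or the LEP Higgs Working Group curves.

```python
# Sketch of Eq. (13): combine the indirect-information p.d.f. with the
# direct-search likelihood ratio R and renormalize. All inputs are toy shapes.
import numpy as np

m_h = np.linspace(50.0, 400.0, 701)                        # Higgs mass grid (GeV)

f_ind = np.exp(-0.5 * (np.log(m_h / 100.0) / 0.5) ** 2)    # toy indirect p.d.f.
f_ind /= np.trapz(f_ind, m_h)

R = np.where(m_h < 113.0, 0.01, 1.0)                       # toy likelihood ratio from direct searches
R = R + 3.0 * np.exp(-0.5 * ((m_h - 115.0) / 1.0) ** 2)    # toy excess near 115 GeV

f_comb = f_ind * R
f_comb /= np.trapz(f_comb, m_h)                            # normalize the combined p.d.f.

mask = m_h <= 116.0
prob_below_116 = np.trapz(f_comb[mask], m_h[mask])
print(f"P(M_H < 116 GeV) = {prob_below_116:.2f}  (toy numbers only)")
```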
Results
The experimental inputs I use to construct f (m H | ind.) are [19]: s 2 ef f = 0.23146 ± 0.00017, M W = 80.419 ± 0.038 GeV, Γ ℓ = 83.99 ± 0.10 MeV, M t = 174.3 ± 5.9 GeV, α s (M Z ) = 0.119 ± 0.003. Concerning (∆α) h , as I said we have many possible choices. I am going to present my results using two different values of (∆α) h , one as representative of the ph. analyses and the other for the t.d. evaluations. For the ph. case I take the standard value (∆α) EJ h = 0.02804 ± 0.00065 [4] while for the t.d. analyses I choose the one that has the smallest uncertainty, i.e. (∆α) DH h = 0.02770± 0.00016 [7]. The intrinsic uncertainty is taken into account by averaging different inferences (see Ref. [17]). The values of the R function that enters in Eq. (13) has been provided by the LEP Higgs Working Group [20]: they take into account all Higgs searches by the four LEP collaborations. Table 1 summarizes the result of my analysis. The shape of the p.d.f. with and without the inclusion of the direct search information is presented in Fig. 1
Conclusions
The analysis that I have presented clearly shows that a heavy Higgs scenario is highly disfavored, given an O(90%) probability that the Higgs weighs less than 200 GeV. However, it should be said that the distribution has quite a long tail: above 300 GeV there is still a residual O(1%) probability. The excess in the data recorded by the LEP collaborations is reflected in the analysis by the spike in the distribution around M_H ≈ 115 GeV. It may be of some interest to know that, according to my analysis, the probability that the Higgs mass is below 116 GeV is 37% if we use (Δα)^EJ_h, and 38% in the case of (Δα)^DH_h. Let me say clearly that all the results I have presented are derived under the assumption of the validity of the SM and rely on the experimental inputs I have used. In particular, I have used for s²_eff the combined LEP+SLD value, although it is well known that the two most precise determinations of it are not in very good agreement. Just to see how much my conclusions rely on the combined s²_eff value, I have redone the analysis discarding the SLD result, i.e. using the LEP value for the effective sine,
|
2019-04-14T02:32:24.724Z
|
2001-02-12T00:00:00.000
|
{
"year": 2001,
"sha1": "1b069060280c79ff8c4bb03152b1642b96abc843",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-ph/0102137",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "1b069060280c79ff8c4bb03152b1642b96abc843",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
270417313
|
pes2o/s2orc
|
v3-fos-license
|
Evaluating the Impact of Emergency Department Length of Stay in a Military Training Hospital Following the Implementation of a Standardized Paging System
Emergency department (ED) lengths of stay (LOS) may be unnecessarily extended by inefficient consulting processes. Delays in initiating consultations, returning calls, consultant evaluation of patients, and communication of recommendations can contribute to potentially avoidable increases in LOS. Prolonged ED LOS has been shown to increase patient morbidity and mortality and to decrease patient satisfaction. We created a standardized procedure for ED-initiated consultations, with the goal of reducing the time to initial consultant callback, time to admission, and total ED LOS. Following our intervention, time to consultant callback was decreased; however, there was no reduction in total ED LOS for admitted patients.
Introduction
Emergency departments (EDs) are often challenged with managing long wait times and prolonged lengths of stay (LOS), and with ED volumes having returned to pre-COVID-19 pandemic levels, overcrowding is likely to increase [1].ED LOS may be further compounded by system inefficiencies and delays related to a specialist or admitting service consultation, particularly if required for definitive determination of patient disposition.
Currently, many institutions, including academic and community facilities in the United States, utilize paging services to initiate ED consultations.These services include but are not limited to traditional pages to pagers, encrypted messages in the electronic health record (EHR) sent directly to the consultant, or callback numbers sent to consultants' personal cell phones.Delays in communication may occur related to pager malfunction, network downtime, or incorrectly listed or dialed pager numbers.On-call consultants may be busy with more critical tasks at the time a page is received and be unable to return the call promptly.Similarly, ED staff may not be present to receive the return phone call while tending to other patients in the department.Conceivably, such factors may be additive in their effect on throughput efficiency and LOS.Brick et al. demonstrated this cumulative effect in a tertiary ED, in which specialty consultation accounted for 33% of LOS for admitted patients and 54% of LOS for discharged patients [2].Our military academic institution in the United States utilizes a traditional paging system, in which our medical support assistant (MSA) pages the consultant with our emergency department callback number in order to initiate a consultation.As this is an academic institution, both resident and attending physicians practice in this facility.
LOS can be impacted by additional factors, such as hospital bed availability, coordination of nursing assignments, and timing of admission orders placed by consultants.We specifically focused on our quality improvement project, with the intention of demonstrating that decreasing consultation time could lead to reductions in ED LOS.Shen et al. demonstrated that automation of certain ED processes, including consultation, could decrease LOS despite an increase in ED patient volume [3].Furthermore, they demonstrated that the lack of a standardized consultation process led to a variation of up to two hours in patient disposition times, when corrected for patient load [3].Systematic reviews have identified standardization of paging workflows as effective strategies to decrease ED LOS [4,5] with median reductions ranging from 36 to 52 minutes in individual studies [6,7].
In January 2022, our facility implemented MHS Genesis, the new EHR of the Defense Health Agency.This EHR allows for improved tracking of ED throughput metrics and provides opportunities to increase departmental efficiency and decrease ED LOS for admitted patients.Using tools available in MHS Genesis, we sought to implement a standardized paging process in hopes of improving the efficiency of our consulting process and decreasing one of the potential increases in LOS for patients in our department.
Materials And Methods
We defined the steps of our intervention in a written standard operating procedure (SOP) (Appendix).The SOP was implemented in order to define and set timeliness standards for contact and communication with consultants for patients in the ED.Following the new SOP, when a consult is needed, a provider places an order in the EHR to initiate the consult via paging service.
If the initial page is not returned promptly, the MSA follows a predetermined timeline for repeat pages, as displayed in Table 1, and further described in the SOP (Appendix).The MSA is empowered to take action independently, thus not requiring the ED provider to prompt or request successive pages.MSAs are civilian contractors who are specifically trained to operate our consultant SOP, in addition to coordinating interfacility transfers and ensuring the successful uploading of supplemental medical charts and documents.If MSAs are not available, the responsibility to order the consultant order in MHS Genesis, to document the time the page was placed, and to send the initial consultant page, falls on the emergency medicine provider, either the attending or resident physician.
TABLE 1: Predetermined timeline for repeat pages (columns: Clock and Step).
Prior to the implementation of MHS Genesis, our department could track total ED length of stay using our legacy EHR systems, but we were unable to accurately stratify pertinent metrics related to paging and consult timing. Consequently, a pre-intervention comparison could only be made for total ED LOS, when comparing MHS Genesis data to that obtained from legacy systems.
MHS Genesis allowed for greater granularity in tracking consult process efficiency, as our workflows in MHS Genesis dictate that an order be placed at three key time points: the time a consult is placed, the time the ED provider speaks with the consultant (a "request for admit" order), and the admit order placed by the consultant.By comparing the time intervals between these respective orders before and after the implementation of our SOP, we were able to track and compare differences at each stage, in addition to ED LOS.
ED throughput data were analyzed and compared across several time periods, pre-and post intervention.For historical comparison, the average ED LOS data for admitted patients were compiled for the year and month prior to our intervention, using data from our legacy EHR and MHS Genesis.In the week prior to the implementation of our paging SOP (May 9-15, 2022), ED providers were instructed to order consults using EHR order entry, but other aspects of the SOP were not yet implemented.On May 16, our SOP was initiated.ED LOS and consultant-specific time interval data were collected from May 9, 2022, through July 31, 2022.
During the study period, 17,451 patients were seen in our ED.Specifically, 15,980 patients were seen after the implementation of the system, and 1,471 were seen during the week used as the comparison group.
Given the objectives of this comparison, data points were pulled for admitted patients only.This left 2,778 patients admitted over the study time frame, compared to 244 in the comparison group.Finally, only charts with proper consultation and requests for admission orders could be included to allow for proper comparison, including charts where orders were placed out of sequence.Therefore, 760 patients and 53 patients in the comparison group met the final inclusion criteria.These data are visualized in Figure 1.
FIGURE 1: Final Inclusion Criteria for Data Analysis
After entering the data into Excel for comparison and analysis, duplicate orders were first removed using conditional formatting to identify repeat patient identifiers.Once the duplicate orders were removed, patients were organized by patient identifier number (PIN).
We used tools intrinsic to MHS Genesis to specifically analyze patient arrival time: "consult to medicine" or "consult to surgery" orders, "request for admit" orders, and patient departure time.Each order was matched with the PIN used for that visit.A comparison of the differences in these order times was used to determine the amount of time required for each step in the admission process.
The difference between patient arrival and departure times was used to represent the total ED LOS. The difference between patient arrival time and the order for consultation served as a surrogate for the time required for the patient to be seen by an ED provider and for the admission decision to be made (time to disposition). The interval between the consultation order and the patient's departure time represented the time required for the consultant to see the patient, place admission orders, and for the patient to leave the department. Averages of these time intervals were compared for the pre- and post-intervention time periods. Patient arrival time did not discriminate between total time in the waiting room versus patients who were immediately roomed based on acuity.
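The interval arithmetic described above is straightforward to reproduce; the sketch below shows one way it could be done in Python with pandas, using hypothetical column names and file paths rather than the actual MHS Genesis export fields.

```python
# Sketch of the throughput-interval calculation. Column names and the CSV
# file are hypothetical placeholders, not the actual MHS Genesis export.
import pandas as pd

df = pd.read_csv("ed_throughput.csv",
                 parse_dates=["arrival", "consult_order", "request_for_admit", "departure"])

df = df.drop_duplicates(subset="pin")  # keep one record per patient identifier number

df["total_los_min"] = (df["departure"] - df["arrival"]).dt.total_seconds() / 60
df["time_to_dispo_min"] = (df["consult_order"] - df["arrival"]).dt.total_seconds() / 60
df["consult_to_departure_min"] = (df["departure"] - df["consult_order"]).dt.total_seconds() / 60

# Exclude charts where the admission request preceded the consult order
df = df[df["request_for_admit"] >= df["consult_order"]]

print(df[["total_los_min", "time_to_dispo_min", "consult_to_departure_min"]].mean())
```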
The launch of our new paging system necessitated the dutiful placement of consult orders by the providers and the disciplined entry of subsequent orders following consultations.The system's complexity introduced multiple points where data could potentially be misplaced.Before our evaluation, it was common practice for ED providers to determine a patient's need for admission and issue an admission order immediately upon ED presentation.We emphasized the critical need to delay the admission order until after consulting with a specialist, aiming to capture more accurate data on response times.Recognizing the risk of data loss in this approach, we rigorously monitored the sequence of consultation and admission orders.To maintain the integrity of our data, we excluded any records where the admission order preceded the consultation order, thus eliminating the possibility of distorted results.Further, we did not include patient encounters in our study that did not have reported arrival and departure times, consultation initiation and completion, consultation to bedside time, or time from admission from consultation evaluation.
Statistical analysis was performed with the assistance of our hospital statistician, and the outcomes were compared using the Wilcoxon-Mann-Whitney two-sample rank sum test.The study was granted exemption by our institutional review board as a quality improvement study.
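As a minimal sketch of the outcome comparison, the Wilcoxon-Mann-Whitney two-sample rank sum test can be run as shown below; the two lists are invented placeholder LOS values, not study data.

```python
# Sketch of the Wilcoxon-Mann-Whitney two-sample rank sum test used to
# compare pre- and post-intervention periods. Values are placeholders.
from scipy.stats import mannwhitneyu

pre_los = [410, 380, 452, 399, 505, 437, 460]    # hypothetical LOS values (minutes)
post_los = [395, 360, 470, 405, 488, 420, 445]   # hypothetical LOS values (minutes)

u_stat, p_value = mannwhitneyu(pre_los, post_los, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3f}")
```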
Results
Following the implementation of our paging protocol, there was a significant improvement in time to consultation callback after page placement. Prior to implementation, an average of 55 minutes and 23 seconds elapsed before callback. Following the initiation of our SOP, the return-call time decreased to an average of 20 minutes and 25 seconds in the post-intervention period. This represented an average improvement of 34 minutes and 57 seconds following the execution of the protocol.
Using the Wilcoxon-Mann-Whitney p values, there was a statistically significant improvement in LOS from May to June 1 (p value < 0.001), as well as from May to July (p value < 0.001). More specifically, from May 9 to May 23 there was an initial statistically significant improvement in LOS (p value = 0.007), and from May 16 to 23 (when the protocol was initiated) there was an even more significant improvement (p value < 0.001). Chi-square approximation demonstrated a probability of less than 0.001 that this was due simply to chance.
Furthermore, there was a statistically significant improvement in time to admission order placement from May to June 1 (p value < 0.001), as well as from May to July (p value < 0.001). More specifically, from May 9 to May 23 there was an initial statistically significant improvement in time to admission order placement (p value = 0.0015). Chi-square approximation demonstrated a probability of 0.0028 that this was due simply to chance.
Looking at the time from consultant callback to the placing of admission orders, there was a statistically significant improvement in time to inpatient admission order placement from May to July (p value < 0.001). More specifically, from May 16 to May 23 there was a statistically significant improvement in time from consultant callback to admission order placement (p value = 0.0021). Chi-square approximation demonstrated a probability of 0.001 that this was due simply to chance. The data did not demonstrate a statistically significant improvement in time from admission order placement to admission to the hospital.
Despite the statistically significant decrease in time to returned consultant calls, other time intervals did not similarly improve.The average total emergency department LOS increased by a range of 10-20%, with one outlier week from May 23 to 30 that demonstrated nearly a 45-minute decrease from other recorded times.Similarly, the average time for consultant evaluation of the patient also increased during protocol implementation, with the longest being approximately 45 minutes.Finally, time from returned page to admission also fluctuated, with the same outlier week from May 23 to 30.
Average times both before and after implementation of the paging system are displayed in Table 2 and Figure 2.
Discussion
The results of the standardized paging protocol significantly reduced time from consultant page to callback, ultimately leading to patients being evaluated more quickly by consultants; however, it was not associated with significant decreases in our overall ED LOS.Data analysis helped identify additional areas where ED throughput was bottlenecking.Other throughput measures that have previously been shown to reduce ED crowding, such as a provider in triage (PIT) and a fast track section, were previously implemented in our ED and remained unchanged throughout the duration of the study period [4].Our paging system demonstrated that a low-cost intervention, such as placing consultation orders, can lead to reduced time until consultation evaluation.Previous literature has shown that consultant decision time has a significant impact on ED LOS [2].However, despite significantly improving time to consultant page back, time to consultant evaluation, and time to order placement for admission, the overall ED LOS did not improve appreciably.This was an unexpected result, as similar processes to initiate ED consults, such as text messages, have demonstrated decreased ED LOS [7].
We suspect that this is attributable to other unmeasured variables, including additional bottlenecks downstream from the ED, such as bed availability, coordination of nursing assignments, and timing of admission orders placed by consultants.Additionally, providers in our department are often called upon to care for trauma patients, and the unpredictable arrival of such patients can interfere with the timely dispositions of non-trauma patients.We did not evaluate disposition times in association with trauma volume, nor did we stratify data by time of day.ED LOS may be longer at night or on days with heavy trauma volume, and these factors may have changed the impact of our intervention.Finally, the comparison to last year's LOS was for all patients, not just for admitted patients that were measured in this study.
We believe that the significant reduction in time to consultation and time to admission order following the implementation of our paging system was also likely impacted by increased resident buy-in, as well as the timing of the intervention occurring later in the academic year, when residents are more comfortable in their respective roles, both in the ED and in inpatient units.As time progressed through the postintervention period, users in our hospital system were becoming more familiar with the EHR, and this may account for the changes we observed.However, in early July, internal medicine residents begin their training in our facility, and the introduction of new trainees may have accounted for increases in our measured time intervals for this month, though our study is not specifically powered to this end.
An important limitation of our data is that analysis was restricted to patients who were admitted to the hospital.Specialist consultation for patients who would ultimately be discharged likely adds significant time to ED department LOS compared to those who did not receive consultation.We see this as a future area of study in this realm.Finally, in considering the implementation of this or similar paging protocols, departments should be aware of the potential downsides of such a system.If repeat pages and/or overhead pages are initiated at times when the consultant clearly cannot return the call (e.g., a surgeon who just took a critical patient to surgery), or when the ED provider is unavailable to answer a return call, this could result in unnecessary stress and potentially damage relationships with consultants and ED providers.This could lead to negative downstream throughput effects such as inpatient beds not being cleared, as demonstrated in previous studies [8].
FIGURE 2: Paging Protocol Impact on Department Times. Royal blue: total ED LOS; gray: ED provider evaluation time; light blue: time to consultant callback after first page; navy: consultant evaluation time to time of admission.
|
2024-06-13T15:21:01.469Z
|
2024-06-01T00:00:00.000
|
{
"year": 2024,
"sha1": "d2fc688c27cc691074b13e20c85ce6738dc549ab",
"oa_license": "CCBY",
"oa_url": "https://assets.cureus.com/uploads/original_article/pdf/243364/20240611-16265-1ikvuy5.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "deb308f1bbc9df27f03deedb00c381964dfe27d5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
225415515
|
pes2o/s2orc
|
v3-fos-license
|
Visible dust and asbestos: what does it suggest regarding asbestos exposures?
Courts in asbestos litigation in the past decade have increasingly required quantitation of asbestos exposures beyond arguments that the exposures exceeded de minimis or background levels. Exposure levels for those in the past were often either not measured or minimally measured. Similar Exposure Group (SEG) methods can be considered; however, i) the available data to be used for comparison may not well reflect the exposures of the individual of interest, and ii) past data/standards were taken in units of million particles per cubic foot of air (mppcf) as opposed to fibers per cubic centimeter (f/cc).1 The purpose of this work was to research these two issues to determine if exposure to visible dust created from asbestos-containing materials (ACM) (>1% asbestos) likely exceeds recommended asbestos exposure limits. This was accomplished by posing and answering the following three questions regarding asbestos exposures: 1) Was the American Conference of Governmental Industrial Hygienists' (ACGIH) 5 million particles per cubic foot (mppcf) of air threshold limit value, in effect for many years, a total dust standard? 2) Does the presence of visible dust indicate the presence of more than 5 mppcf of dust in the air? 3) Would the presence of visible asbestos-containing dust demonstrate a potential health hazard? The author conducted a review of the available literature to determine the answers to these questions. The author also extensively researched and analyzed the conversion factor or ratio between exposures in units of mppcf and f/cc. The results indicate that: i) the 5 mppcf asbestos standard was based on total dust, not just asbestos dust; ii) the presence of visible dust from ACM operations likely indicates greater than 5 mppcf; and iii) the presence of visible asbestos-containing dust likely results in levels above the American Conference of Governmental Industrial Hygienists' (ACGIH) and the Occupational Safety & Health Administration's (OSHA) standards. These results have implications for individuals performing retrospective asbestos exposure analysis.
Introduction
Traditionally, industrial hygienists and those preparing asbestos exposure analyses have simply had to demonstrate that the exposures observed exceeded background levels (i.e., ambient outdoor levels ranging from 1x10 -8 to 1x10 -4 and indoor levels ranging from 1x10 -5 to 1x10 -4 fibers/cc.) 2 Thus, the standard to meet was whether or not an individual's exposure level was above these de minimis † levels or background levels.
Courts in asbestos litigation in the past decade, along with prodding from companies and their representation, have increasingly required quantitation of asbestos exposures beyond arguments that the exposures exceeded de minimis or background levels.3,4,5,6 For example, in Bannister v. Freemans,3 the Judge ruled that "The claimant had not established, on a balance of probabilities, that the alleged exposure gave rise to a material increase in the risk…" Another Judge ruled that the exposure level could not be considered a "material increase in risk" and was therefore de minimis.4 Others have argued that asbestos exposure levels must be quantitated above de minimis levels in terms of finite increased risk or fiber-years per cubic centimeter (f-yr./cm3).5,6 Thus, the need has increasingly arisen to demonstrate that historic asbestos exposure levels would have been more than background asbestos levels.
1 President, EES Group, Inc., Pompano Beach, FL. Correspondence: spetty@eesgroup.us
† An abbreviated form of de minimis non curat lex, "the law cares not for small things" - a legal doctrine by which entities like governmental regulators and courts consider such situations to be trifling matters.
One industrial hygiene approach in this situation is to use the concept of Similar Exposure Groups (SEGs), wherein one applies exposure information from other situations that best approximate the exposure(s) of the individual of interest. Two issues arise from this approach: i) the available data to be used for comparison may not well reflect the exposures of the individual of interest, and ii) past data/standards were taken in units of million particles per cubic foot of air (mppcf), whereas newer data/standards were taken in units of fibers per cubic centimeter or fibers per milliliter (f/cc or f/mℓ). Units of f/cc and f/mℓ are equivalent, and hereafter the units of f/cc are used. The question of how a set of earlier data/standards compares to that of later data/standards must be addressed. Moreover, the conversion factor or ratio between the older data/standards and newer data/standards has varied considerably.
The purpose of this work was to research these two issues in order to determine if exposure to visible dust created from asbestos-containing materials (ACM) (>1% asbestos) likely exceeds asbestos exposure limits. This was accomplished by posing and answering the following three questions regarding asbestos exposures: 1. Was the ACGIH 5 mppcf threshold limit value (TLV), in effect for many years, a total dust standard?
2. Does the presence of visible dust indicate the presence of more than 5 mppcf of dust in the air?
3. Would the presence of visible asbestos-containing dust demonstrate a potential health hazard?
There has been some suggestion that the 5 mppcf TLV standard was intended to mean 5 mppcf of asbestos particles, not total dust particles. 7 As will be discussed, it has also been debated whether or not the presence of visible asbestos-containing dust in the workplace constitutes a health hazard. Both of those suggestions will be shown to be false; the ACGIH 5 mppcf TLV standard was a total dust standard and the presence of visible dust emanating from an asbestos-containing source is accepted in the scientific community as constituting a hazard to human health.
Method
The author conducted several literature reviews: the history of approaches used to monitor airborne levels of asbestos; historical asbestos-in-air standards and codes; and a review and analysis of conversion factors, i.e., the ratio between asbestos exposure data in units of mppcf and in units of f/cc, to determine the best conversion factor. The author also analyzed the asbestos exposure literature to determine whether exposures associated with visible dust emissions during work with ACM exceed acceptable exposure levels.

Results
ACGIH Nuisance (Total) Dust TLV Standard
Initial efforts to set allowable limits, including those for dust and asbestos, were made by the American Conference of Governmental Industrial Hygienists (ACGIH) beginning in the 1940s. It is informative to compare the ACGIH recommended levels for dust and for asbestos over time, to illustrate the changes in the units used (mppcf to mg/m3 for dusts and mppcf to f/cc for asbestos).
As can be seen, the total dust standard slowly evolved into Particulates Not Otherwise Specified (PNOS) as it was recognized that certain dust components (e.g., silica and asbestos) were more toxic than other components; by 2003, PNOS, too, was withdrawn. Casserly, representing ACGIH, noted that part of the rationale for the Nuisance (total) Dust Standard was that excessive total dust concentrations would "seriously reduce visibility" (p. 15). 10 Similarly, OSHA's Total Dust Standard is 15 mg/m3 or 50 mppcf; its synonyms include Dust (total), "inert" dusts, nuisance dusts, and Particles Not Otherwise Regulated (PNOR), and it covers all inert or nuisance dusts, whether mineral or inorganic, not listed specifically in 1910.1000. ‡
ACGIH Asbestos TLV Standard
The history and evolution of the ACGIH Asbestos TLV Standard with time is summarized below: 8

• 1946-1947: Maximum Allowable Concentration (MAC) - Time-Weighted Average (TWA) - 5 mppcf.
As detailed below, for asbestos ACGIH Standards, it has been determined that the early levels set were likely based on total dust of which asbestos was only a portion of the dust -thus actual exposure levels to asbestos that were the basis of these early standards were actually lower. In addition, to be able to compare older asbestos data/standards with later data/standards, a definitive conversion factor or ratio between information in units of mppcf and f/cc is needed. Both issues are addressed below.
The basis of, and flaws in, the 1946-1973 ACGIH Asbestos TLV Standard of 5 mppcf are well documented in a 1993 Letter to the Editor by Thomas Mancuso. 11 One of the major debates regarding the ACGIH Asbestos Standard, which derives from the work of Dreessen, 7 is whether it was based on: i) data for only asbestos, ii) data using a percentage of the total dust as asbestos, or iii) total dust measurements.
In this letter, Mancuso noted that the ACGIH 1946 Asbestos TLV of 5 mppcf was based on work by Dreessen, using midget impinger technology that measured "all dust particles seen" (fibrous and non-fibrous), not just asbestos (fibrous) particles. Mancuso concluded by stating: 11 In summary, the historical published literature from all sources, governmental and nongovernmental, including the 1986 letter by Cook himself, establish that the TLV of 5 million particles per cubic foot for asbestos for the prevention of asbestosis was for total dust, fibrous and non-fibrous, and was not based upon a percentage of asbestos […] [Emphasis added; p. 965].
Mancuso supported this opinion with the following statements, reproduced at length here from his letter: 11 …The Dreessen dust measurements for total dust was the source for the TLV for asbestos as specified by the ACGIH and was adopted by various states. The ACGIH TLV for asbestos started in 1946 and for all the succeeding years did not mention nor designate percentage of asbestos in the total dust count as was clearly and repeatedly designated for silica.
1. Fleischer et al., 12 (1946), in a large scale study of pipe covering operations in shipyards, stated: "There are no established figures for permissible or safe dustiness in pipe covering operations" but added that "In general we feel that dust counts below 5 million particles per cubic foot by Konimeter indicate good dust control" (see also Marr, 13 1964). Dreessen et al. in their study of asbestosis in the asbestos textile industry suggested 5 million particles of total dust by impinger as a threshold for that industry.
2. The [1962] ACGIH documentation on Threshold Limit Values states for asbestos: "The present threshold limit value relates to the prevention of asbestosis. It was recommended by Dreessen et al.; counts were from impinger-collected samples in ethyl alcohol and distilled water. Both fibrous and non-fibrous particles were counted, but the latter greatly predominated. While chemical analysis of collected samples of airborne dust corresponded to those of settled dust samples, it is believed that dust counts of particulates by conventional methods can be expected to give only an indirect measure of the risk of asbestosis because of the great relative importance of long fibers."

3. Schall 14 [1965] presented a critical analysis of the Dreessen et al. report of 1938, which cited an extended series of limitations. This included the basic fact that the method of sampling could not differentiate between cotton and asbestos and that the dust counts were given as averages yet the range was enormous. In that analysis, reference was made to total dust, i.e., all particles. It is important to stress that the 5 mppcf value was based upon dust counts of all particles, fibrous and particulate, asbestos or not.

4. Balzer 15 [1967], in a study relative to the TLV and asbestos, stated: "Anyone looking at the present basis for the threshold limit value (TLV) or 5 mppcf as recommended by the American Conference of Governmental Industrial Hygienists (ACGIH) in 1946 realizes that it is not based on solid evidence. The method used in setting the standard includes all dusts (both grains and fibers); and a large portion of the asbestos fibers that were collected had a diameter, when viewed at 100-X magnification, well below the resolving power of the light microscope." Again, Balzer and Cooper, 16 reporting on the same study [1968], stated: "To compare our sampling data with the present threshold limit value (TLV) of 5 million particles per cubic foot, we have taken a number of midget impinger samples along with our other methods of sampling. All the samples were counted in accordance with the standard procedures prescribed by the American Conference of Governmental Industrial Hygienists and include both grains and fibers."

5. The NIOSH 17 recommended standard for asbestos in 1972 stated in its historical background, "In the study of asbestosis conducted by Dreessen et al., midget impinger count data were used as an estimate of dust exposure. All of the dust particles seen, both grains and fibers, were counted since too few fibers were seen to give an accurate measurement. The resulting count concentration was a measure of overall dust levels rather than a specific measurement of the asbestos concentration."

6. Cook, 18 in a letter to Mancuso [1986], acknowledged the prevailing practice in prior decades of reporting total dust counts for asbestos rather than the percentage of asbestos: "In our telephone chat, you wished me to write you concerning my understanding of the inclusion of all dust particles, asbestos and others, in the results of dust counts in the 1930s and on up to the '60s when fiber counts over 5 microns in length replaced the dust count of all particles less than 10 microns in diameter. As was the practice at that time, the dust counts reported included both asbestos particles and those of other composition following the procedures of the two above cited publications. This was my practice also."
Cook further stated in the same letter, "I was interested in reviewing the copy of the Hemeon report of June 1947 for the Industrial Hygiene Foundations of America, prepared during John McMahon's managing directorship, a report that I had not been aware of even though I was rather close to the Foundation in those years." The Hemeon (unpublished) report relative to the TLV and asbestos concluded: "The information available does not permit complete assurance that 5 million is thoroughly safe nor has information been developed permitting a better estimate of safe dustiness. It is nevertheless of the greatest importance either that such assurances be sought or a new yardstick of accomplishment be found for accurately measuring any remaining hazard in the dust zone below five million for the elimination of future asbestosis depends on the degree of control effected now."

7. Merewether 19 [1938], in his extensive report on silicosis and asbestosis, emphasized the importance of the invisible dust and put into perspective any consideration of dust levels: "That is to say, that the dust particles which are invisible to the naked eye are the important ones; this leads us to the practical point that if a silica or an asbestos process produces visible dust in the air, then the invisible dust is certainly in dangerous concentration."

Within this context, Mancuso [1993] stated: "The disease itself provides the medical evidence that the nature of the exposure to asbestos was harmful, regardless of any description of limited or intermittent exposures or numerical designations in any regulations." [p. ]

Specific papers cited by Mancuso, and others, were reviewed independently by the author. Results from this review follow.
Dreessen and his coworkers, in a Public Health Service study, examined the asbestos exposure levels of 541 workers in three asbestos textile plants located in South Carolina. 7 Their study was published by the US Public Health Service as Public Health Bulletin No. 241, A Study of Asbestosis in the Asbestos Textile Industry. The PHS Bulletin suggested that maintaining total dust particulate exposure below 5 mppcf would protect most workers from asbestosis; but even here, three cases of asbestosis were observed: 7

Only three cases of asbestosis, all of them diagnosed as doubtful or borderline cases, were found to be exposed to dust concentrations of less than 5 million particles per cubic foot. (These three individuals had dust exposures of about 4 million particles per cubic foot.) Above 5 million particles per cubic foot, numerous cases of well-marked asbestosis were found. It would seem that if the dust concentration in asbestos factories could be kept below 5 million particles (the engineering section of this report has shown how this may be accomplished), new cases of asbestosis probably would not appear. [p. 177]

Now, nearly eighty years later, several flaws can be observed in reviewing Dreessen's work:

1. The dust measurements were for total dust, not just asbestos dust.
2. The percentage of dust that was asbestos varied and was not correlated with cases of disease.
3. The recommended limiting value for asbestos exposures of 5 mppcf was clearly speculative and not protective based on three cited cases of asbestosis at levels of exposure below 5 mppcf.
Merewether, in his 1938 paper entitled Dusts and Lungs with Particular Reference to Silicosis and Asbestosis, noted that all dusts are not equal, either physically or in terms of harm to the lungs and the resulting diseases: 19

For practical purposes, therefore, so far as present knowledge goes, the dusts which cause serious local effects on the lungs and which may and often do cause disablement and death, are those containing free silica and asbestos; other dusts are harmless in this respect unless, as has been said so happily, "inhaled in insulting concentrations." […] With the silica dusts the dangerous particle size range is up to 10 microns, with the lighter asbestos dust it is much greater, extending up to 200 microns [Due to the fibrous nature.] (p. xiii-xiv)

Schall also discussed the 1938 paper by Dreessen et al., stating: 14

It is not commonly appreciated that the five mppcf indicates a total count, including background dust which may vary greatly including cotton, rock dust, asbestos fibers, etc. On page 23 of the Dreessen paper it is stated "The measurement of dust particles suspended in the air of asbestos textile plants is difficult because of the presence of both asbestos and cotton fibers. The differentiation of these fibers under the microscope is not always possible, especially when the fibers are short and fine. [p. 318-319]

Schall further explained in detail why the TLV was based on total dust: the method was incapable of collecting just asbestos fibers, and collected both asbestos and cotton fibers (fibrous) along with all other particles (non-fibrous): 14

Counts were from impinger-collected samples in ethyl alcohol and distilled water. Both fibrous and non-fibrous particles were counted, but the latter greatly predominated. [p. 316]

Finally, Schall commented on the wide variability of asbestos present in the samples that served as the basis for the 5 mppcf TLV: 14

All of the sampling methods used in different parts of the world for estimating exposures to mineral dusts are empirical. None is an absolute method which will yield data from which the hygienic significant exposures can be precisely judged. [p. 9]

Lynch, Ayer, and Johnson demonstrated in their work (see Table 9 below) that not all the dust from textile operations is asbestos dust. 21 Murphy, Jr. et al., analyzing the work by Dreessen, stated: 22

In retrospect, the choice of 5 mppcf, on the basis of the data then available, was open to question; in the dust counts in textile mills, no distinction was made between cotton and asbestos fibers. [p. 1277]

NIOSH, in their criteria document entitled Criteria for a Recommended Standard - Occupational Exposure to Asbestos, commented directly on historical standards used to measure asbestos exposures: 17,23

In the past, in the United States, asbestos fibers were measured by the impinger method which included counting particles as well as asbestos fibers. [Emphasis added; p. V-1]

It should be noted that considerable research has been conducted on methods for dust sampling (see, for example, Breslin et al., 24 LeClare et al., 25 Leidel and Busch, 26 and Leidel et al. 27).

Thus, the answer to this first question regarding the basis for the ACGIH TWA-TLV for asbestos of 5 mppcf is that it was based on total dust, not just asbestos dust.

Q2: Does the presence of visible dust indicate the presence of more than 5 mppcf of dust in the air?

In the case of the asbestos dust condition, our evaluation of the exposure should be based on the knowledge that the present toxic limit for asbestos is five million particles of dust per cubic foot of air.
This is a very small concentration, so small in fact that the condition may look good even to a critical eye and still present an exposure greater than this low limit. Some indication of the amount of dust present in the air may be obtained by noting the layer of dust on nearby settling places after learning how long a time has elapsed since they were last cleaned. If only a thin layer of dust has accumulated over six months or a year and there are no visible puffs of dust escaping from the operation, it is probable that the condition is satisfactory. [Emphasis added; p. 194]

In the "Optical Properties" chapter of their 1936 (updated 1954) book Industrial Dusts, Drinker and Hatch recognized that certain levels of dust suspended in the air are invisible to the naked eye when they discussed methods to demonstrate its presence by using Tyndall lighting (directing a beam of light through a darkened room) [p. 26]. 30,31 This is the same unique lighting method used to film asbestos release incidents (e.g., Longo 32).

It must be remembered that the dust which cannot be seen by the unaided eye is the most hazardous since it is of respirable size. Dust concentrations must reach very high levels before they are readily visible in the air. The absence of a visible dust cloud does not mean that a dust free atmosphere exists. [Emphasis added; p. 14-15]

Small particles, those in the range of 1 micron diameter, behave in air much differently than do particles which are large enough to be seen. [p. 7]

A 1966 Union Carbide memo discussed asbestos dust levels at five million particles per cubic foot as being invisible to the naked eye, stating: 38

(T)his concentration of dust is generally not visible in the average work area unless a beam of light causing a Tyndall effect is present. Usually the dust concentration must be from 8 to 10 million particles per cubic foot before its presence is visible in average lighting conditions. [p. 1-2]

Thus, it is apparent from these references that dust levels at/near 5 mppcf are not visible.
Comparison of f/cc to mppcf Levels
To compare and contrast earlier total dust levels reported in units of mppcf to those later reported for asbestos in fibers per cubic centimeter (f/cc), as is often needed in epidemiology studies, a ratio between f/cc and mppcf is needed.
The consensus best approach to determining this ratio is to use paired data, where data using an impinger (for mppcf data) and a membrane filter (for f/cc data) were taken side-by-side. Fortunately, in the 1960s and 1970s, a number of such experiments were conducted and the results analyzed to determine this ratio. Much of the work in this area was completed by the Department of Health, Education and Welfare's Division of Occupational Health, Public Health Service.
From 1930 to 1975, 5,952 airborne dust samples were taken in textile facilities, mostly in South Carolina. From 1930 to 1965, samples were taken using impinger methods; from 1965 to 1971, using impinger and membrane filter methods; and from 1971 forward, membrane filter methods were used [Dement 39; see also McDonald et al. 40 and McDonald et al. 41]. The earlier impinger methods produced airborne dust results in units of mppcf, whereas the membrane filter produced results either as a mass of total dust or, when examined under a microscope, as the number of fibers per cubic centimeter or milliliter of air (f/cc or f/mℓ). The f/cc value depends on how one defines a fiber (typically with an aspect ratio, length to diameter, of 3) and on the length of fibers counted (typically all fibers, fibers >5 µm, or fibers >10 µm).
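Because the f/cc value depends on the counting rule, the same set of particles can yield quite different fiber counts. The following short sketch (written in Python purely for illustration; the particle dimensions and the helper function are hypothetical, not data or code from the studies cited) applies the common conventions (aspect ratio of at least 3, with optional minimum lengths of 5 µm or 10 µm) to one made-up list of particles.

```python
# Illustrative sketch: how the counted fiber concentration depends on the
# counting rule. The particle dimensions below are hypothetical, not study data.

def is_fiber(length_um, width_um, min_length_um=0.0, min_aspect_ratio=3.0):
    """Return True if a particle is counted as a fiber under the given rule."""
    if width_um <= 0:
        return False
    aspect_ratio = length_um / width_um
    return length_um >= min_length_um and aspect_ratio >= min_aspect_ratio

# Hypothetical particles observed in one field: (length in um, width in um)
particles = [(2.0, 0.5), (4.0, 2.0), (6.0, 0.4), (12.0, 1.0), (30.0, 0.2)]

for rule_name, min_len in [("all fibers", 0.0),
                           ("fibers > 5 um", 5.0),
                           ("fibers > 10 um", 10.0)]:
    count = sum(is_fiber(l, w, min_length_um=min_len) for l, w in particles)
    print(f"{rule_name}: {count} of {len(particles)} particles counted")
```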
Below is a literature review of past work that attempts to determine this ratio (f/cc to mppcf); that is, by what factor(s) one should multiply data taken in units of mppcf to convert the data to units of f/cc.

Ayer et al., 1965: 42

Ayer and his coauthors completed 230 paired samples at five facilities producing asbestos textiles where the midget impinger and membrane filter samples "were taken within a few centimeters of one another" [p. 277]. Results for this study, as with most studies, were typically broken down by fiber length: i) all fibers, ii) fibers >5 µm in length, and iii) fibers >10 µm in length. The authors summarized the ratios obtained for data from four of the plants as follows:
• All fibers: Ratio (to convert from mppcf to f/cc, multiply mppcf by): 10
• Fibers >5 µm in length: Ratio: 6
• Fibers >10 µm in length: Ratio: 3

When utilizing optical fiber counting methods, fibers greater than 5 µm in length with an aspect ratio of 3:1 are traditionally defined as asbestos fibers (see Dement, et al. 43), although Gibbs and Eng report that only a fraction of amosite, crocidolite, and chrysotile fibers are >5 µm. 44 Actual data for all fibers and for fibers >10 µm in length were presented in the paper and are reproduced as Tables 1 and 2. Thus, while the authors concluded that the value for the ratio (f/cc to mppcf) for all fibers should be 10, the actual overall average value was 9.4, with a range of values from 2 to 27. Again, while the authors concluded that the value for the ratio (f/cc to mppcf) for fibers >10 µm in length should be 3, the actual overall average value was 2.7, with a range of values from 1 to 8.
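As a minimal sketch of how such ratios are applied in practice, the snippet below (illustrative only; the measured dust value is hypothetical) multiplies a dust measurement in mppcf by the Ayer et al. ratios quoted above to obtain approximate fiber concentrations in f/cc under the three fiber-length definitions.

```python
# Applying the Ayer et al. (1965) conversion ratios quoted above.
# The measured value below is hypothetical and for illustration only.

AYER_RATIOS = {          # multiply mppcf by these factors to estimate f/cc
    "all fibers": 10.0,
    "fibers > 5 um": 6.0,
    "fibers > 10 um": 3.0,
}

def mppcf_to_fcc(dust_mppcf, ratio):
    """Estimate a fiber concentration (f/cc) from a total dust count (mppcf)."""
    return dust_mppcf * ratio

measured_dust = 5.0  # mppcf, hypothetical impinger result
for definition, ratio in AYER_RATIOS.items():
    fcc = mppcf_to_fcc(measured_dust, ratio)
    print(f"{measured_dust} mppcf -> {fcc:.1f} f/cc ({definition})")
```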
Lynch and Ayer, 1966: 45

Lynch and Ayer wrote a follow-up paper on paired sampling results at nine textile plants, completed by the Department of Health, Education and Welfare's Division of Occupational Health, Public Health Service from January of 1964 to June of 1965. The nine plants reportedly covered 80% of the workers (>2,500) in the industry, and the data comprised 1,896 membrane filter and 1,115 impinger samples. This time, the authors presented impinger and membrane filter data (Tables 3 and 4), but did not present the ratio of these values.
Again, membrane filter results (Table 4) were broken down as: i) all fibers, ii) fibers >5 µm in length, and iii) fibers >10 µm in length. Using the data in Tables 3 and 4, one can develop the ratios (f/cc to mppcf) for the three ranges of fiber lengths (Tables 5, 6, and 7). Based on the results from Tables 5, 6, and 7, the findings for the ratio of f/cc to mppcf are:

• All fibers: Ratio (f/cc to mppcf):
  • Overall average: 9.6
  • Overall range of values: 3.0 to 27.7
• Fibers >5 µm in length: Ratio (f/cc to mppcf):
  • Overall average: 4.3
  • Overall range of values: 1.6 to 11.6
• Fibers >10 µm in length: Ratio (f/cc to mppcf):
  • Overall average: 2.7
  • Overall range of values: 1.0 to 7.0

Although not specified in this paper, it is clear from later work that not all these sample results were paired results; 21 the ratios are slightly different when only paired results are used (see the Lynch, Ayer, and Johnson analysis below).
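The ratios in Tables 5, 6, and 7 are, in effect, element-wise divisions of the membrane filter results by the paired impinger results, followed by averaging. A minimal sketch of that calculation appears below; the paired values are made up for illustration and are not the Table 3 and Table 4 data.

```python
# Sketch of deriving conversion ratios from paired samples, as in Tables 5-7.
# The paired values below are made up; they are NOT the Table 3/4 data.

paired_samples = [
    # (impinger result in mppcf, membrane filter result in f/cc for fibers >5 um)
    (3.2, 14.0),
    (6.5, 21.0),
    (1.8, 9.5),
    (10.0, 44.0),
]

ratios = [fcc / mppcf for mppcf, fcc in paired_samples]
mean_ratio = sum(ratios) / len(ratios)

print(f"per-pair ratios (f/cc per mppcf): {[round(r, 2) for r in ratios]}")
print(f"overall average ratio: {mean_ratio:.2f}")
print(f"range: {min(ratios):.2f} to {max(ratios):.2f}")
```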
Balzer and Cooper, 1968: 16

Balzer and Cooper completed impinger and membrane filter sampling of northern California and northern Nevada insulation workers using asbestos-containing materials (ranging from 10 to 100% amosite, chrysotile or amosite/chrysotile asbestos) under six tasks (prefabrication, application, finishing, tearing out, mixing, and general). A total of 64 impinger samples and 153 membrane filter samples were taken. Samples included both area and personal samples, but the paper does not explicitly state samples were paired.
Sample results, and the ratio of f/cc to mppcf calculated, are shown in Table 8. Note that the overall mean and median ratios are 1.22 and 0.70 respectively.
Lynch, Ayer, and Johnson, 1970: 21

Lynch, Ayer, and Johnson wrote a follow-up paper on paired sampling from three industries (textile, friction, and pipe) as part of their ongoing work completed by the Department of Health, Education and Welfare's Division of Occupational Health, Public Health Service. This included additional analysis of the textile data found in their 1965 and 1966 publications reviewed earlier. A summary of their findings on the ratio of f/cc to mppcf for these three industries, using paired sampling data, is given in Table 9. Note that the textile ratios of 10.3, 5.9, and 2.7 (all fibers, >5 µm, and >10 µm, respectively) represent paired data; the values of 9.6, 4.3, and 2.7 calculated from all data (Tables 5, 6, and 7) clearly suggest that not all the data taken were paired. The authors concluded:

The preferred index of asbestos exposure is fibers longer than 5 microns counted on membrane filters at 430x phase contrast. The method of counting is convenient and practical, and fibers >5 microns constitute a direct index of asbestos exposure. [Emphasis added; p. 604]

Gibbs, 1974: 44

Gibbs wrote a paper including paired sampling of dust samples from the Quebec chrysotile industry. Area sampling was completed at nine sites in each of five mines and mills using paired midget impinger and membrane filter samplers. A total of 78 paired samples were taken. Any particle having a length to width ratio of 3:1 was counted as a fiber. Results are reproduced in Table 10.
Gibbs noted that a plot of all the impinger vs. membrane filter data did not result in an acceptable correlation coefficient. This is not unexpected, as the data in Table 10 clearly suggest widely variable levels for the same operation between plants. Moreover, the basis for the fiber counts was simply any particle with an aspect ratio (L:W) of 3:1, rather than particle length (e.g., >5 µm). Although the data are paired, the method for counting fibers, without regard to fiber length, is inconsistent with the other work cited.

Dement et al., 1982: 43

Dement et al. reported on ratios of f/cc to mppcf. While the raw data were not explicitly referenced or provided, they cited U.S. Public Health Service data from 1965 (120 paired sets) and from 1968-1971 (986 concurrent sets) as the basis for the following conclusions:

Paired Data (1965) and Concurrent Data (1968 to 1971):
• Textile industry - all operations except preparation:
  • Ratio of f/cc to mppcf: 3
  • 95% confidence interval: ~2.0 to ~3.5
Concurrent Data (1968 to 1971):
• Textile industry - preparation only operation:
  • Ratio of f/cc to mppcf: 8
  • 95% confidence interval: ~5 to ~9

These data were based on fibers >5 µm in length.

Dement et al.: 39

Dement et al. reported on ratios of f/cc to mppcf. Based on their citations, it is apparent that they drew on U.S. Public Health Service data from 1965 (120 sets of paired) and from 1968-1971 (986 sets of concurrent) as the basis for their further analysis and comments on the ratio factors considered in this paper. The authors concluded:
Paired Data (1965):
• Textile industry - all operations except preparation:
  • Ratio of f/cc to mppcf: 2.9 for fibers >5 µm
  • For α = 0.05, no statistical differences were found by plant operation or increasing impinger concentration

Concurrent Data (1968-1971):
• Textile industry - all operations except preparation:
  • Ratio of f/cc to mppcf: 2.5 for fibers >5 µm
  • For α = 0.05, no statistical differences were found by calendar time or plant operation, except preparation
• Textile industry - preparation only operation:
  • Ratio of f/cc to mppcf: 7.8 for fibers >5 µm

The authors noted that for conversion work in their paper, they used ratios of 3 f/cc to mppcf (all operations except preparation) and 8 f/cc to mppcf (preparation), and that these factors would be conservative as they were on the upper end of the 95% confidence intervals. 39,46

McDonald et al., 1983: 47

McDonald et al., like many studies completed in this era, focused on the epidemiology of workers exposed to asbestos in South Carolina textile plants making asbestos-containing products. As part of their paper, they needed a ratio between f/cc and mppcf, and they relied on and interpreted much of the same data relied on by Dement and others.
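The "conservative" choice described here, taking the conversion factor from the upper end of the 95% confidence interval for the mean ratio, can be illustrated schematically as follows. The per-pair ratios in this sketch are hypothetical, not the Public Health Service data, and the calculation is a standard t-based interval rather than the authors' exact procedure.

```python
# Schematic of choosing a "conservative" conversion factor as the upper bound
# of the 95% confidence interval for the mean ratio. Ratios are hypothetical.
import math
import statistics
from scipy import stats

pair_ratios = [2.4, 3.1, 2.8, 2.2, 3.6, 2.9, 3.3, 2.5]  # hypothetical f/cc per mppcf

n = len(pair_ratios)
mean = statistics.mean(pair_ratios)
sem = statistics.stdev(pair_ratios) / math.sqrt(n)   # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)                # two-sided 95% critical value

lower_95 = mean - t_crit * sem
upper_95 = mean + t_crit * sem
print(f"mean ratio: {mean:.2f}")
print(f"95% CI: {lower_95:.2f} to {upper_95:.2f}")
print(f"conservative conversion factor (upper bound): {upper_95:.2f}")
```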
Further references: Many of the standard references and more modern references tend to select an overall ratio of 6, which was likely derived from the work of Ayer et al. 42 A summary of the authors' findings discussed above, regarding the ratio (i.e., f/cc to mppcf), follows:

• Mining and milling operations:
  • The ratio factors varied greatly.
• For equivalence of fibers >5 µm in length and lung cancer mortality, a ratio of f/cc to mppcf of 3.64 appeared to them to be appropriate.
• Textile Mill operations (fibers >5 µm in length): • Dement et al. 39 used a Ratio of 3 for all operations except Preparation, where a ratio of 8 was used.
• Ayer et al. 42 used a ratio of 6 for all operations.
• McDonald et al. 47 used an overall ratio of 6 (range from 1.3 to 10.0) for all operations.
The authors noted that the overall average ratio (f/cc to mppcf) between mining/milling and textile operations was a factor of ~2 and viewed this as "relatively minor." Nevertheless, the consensus best method for determining a conversion factor (f/cc to mppcf) comes from sources using paired experimental data (i.e., where both f/cc and mppcf values were measured at the same time).
The paired experimental data, for fibers >5 µm or a fiber L/W ratio >3, were summarized in a table of source, scenario, and conversion factor (f/cc to mppcf); entries include Ayer et al. 42 (textiles, 6) and Lynch and Ayer 45 (nine textile plants).

Based on this analysis, the mean overall average ratio (f/cc to mppcf) of the paired sampling results summarized above is 4.55, with a range of 1.22 to 10.9. Note that these values cover textile, mining, mining processing, insulation, friction, and piping areas but do not cover every industry, occupation, or task.
Thus, if one converts 5 mppcf to f/cc, the range of values is 6.1 to 54.5 f/cc, with an average value of 22.75 f/cc. Moreover, the literature implies that at these dust levels either no visible dust is present or it is visible only under special lighting conditions.
In sum, the presence of total dust at 5 mppcf is likely not to be visible except in cases of special lighting. The presence of asbestos dust at 5 mppcf implies a fiber count of at least 6.1 to 54.5 f/cc with an average value of 22.75 f/cc.
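The arithmetic behind these figures is simply the 5 mppcf level multiplied by the minimum, mean, and maximum paired-sample conversion factors summarized above; a short illustrative sketch follows.

```python
# Converting the historical 5 mppcf level to f/cc using the paired-sample
# conversion factors summarized above (mean 4.55, range 1.22 to 10.9).

DUST_LEVEL_MPPCF = 5.0
FACTORS = {"low": 1.22, "mean": 4.55, "high": 10.9}  # f/cc per mppcf

for label, factor in FACTORS.items():
    fcc = DUST_LEVEL_MPPCF * factor
    print(f"{DUST_LEVEL_MPPCF} mppcf x {factor} = {fcc:.2f} f/cc ({label})")
# -> 6.10 f/cc (low), 22.75 f/cc (mean), 54.50 f/cc (high)
```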
Q3: Does the presence of visible asbestos dust in the air pose a likely hazardous situation?
As noted in the previous section, dust present at 5 mppcf is not reported to be visible and at this level correlates to levels of between ~6.1 and ~54.5 f/cc (median of ~22.5 f/cc). Thus, actual f/cc counts when visible asbestos-containing dust was present would be even higher.
While it is now generally agreed that any asbestos exposure poses a health hazard, this question primarily concerns retrospective dust exposure analyses, where a question of hazard is determined by exposures in excess of the recommended standards for allowable levels of asbestos in the air. In the United States, these standards are summarized by source and limit (Table 4-2). 53,54 The author of this paper reviewed the references and found that the paper by Davis et al. provided specific data and conditions. 55

Each owner or operator of any source ... shall comply with the following provisions: Discharge no visible emissions to the outside air during the collection, processing (including incineration), packaging, or transporting of any asbestos-containing waste material generated by the source or use one of the emission control and waste treatment methods specified in paragraphs (a)(1) through (4).

The discharge of visible emissions has resulted in numerous criminal and civil enforcement actions. US EPA routinely charges persons who release visible emissions of asbestos dust into the air both criminally and civilly. 60-62,** Finally, it should be noted that it is standard practice on asbestos clearance inspections to perform a visual inspection first; no clearance sampling is to be completed if dust or residual dust is found in the air or on surfaces in the abated area to be cleared, after blowing off surfaces (typically with a leaf blower). If visible dust is found, recleaning must occur, and the visual inspection must be successfully completed, before any clearance sampling is performed.
The US EPA, under the NESHAPS Standard (40 CFR Part 61, Subpart M) for asbestos emissions noted that the purpose of the regulation was to: "minimize the release of asbestos fibers during activities involving the handling of asbestos." 59 This standard, first promulgated on April 3, 1973 and last updated in 1995, required no visible dust be discharged into outside air for "collection, mixing, wetting and handling operations." Other portions of the Standard require reporting of visible dust emissions. Clearly, as early as 1973, the US EPA wanted to prevent exposures to visible asbestos-containing dusts.
Conclusion
This paper posed the following three questions regarding asbestos exposures: 1. Was the ACGIH 5 mppcf TLV, in effect for many years, a total dust standard? 2. Does the presence of visible dust indicate the presence of more than 5 mppcf of dust in the air? 3. Would the presence of visible asbestos-containing dust demonstrate a potential health hazard?
Analysis of the literature found that i) the 5 mppcf asbestos standard was based on total dust, not just asbestos dust; ii) the presence of visible dust from asbestos-containing material (ACM) operations likely indicates a level greater than 5 mppcf; and iii) the presence of visible asbestos-containing dust likely results in levels above ACGIH and OSHA standards.
These results have implications for individuals performing retrospective asbestos exposure analysis where visible dust was present from operations on, or the disturbance of, asbestos-containing materials (ACM); they provide a method to quantitate such exposures, which often were not monitored and/or measured. 44,63,64

Limitations

This work is limited by the information available to this author regarding paired experimental (measured) data for conversion of mppcf and mg/m3 to f/cc, as well as the literature and experimental work regarding levels of visible asbestos-containing dust in the air.

The author thanks the reviewer who reviewed drafts of this paper and suggested that additional analyses be completed regarding the determination of the conversion factor from mppcf to f/cc (ratio). He also provided names of additional papers to be considered in performing these subsequent analyses. Further thanks to the reviewers for suggestions of additional literature as well as the additional analyses completed.
The author would also like to thank the staff at the EES Group, Inc., for editing this paper and for their help in locating literature cited in this paper. Also, thanks to Dr. Tess Bird for proofing and editing the manuscript. Disclosure: No external funding was used in writing this paper; any resources used in writing this paper were time and materials from the Corporation owned by the author.
Aside from conducting asbestos surveys and sampling for private clients in the Insurance Industry and private sector, Mr. Petty has served as an Expert Witness in cases related to potential asbestos exposures and the resultant litigation.
|
2020-08-13T10:07:22.274Z
|
2020-08-10T00:00:00.000
|
{
"year": 2020,
"sha1": "3bafdb09ac1dee8cbfc91c0707555171e0c0dfde",
"oa_license": "CCBY",
"oa_url": "https://www.jospi.org/article/14496.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e2ac1f69338b06165c1e798fa55e4e9569fa89ae",
"s2fieldsofstudy": [
"Law",
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
154845740
|
pes2o/s2orc
|
v3-fos-license
|
Period Fertility in Russia since 1930: an application of the Coale-Trussell fertility model
In this paper we present a detailed demographic analysis of the change of period fertility that occurred since 1930, based on individual retrospective data, collected in the most recent (five percent) microcensus of the Russian Federation from 1994. We assess the influence of external events on the level and distribution of (period) fertility. For the years prior to 1950 our information on age specific fertility is not complete, but using fertility models acceptable estimates can be constructed. The Coale-Trussell model is particularly suited for producing detailed and robust estimates of interpretable parameters of the fertility distribution. Although none of the observed crises in Russia succeeded in exerting a decisive influence on the course of the fertility transition, political events often had profound short-term effects.
Introduction
The history of Russia and the USSR during the twentieth century was full of social and political upheavals that often had disastrous demographic consequences (Andreev, Darsky and Kharkova 1992, 1998). The main crises occurred in the periods 1915-1922, 1928-1934 and 1941-1947 (Zakharov and Ivanova 1996).

These societal cataclysms led to major population losses attributable not only to excess mortality but also to huge deficits of births. Besides these major social and political events, policy measures contributed to the temporal course of fertility. The 1920's were characterized by a disregard of the role of the family in society. According to Marxist doctrine the family was to 'wither away' once capitalism was replaced by socialism. Freedom of divorce and abortion and rights to unmarried mothers were enacted to break down traditional family practices.

In the 1930's the family policy began to reflect a natalist mood; divorce and abortion became extremely difficult to obtain and the family was to be 'strengthened'. New laws to promote large families were enacted in 1944. Childrearing burdens were alleviated to increase female labor force participation, and women were encouraged to bear more children to ensure that future labor force needs were met.

Abortion was legalized again in 1955 and became the principal means of fertility regulation (Avdeev, Blum and Troitskaja 1994). However, Soviet policy remained pronatalist: in 1981-82 new laws in favor of the family were enacted. Yet the history of a population is not only formed by the immediate reactions to external events. It is also the history of a secular evolution of long term trends. Developments in public health, medical facilities, ongoing urbanization, better education and the chronic housing shortages all contributed to a lowering of fertility levels. All of these often simultaneous developments are perceptible in the fertility figures and make the analysis of period levels very complex (Avdeev and Monnier 1994).

In two previous articles (Scherbov and van Vianen 1999, 2001) we studied long term trends in fertility using a cohort approach. It was shown that Russia followed a unique path in its fertility transition. Although fertility decline started late, all generations born since 1920 had a completed fertility near or below the replacement level when taking into account the infant and child mortality that continued to be very high until the fifties.

Although the cohort description is preferable for understanding the transition process, period data reflect the effects caused by external shocks and the direct impact of population measures. Moreover, they are most relevant when studying the current and future age structure of the population.

Since 1959 detailed age-specific fertility figures from civil registration are available; but for most of the period 1927-1959 reliable data were inaccessible or nonexistent. During the crises, and particularly during the war, the vital registration system collapsed (Andreev, Darsky and Kharkova 1992). Therefore, earlier studies of period fertility (e.g. Coale, Anderson and Härm 1979) were constrained by data that referred to small samples, selected years, parts of Russia or the USSR as a whole, or were only accessible in aggregate (tabular) form. Only recently could part of the demographic history of Russia be reconstructed using archive materials and demographic estimation (Andreev, Darsky and Kharkova 1998).

Information obtained from retrospective surveys can fill in part of this gap, and the last microcensus of 1994 contains data of sufficient detail. It permits the reconstruction of part of the life course of individual men and women. Moreover, the microcensus offers the last possibility to collect life-history data from women who lived through the most turbulent part of Russian history. The abrogation of the general population census of 1999 and the present situation in Russia indicate that it will take a long time before a comparable socio-demographic survey will be repeated.

We start our study in 1930 because from that year on we can estimate fertility with sufficient confidence. Moreover, from that period onward the state tried to intervene most decisively and vehemently in the life of its citizens. Women were confronted with the crises of famine and war and their aftermath.

In the next section we give a short description of the 1994 microcensus of the Russian Federation and the methods used in the analysis of our data. The third section presents the results of this analysis. In a final section some conclusions are drawn.
Data and methods
During the history of demographic data collection in Russia and the USSR, eight censuses and two microcensuses were conducted. Before the 1959 census, the only census conducted and published under nearly normal conditions was that of 1926 (Schwartz 1986, Andreev, Darsky and Kharkova 1992). A particular problem is that Soviet statistics of all kinds have always been published selectively and often in a format that is not entirely straightforward (Clem 1986, 23). It is only recently that more comprehensive statistics permit the detailed study of demographic developments.

Demographic sample surveys that included retrospective questioning on childbearing started in the second half of the twenties, but were discontinued in the thirties. Only since 1960 did the Central Statistical Board of the USSR organize studies, the so-called 'September' surveys, which were conducted at regular intervals and included questions on nuptiality and fertility. The main results of the analyses have been published in papers and monographs, but the data were not published in statistical editions (Volkov 1999 and references therein). The first microcensus, a five-percent socio-demographic survey of the population of the USSR, was performed in 1985 (Volkov 1999, 5-7).

The microcensus of 1994 relates to the status of the population on 14 February 1994 and covered 5 percent of the population on the territory of the Russian Federation. The principal results of the microcensus have been published in aggregate, tabular form in 8 volumes by GOSKOMSTAT (1995). In the microcensus, 7.35 million persons were interviewed, or 4.99 percent of the total population of 147.3 million. Volkov (1999) presents a detailed description of the microcensus, its methodology, organization and program.

The 1994 microcensus included 49 questions on 9 topics (Scherbov and van Vianen 1999). For our analysis we use the extensive information on the fertility of all women 15 years and over: the number of children born alive and, for every child, the date of birth (month and year) and the (eventual) date of death (month and year).

Although the data permit the distinction of various subpopulations, this paper will only study the fertility of the total enumerated female population. It should be realized that the information pertains to the history of women who survived until the date of the census. With the relatively high mortality levels of Russia and the various catastrophic events, a selection bias will be present with regard to the older respondents. In famine and war, pregnant women and women with young children are more vulnerable. Moreover, even in the 1980s appreciable mortality differences persisted, not only between urban and rural areas but also between geographic zones (Shkolnikov and Vassin 1994, 398). We expect that the further we go back in time, the more our fertility figures will be biased downward.

For an assessment of the quality of our data we refer to Volkov (1999) and Scherbov and van Vianen (2001). An analysis of child mortality as reported discloses that for all but the most recent years infant mortality is systematically underreported in the microcensus. Comparison with recent and the few extant older estimates indicates that about half of the infant deaths are not reported (Scherbov and van Vianen 2001). This underreporting, especially of children who died in the neonatal period, is well known in Russian statistics (Ksenofontova 1994). The underreporting introduces an appreciable bias in the estimated fertility before and during the war, when infant mortality was still high in Russia. It amounts to an underestimate of total fertility of about 10 percent in the period prior to 1950, going down to a mere 1.5 percent in the most recent period in our analysis. It does not influence our main conclusions on the trends over time.

The retrospective nature of our inquiry puts a limit on the period that we can study. We have complete information on age-specific fertility only from 1950 onward; for earlier years we miss data on older women, because only women born since 1900 are represented in our analysis. We restrict ourselves to women born since 1900 because the information obtained from the very old may be less reliable and refers to only a few surviving women. From 1900 onwards the smallest sample is the cohort born in 1901, with 1,394 respondents; the largest sample is the cohort born in 1954, with 64,601 females. From 1909 onwards every cohort contains more than 10,000 women (Scherbov and van Vianen 2001). The period 1930-1949 is reconstructed by estimating the missing data using an analytic expression for the fertility schedule. The year 1930 is chosen as the lower bound because the typical early-peak type fertility schedule for Russia peaks before age 30. In order to get a reliable estimate of the missing part of the age-specific fertility distribution, it is crucial that the observed data extend beyond the age at which fertility is maximal.

In an overview of curve fitting techniques, Hoem et al (1981) concluded that the Coale-Trussell (1974) model and the Gamma model were about equal and fitted the data very well. Both models will be applied in our study. The four parameters of the models are estimated using a non-linear least-squares algorithm, minimizing the sums of squared deviations between the observed and the estimated rates by single year of age (Scherbov and Golubkov 1986).
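As an illustration of this kind of estimation, the sketch below fits a four-parameter Gamma fertility schedule (total fertility, age at onset, and the gamma shape and scale) to single-year age-specific rates by non-linear least squares and then uses the fitted curve to fill in the unobserved older ages. The "observed" rates are synthetic stand-ins rather than the microcensus data, and the function and parameter names are ours, not those of the software used in the study.

```python
# Sketch: fitting a four-parameter Gamma fertility schedule to observed
# single-year age-specific rates by non-linear least squares, then using the
# fitted curve to fill in ages that were not observed. The "observed" rates
# below are synthetic stand-ins, not the microcensus data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gamma


def gamma_schedule(age, tfr, a0, shape, scale):
    """Age-specific fertility rate under the Gamma model: TFR times a
    gamma density shifted to start at age a0."""
    return tfr * gamma.pdf(age - a0, a=shape, scale=scale)


ages_obs = np.arange(15, 35)                        # observed ages only (e.g., 15-34)
true_rates = gamma_schedule(ages_obs, 2.6, 14.0, 3.0, 3.5)
rates_obs = true_rates + np.random.default_rng(0).normal(0, 0.003, ages_obs.size)

params, _ = curve_fit(
    gamma_schedule, ages_obs, rates_obs,
    p0=[2.5, 14.0, 3.0, 3.0],                       # starting values
    bounds=([0.5, 10.0, 0.5, 0.5], [8.0, 20.0, 20.0, 20.0]),
)

ages_full = np.arange(15, 50)                       # extrapolate to ages 35-49
fitted = gamma_schedule(ages_full, *params)
print("fitted parameters (TFR, a0, shape, scale):", np.round(params, 2))
print("estimated TFR from full fitted schedule:", round(fitted.sum(), 2))
```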
Results
In order to simplify our discussion, references to the Coale-Trussell and the Gamma model will often be abbreviated to CT and G respectively. In Figure 1 and Figure 2 we show two typical results of our estimation exercise.

The first figure refers to 1935, when data are incomplete. In the observed data there is some fluctuation in the oldest ages, probably ascribable to age misstatement and age heaping, but the fit of both Coale-Trussell (CT) and Gamma (G) over the observed age range is good. The extrapolation of fertility rates to higher ages of CT looks acceptable, but the fertility values induced by the Gamma (G) distribution are definitely not realistic.
Figure 1: Age-Specific Fertility in Russia in 1935: estimation

Figure 2 refers to 1960, when data are complete. Both CT and G fit the observed data very well up to age 40, but at higher ages the Coale-Trussell model fits the observed data much more closely.

Before going into details on the models we discuss the outcomes of the Coale-Trussell model, using the familiar parameters, in Figure 3 and Figure 4. There is no post-war baby boom and it is 1949 before fertility recovers, probably reflecting the problems of reconstruction, demobilization and food shortages after the war. After 1960 fertility slowly declines until 1981, when a notable increase starts, lasting until 1987 when a rapid decline sets in. The increase in fertility in the years after 1981 may be related to new family policy measures that were enacted in 1981. Until then child allowances were mostly restricted to fourth and higher parity children. The new policies extended grants and allowances to every child (Chesnais 1995, 197). With regard to figures on total fertility reported elsewhere, our estimates after 1960 coincide nearly exactly. Before 1960, the pattern of our estimate is well confirmed but, especially before 1940, the level of our estimates is much lower than the figures reported for instance by Andreev et al (1998), who reconstructed fertility prior to 1959 using demographic models and back projection. The fertility estimates for the years 1940-1945 are new; we could not find other reliable figures. In the data and methods section we noted already that infant deaths were underreported, which in the years before 1950, when infant mortality was very high, accounts for an underestimate of more than 10 percent. Another source of downward bias is that only surviving women are taken into account when estimating fertility in the past. Women with high parities were predominantly rural and in general had lower survival probabilities. Other factors may be related to selective migration from the former Soviet Republics after 1990. Finally, there may be problems with correspondence in calculating or estimating rates: the events in the numerator should occur in the population exposed-to-risk in the denominator. This is warranted in our data, but given the history of census taking in the former Soviet Union this source of bias cannot be excluded in other estimates. However, the low estimates before 1940 certainly deserve further study.
The parameter m, which measures the deviation from the pattern of 'natural' fertility and is sometimes naively interpreted as an index of family limitation, is low in the first decades. In the war years it even approaches zero, indicating a fertility that, at a very low level, is spread over the whole age range. After 1945 m increases monotonically until 1980, when it drops a little, reflecting changes in the fertility pattern after the introduction of the new policy measures. After 1985 m increases more rapidly.
Figure 4: Fertility change in Russia since 1930: results of estimation using the Coale-Trussell model for period data

The age at onset of marriage a0 is discussed together with the parameter 1/k, which gives the speed at which women enter marriage compared to the standard schedule from 19th century Sweden. Both parameters change appreciably over the period considered here.

The shock of the War on the generations entering the reproductive period, as reflected in both 1/k and a0, is displayed clearly. The very low values of a0 around 1960 may be related to squeezes on the marriage market due to the small cohorts born during the war and the difference in age at marriage between bridegroom and bride, which, in Russia, is typically around three years. In the youngest generations there is a secular increase in the speed of marriage. The phenomenon of a high concentration of births at very young ages, resulting from early nuptiality, a short first birth interval and lowering total fertility, is a peculiar feature of the most recent Russian fertility pattern (Zakharov and Ivanova 1996, 57).
After discussing the results of our estimation we return to a comparison of the Coale-Trussell (CT) and Gamma (G) models. In order to make the comparison possible, the parameters of the Coale-Trussell model were redefined. Besides Total Fertility (TFR) and the age at onset (a0), we calculated the mean age at childbearing (mean) and the variance of the age at childbearing (var) in the estimated schedules. In the Gamma model, the same parameters can be defined (Hoem et al 1981).
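These redefined measures are simple moments of the age-specific schedule: total fertility is the sum of the single-year rates, and the mean and variance of the age at childbearing are the rate-weighted mean and variance of age. A small sketch with a made-up schedule follows.

```python
# Computing TFR, mean age at childbearing, and its variance from an
# age-specific fertility schedule. The schedule below is made up.
import numpy as np

ages = np.arange(15, 50)
rates = np.exp(-0.5 * ((ages - 24.0) / 5.0) ** 2) * 0.11   # toy single-year rates

tfr = rates.sum()                                   # total fertility
weights = rates / tfr                               # schedule as a distribution over age
mean_age = np.sum(weights * ages)                   # mean age at childbearing
var_age = np.sum(weights * (ages - mean_age) ** 2)  # variance of age at childbearing

print(f"TFR: {tfr:.2f}")
print(f"mean age at childbearing: {mean_age:.1f}")
print(f"variance of age at childbearing: {var_age:.1f}")
```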
Figure 5: Fertility change in Russia since 1930: estimates of total fertility and age at onset of childbearing

In Figure 5 we see that the estimate of Total Fertility (TFR) in the Gamma model is always higher than in the Coale-Trussell model. The difference after 1950, when we have complete data, is small, but for earlier years it is quite large. From Figure 1 we infer that these higher fertility estimates are artifacts of the procedure and a consequence of the unrealistic values of the (extrapolated) fertility rates at higher ages. The (small) difference in the age of onset of fertility (a0) is due to differences in the functional specification of the Coale-Trussell and the Gamma schedules; the trends in both parameters coincide nicely. Finally, in Figure 6 we see that the mean age at childbearing (mean) is nearly the same for both techniques when data are complete. However, with incomplete data the Gamma specification estimates the mean age at childbearing appreciably higher. The variance of the age at childbearing (var) slowly decreases in the Coale-Trussell model due to the concentration of fertility around the mean age. In the Gamma model the variance is always higher and becomes very large when data are incomplete, reflecting the overestimation of fertility at higher ages (compare Figure 1 and Figure 2).
Conclusion
Our conclusions are based on the fertility histories as reported by those women who were alive at the date of the microcensus. Especially for the earliest years, this restriction means that the quantitative picture is not completely representative of the period experience. Nevertheless, the pattern that emerges from our analysis gives a fair picture of the fertility history of Russia since 1930. The final result of our estimation, a Lexis plot of fertility rates by age from 1930 to 1993, is presented in Figure 7. The recent history of Russia is characterized by a succession of, often violent, societal upheavals and a brutal effort to change the human condition. A foregoing analysis of cohort fertility indicated that the long-term trends in demographic behavior developed in almost complete independence from these events (Scherbov and van Vianen 1999, 2001). 'None of the observed crises in Russia has succeeded in exerting a decisive influence on the course of the demographic transition' (Zakharov and Ivanova 1996, 38-39). The analysis presented above shows that political events often had profound short-term effects. The Famine of 1933, the catastrophic events of the Second World War, the problems of Reconstruction and the policy measures around 1981 were distinguishable in the period figures. However, a comparison with the cohort experience shows that these effects are for the most part ascribable to changes in the timing of childbearing. Whether the current crisis will have lasting effects upon fertility therefore remains to be seen. Our analysis illustrates that fitting observed fertility data with model schedules provides an accurate, smooth and parsimonious representation. Moreover, the modeling proves to be most useful in the analysis of incomplete data sets. Fitting the Coale-Trussell model yields very plausible results when a large part of the fertility curve is missing. There is no indication that the assumption that the model schedule gives a fair description of the actual fertility schedule is violated. However, the same exercise with the Gamma model was much less rewarding; especially in the case of incomplete data the Gamma estimates are unrealistic. The Coale-Trussell model is essentially based on a cohort description of marriage and childbearing. However, in the Russian case the societal disturbances of famine and war were so severe that the cohort fertility pattern became bimodal and impossible to fit (Scherbov and van Vianen 1999, 2001). An earlier attempt to reconstruct Ukrainian fertility from limited data (Lutz et al 1992) was essentially based on the assumption that cohort fertility followed a Coale-Trussell pattern. According to our more complete data this assumption is incorrect, and consequently the inferred fertility distributions might be biased.
Figure 3: Fertility change in Russia since 1930: results of estimation using the Coale-Trussell model for period data
Figure 6: Fertility change in Russia since 1930: estimates of mean and variance of age at childbearing

Figure 7: Coale-Trussell model: fertility rates by year and age
|
2019-05-16T13:03:54.814Z
|
2002-06-07T00:00:00.000
|
{
"year": 2002,
"sha1": "7d2ab7e661378dcec19f677e7da3f9d57aed6d8b",
"oa_license": "CCBYNC",
"oa_url": "https://www.demographic-research.org/volumes/vol6/16/6-16.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "2b4f55b22799e07b24e0d2beb3f032531ecefba8",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Economics",
"History"
]
}
|
1100588
|
pes2o/s2orc
|
v3-fos-license
|
For catch bonds, it all hinges on the interdomain region
Tensile mechanical force was long assumed to increase the detachment rates of biological adhesive bonds (Bell, 1978). However, in the last few years, several receptor–ligand pairs were shown to form “catch bonds,” whose lifetimes are enhanced by moderate amounts of force. These include the bacterial adhesive protein FimH binding to its ligand mannose (Thomas et al., 2002; Thomas et al., 2006), blood cell adhesion proteins P- and L-selectin binding to sialyl Lewis X (sLeX)–containing ligands (Marshall et al., 2003; Evans et al., 2004; Sarangapani et al., 2004), and the myosin–actin motor protein interaction (Guo and Guilford, 2006). The structural mechanism behind this counterintuitive force–enhanced catch bond behavior is of great interest.
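One common way to summarize such force-dependent lifetimes, sketched below purely for illustration, is a two-pathway extension of the Bell model in which the off-rate is the sum of a catch term suppressed by force and a slip term accelerated by force, so that bond lifetime first rises and then falls with increasing force. The parameter values in the sketch are arbitrary; this is a generic textbook-style model, not the model or the data of the papers discussed here.

```python
# Generic two-pathway ("catch-slip") bond model: the off-rate is the sum of a
# catch pathway suppressed by force and a slip pathway accelerated by force
# (Bell-type exponentials). All parameter values are arbitrary illustrations.
import numpy as np

KT = 4.1  # thermal energy at room temperature, pN*nm

def off_rate(force_pn, k_catch=20.0, x_catch=0.8, k_slip=0.3, x_slip=0.4):
    """Total detachment rate (1/s) as a function of tensile force (pN)."""
    catch = k_catch * np.exp(-force_pn * x_catch / KT)   # weakens with force
    slip = k_slip * np.exp(force_pn * x_slip / KT)       # strengthens with force
    return catch + slip

forces = np.linspace(0, 60, 7)
lifetimes = 1.0 / off_rate(forces)
for f, t in zip(forces, lifetimes):
    print(f"force {f:5.1f} pN -> mean bond lifetime {t:6.3f} s")
# Lifetime first rises with force (catch regime) and then falls (slip regime).
```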
The first of the two recent papers (Phan et al., 2006) showed that the extended state of P-selectin does have a higher affinity for ligand, as previously hypothesized. The authors added a glycosylation site in the hinge region that was predicted to wedge the interdomain hinge open to stabilize the extended conformation. This wedge mutant caused a fivefold increase in the affinity of soluble P-selectin for immobilized P-selectin glycoprotein ligand (PSGL)-1 in surface plasmon resonance (SPR) experiments. It also enhanced adhesion between P-selectin-expressing cells and cells expressing the sLeX-containing selectin ligand PSGL-1 in both static and flow assays. Interestingly, in both SPR and flow experiments, the mutation decreased the rate of bond formation, but decreased bond detachment rates even more. The authors also predicted that adhesion via L-selectin would be enhanced by a mutation (N138G) in the interdomain region that eliminates a hydrogen bond that favors the bent (putative low-affinity) conformation. This hypothesis was validated with experiments that showed a reduced velocity of L-selectin N138G-expressing cells rolling over PSGL-1-coated surfaces, but measurements of bond on- or off-rates were not reported. Although there is no direct evidence that either mutation does, indeed, favor the extended conformation, the two mutations collectively offer convincing evidence that the extended state has higher ligand affinity. The authors hypothesized that this provides a mechanism to explain catch bonds because force would favor the extended conformation of selectin. However, the effect of the mutations on catch bond behavior was not directly tested.
A paper by Lou et al. (2006) determined how the catch bond behavior of L-selectin was affected by the same N138G mutation. When microspheres coated with either the mutant or native L-selectin were washed over ligand-coated surfaces, tether rates increased approximately twofold for the mutant. In addition, this study used single-molecule experiments using a biomembrane force probe to demonstrate that the N138G mutation, indeed, changed the effect of force on bond lifetimes. Although the mutant still formed catch bonds with longer lifetimes at higher force, it did not require as much force to be fully activated. That is, the bonds formed by the N138G mutant had longer lifetimes than the native structures in the low-force "catch" regime, where increased force stabilizes the bond. However, application of sufficient force can weaken even catch bonds, a process called "slipping." Mutant and native structures showed essentially the same behavior in the higher force "slip" regime, where increased force weakens the bonds. This indicates that the mutation affects the process by which force activates L-selectin, demonstrating that the interdomain hinge region is involved in the catch bond mechanism.
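As a purely illustrative complement to the catch and slip regimes described above, the sketch below evaluates a generic two-pathway model of bond lifetime versus force; it is not the sliding-rebinding or allosteric model discussed in this commentary, and the rate constants and distances are invented for demonstration only.

```python
# Sketch of a phenomenological two-pathway ("catch-slip") bond model:
# unbinding can occur through a force-inhibited catch pathway or a
# force-accelerated slip pathway. All parameter values are hypothetical.
import numpy as np

KT = 4.1  # thermal energy at room temperature, in pN*nm

def lifetime(force_pN, k_c=20.0, x_c=1.5, k_s=0.05, x_s=0.4):
    """Mean bond lifetime (s) as a function of tensile force (pN)."""
    off_catch = k_c * np.exp(-force_pN * x_c / KT)   # suppressed by force
    off_slip = k_s * np.exp(force_pN * x_s / KT)     # enhanced by force
    return 1.0 / (off_catch + off_slip)

for f in np.linspace(0, 60, 13):
    print(f"{f:5.1f} pN  ->  lifetime {lifetime(f):6.2f} s")
# Lifetimes first rise with force (catch regime), peak, then fall (slip regime).
```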
The hypothesis, proposed by both studies, that catch bonds arise as a result of extension of the interdomain region, is in agreement with a previously advanced hypothesis for the mechanism of FimH force-enhanced binding to mannose for bacterial adhesion (Thomas et al., 2002). That work used steered molecular dynamics simulations and site-directed mutagenesis to predict that mechanical force would enhance FimH binding to a mannosylated surface by extending a linker chain connecting the terminal lectin and anchoring pilin domains of the adhesin. This would result in an extended interdomain configuration.
Although it is now evident that interdomain regulation is important for catch bonds, it remains unknown how this region, which is located far from the binding pocket in both selectins and FimH, regulates ligand binding. Beyond providing new experimental insights, Lou et al. (2006) propose a novel, quantitative "sliding-rebinding" mechanism for how interdomain opening increases ligand binding under force. In their model, the ligand is forced to slide past an alternate binding site on the reoriented selectin face when selectin adopts the open angle conformation. The transient binding to this site allows enough time for the ligand to rebind in its original site before it diffuses away. Without the force-induced reorientation, the ligand escapes the pocket without pausing at this intermediate binding site, so it has a lower chance of rebinding. The authors hypothesize that this is why force and the N138G mutation both increase the effective ligand binding lifetime in the catch regime.
Can this theory also explain how the wedge mutation in the hinge region would cause a decrease in the detachment rate of soluble selectin in the SPR experiments reported by Phan et al. (2006)? The drag force acting on an isolated nanoscale ligand-receptor complex in SPR experiments is considered nonexistent. In this case, Brownian motion of the ligand and receptor in the local energy landscape will determine the probability that the ligand interacts with the second binding site after detaching from the first. Without external force to bias the direction of Brownian motion, the orientation of the binding interface will not matter. Thus, the sliding-rebinding model does not explain why mutations regulating the hinge region affect the bond off-rates in the absence of force. Unfortunately, the static kinetic assays of Phan et al. (2006) were performed for a different mutation in a different selectin than the catch bond-demonstrating assays of Lou et al. (2006). Further experiments that connect measurements of lifetimes with and without force, as well as further development of the novel sliding-rebinding model to address very low-force regimes, will be critical to understanding whether catch bonds involve affinity regulation.
An alternative allosteric mechanism was developed to explain FimH catch bonds (Thomas et al., 2006). Whereas the sliding-rebinding model assumes that the orientation, but not the structure, of the binding pocket is changed by force, the allosteric model assumes that extension of the interdomain region causes a conformational change in the binding pocket that affects unbinding rates. It is unknown whether this might also be able to explain the selectin data.
In spite of the unanswered questions, these two papers have established that the hinge region regulates the affinity of P-selectin for its ligand (Phan et al., 2006), as well as the catch bond behavior of L-selectin (Lou et al., 2006). These are important steps toward deciphering the structural origin of the counterintuitive phenomenon called catch bonds.

Figure: Conformational changes in P-selectin. P-selectin was cocrystallized with (purple) and without (gold) the PSGL-1 ligand (peptide, blue; sLeX, cyan). The two structures differ in a series of changes that span from the binding site to the hinge region, indicated by the arrow. Residue 30 is shown in green, with the position of the wedge mutant glycosylation shown as a green circle. Shown in red is residue N138 forming a hydrogen bond with Y37 as they appear in the unliganded L-selectin structure, which is nearly identical in conformation to unliganded P-selectin.
|
2014-10-01T00:00:00.000Z
|
2006-09-25T00:00:00.000
|
{
"year": 2006,
"sha1": "8443fb2a1caefa319ea0aa7c185fd58c9e936c66",
"oa_license": "CCBYNCSA",
"oa_url": "https://rupress.org/jcb/article-pdf/174/7/911/1325851/911.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "8443fb2a1caefa319ea0aa7c185fd58c9e936c66",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
219009763
|
pes2o/s2orc
|
v3-fos-license
|
Effect of storage and stress conditions on the counts of Bifidobacterium animalis microencapsulated and incorporated in plantain flour
https://doi.org/10.1590/1981-6723.25219 Abstract The probiotic activity in the intestinal microbiota depends on its survival during food storage and its passage through the gastrointestinal tract. This study aimed to evaluate the effect of storage and stress conditions such as temperature, pH and bile salts on the viability of Bifidobacterium animalis microencapsulated and incorporated in plantain flour. Between days 21 and 28, the reliability percentage decreased from 93% to 27%. The mean counts of B. animalis differed statistically with changes in temperature, pH and bile salt concentration. For temperature, the counts obtained at 50 °C and 80 °C decreased by 60.1% and 90.2%, respectively. Likewise, pH 2.5 showed a survival reduction of over 90% within 60 min, whilst at pH 3.5 survivals over 60 min were less than 50%. Finally, the counts achieved using 1 g/L of bile salts were higher than those obtained at 3 and 5 g/L. The results indicate the need to evaluate other capsular components to improve the survival of B. animalis microencapsulated and incorporated in plantain flour.
Introduction
Plantain (Musa paradisiaca) is grown in Colombia over a cultivated area of approximately 590,000 ha (Martínez et al., 2015). The fruit represents 12% of the total weight of the plant (Gañán et al., 2008). Bearing in mind that plantain is an essential daily diet product and occupies a relevant place in the national economy, this study aimed at the production of plantain flour incorporating a microencapsulated probiotic, as an alternative to contribute to improving productive efficiency in this value chain.
Probiotics are relevant in the functional food industry and have the capacity to incorporate themselves into the intestinal microbiota, benefiting human health (Champagne et al., 2018; Ranadheera et al., 2018). These lactic acid-producing microorganisms are mainly from the Lactobacillus and Bifidobacterium genera (Du Toit et al., 2013; Casarotti & Penna, 2015; Coghetto et al., 2016). Probiotic survival can be enhanced by microencapsulation, based on atomization of the microorganism and its subsequent coating with a protective material, performed in a drying chamber with hot air to remove water in a short time (Fritzen-Freire et al., 2013).
The microencapsulation of B. animalis has been studied in dairy products, including goat milk (Ranadheera et al., 2015), ice cream (Silva et al., 2015), fermented milk (Bogsan et al., 2014; Rodrigues et al., 2014), whey (Casarotti & Penna, 2015; Rodrigues et al., 2011), and yogurt (Akalin et al., 2007). No studies on plantain flour are known, which highlights a novel aspect of this research. It is noteworthy that a probiotic food must have a count equal to or higher than 10⁶ CFU/g (Ranadheera et al., 2015). Furthermore, there are previous evaluations of the survival of B. animalis during storage (Rodrigues et al., 2011), heat stress (Du Toit et al., 2013; Fritzen-Freire et al., 2013), and conditions similar to those that the microorganism faces when passing through the gastrointestinal tract (Silva et al., 2015), which are essential to its functionality after preparation and consumption of the food.
We evaluated the effect of storage and stress conditions such as temperature, bile salts, and pH on the viability of B. animalis microencapsulated and incorporated in fortified instant plantain flour. Additionally, the diameter of the microcapsules was analyzed by scanning electron microscopy (SEM).
Drying, extrusion and milling procedure for preparation of plantain flour
Initially, we selected plantain of the Dominico-Hartón (Musa AAB Simmonds) variety, in stage one of ripening. The plantains were washed with potable water and disinfected with a commercial agent (Dioxy San at 0.0025%, prepared by mixing 2.5 mL of the disinfectant with 1000 mL of potable water). These units were then subjected to blanching, with an internal temperature of 72 °C for 30 min, and the husk adhered to the pulp was manually separated using a knife. Subsequently, the pulp was chopped into slices of approximately 4 mm, which were later cut into 4 equal parts. The resulting pieces were dehydrated using hot-air drying under forced convection at 80 °C for 5 hours, bringing the product to a moisture content between 15 and 25%. Later, the dried plantain was milled to a particle diameter between meshes number 2 and 4 (U.S. STD. Sieve, of 2 and 4.76 mm, respectively). An EX0113 extruder was used for the extrusion process, at 40 °C and a screw speed of 800×g, bringing the product to a moisture of 15-25% on a dry-weight basis. Finally, the material was milled in a domestic mill and the flour was stored under vacuum at room temperature until use (Montoya et al., 2016).
Preparation of the biomass of B. animalis and the encapsulating solution
The strain B. animalis ATCC 2557 was isolated on Man, Rogosa and Sharpe (MRS) agar supplemented with 0.05% L-cysteine and incubated at 35 ± 2 °C under anaerobic conditions. An isolated colony of B. animalis was inoculated into a flask containing 50 mL of MRS broth supplemented with 0.05% L-cysteine, which was kept under constant stirring at 200×g for 48 hours at 35 ± 2 °C. Subsequently, the total volume of this preculture was added to a flask containing 500 mL of MRS broth supplemented with 0.05% L-cysteine, which was incubated for 48 hours at 35 ± 2 °C under constant stirring at 200×g. The biomass was obtained by centrifugation at 5500×g for 15 min, followed by discarding the supernatant and collecting the pellet in sterile 50 mL Falcon tubes. The obtained biomass was washed three times by suspending the microorganism in inulin solution and centrifuging for 10 min at 5500×g (Rodríguez-Barona et al., 2012).
For the encapsulating solution, 250 g of maltodextrin were added to a bottle containing 1000 mL of distilled water, which was immediately placed on a magnetic stirrer for one hour. The bottle with the solution was then sterilized at 121 °C in an autoclave for 15 min. Finally, when the maltodextrin solution was at room temperature, the previously recovered biomass was added at a proportion of 5% biomass (wet weight) relative to the weight of the solution containing the encapsulant compound (Rodríguez-Barona et al., 2012).
Microencapsulation of the microbial suspension and incorporation in plantain flour
The suspension of maltodextrin and B. animalis biomass was spray dried in Buchi-290 equipment (Flawil, Switzerland). Using a vacuum pump, this suspension was transported through a sterile hose to the drying chamber of the equipment. An aspiration rate of 75%, compressed air at 0.05 MPa, and temperatures of 80 °C at the inlet and 40 °C to 43 °C at the outlet were supplied. Before entering the drying equipment, the suspension was constantly stirred at 50×g using a magnetic stirrer. The material obtained (18-20 g) was collected in sterile aluminum packaging, which was subsequently subjected to vacuum (Rodríguez-Barona et al., 2012).
The encapsulation efficiency (EE) was calculated according to EE% = (N/N₀) × 100, where N is the number of viable cells (log CFU g⁻¹) released from the microparticles and N₀ is the number of free viable cells (log CFU g⁻¹) in the feed solution before the encapsulation process. EE corresponded to 82%.
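A minimal sketch of the EE% calculation is shown below; the counts are hypothetical, chosen only so that the result lands close to the 82% reported, and the function name is illustrative.

```python
# Sketch of the EE% = (N/N0) x 100 calculation, with N and N0 taken as
# log10 counts, as defined in the text. Input counts are invented.
import math

def encapsulation_efficiency(cfu_after_per_g, cfu_before_per_g):
    """EE% computed from plate counts expressed as CFU/g."""
    n = math.log10(cfu_after_per_g)    # viable cells released from microparticles
    n0 = math.log10(cfu_before_per_g)  # free viable cells in the feed solution
    return 100.0 * n / n0

# Hypothetical counts giving an EE close to the 82% reported above.
print(f"EE = {encapsulation_efficiency(2.0e8, 2.05e10):.0f}%")
```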
For the incorporation into the food matrix, 1 g of the maltodextrin capsules containing B. animalis and 10 g of the plantain flour were weighed in a 50 mL Falcon tube, which was subsequently homogenized using a Vortex shaker for 5 min at 200×g. This process was performed successively for incorporation of the maltodextrin capsules containing B. animalis into the plantain flour product. The homogeneity of the maltodextrin capsules containing B. animalis was verified through a Relative Standard Deviation (RSD) of 11%, regarding the mean count of 20.5E9 CFU/g, when analyzing 15 repetitions. Finally, the different treatments of the proposed experiments were arranged.
Analysis of B. animalis by counting on plate
For the analysis of the microorganism in the biomass and maltodextrin suspension, 1 mL of sample and 9 mL of buffered peptonated water were mixed, producing the first dilution, from which the serial dilutions necessary to facilitate the counting of this probiotic were made. Similarly, for the estimation in the product, 0.1 g of the sample (microencapsulated B. animalis or plantain flour with this microorganism incorporated) was mixed with 1 mL of peptonated water, generating the suspension from which serial dilutions were subsequently performed. Then, from the dilutions considered, aliquots of 0.1 mL were inoculated in duplicate on the surface of MRS agar supplemented with 0.05% L-cysteine. Finally, incubation was performed at 37 °C for 72 hours inside anaerobic polycarbonate jars with anaerobic indicator sachets (Fritzen-Freire et al., 2012).
Viability of B. animalis during the storage
From a production batch, we performed weekly B. animalis analyses from day 1 to day 84, with 15 repetitions for each sample unit stored at 22 °C and 80% RH in Falcon tubes with a capacity of 20 mL. From these counts, the percentage of sample units with counts equal to or higher than 1.0E6 CFU/g, as recommended in resolution 288 (Colombia, 2008), was obtained. This reliability percentage has been used in foods to set the time limit up to which a particular attribute is fulfilled (Corpas & Tapasco, 2013).
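The reliability percentage itself is a simple proportion; the sketch below reproduces the idea with invented counts for the 15 weekly repetitions, chosen so that the day-21 and day-28 values match the 93% and 27% reported in the Results, not the study's raw data.

```python
# Sketch of the reliability (storage viability) percentage: the share of the
# weekly sample units whose count meets the 1.0E6 CFU/g threshold.
import numpy as np

THRESHOLD = 1.0e6  # CFU/g required for a probiotic food

def reliability_percentage(counts_cfu_per_g):
    counts = np.asarray(counts_cfu_per_g, dtype=float)
    return 100.0 * np.mean(counts >= THRESHOLD)

# Hypothetical counts for the 15 repetitions on day 21 and day 28.
day21 = [2.1e6] * 14 + [8.0e5]        # 14 of 15 above threshold -> 93.3%
day28 = [1.5e6] * 4 + [7.0e4] * 11    # 4 of 15 above threshold  -> 26.7%
print(reliability_percentage(day21))
print(reliability_percentage(day28))
```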
Viability of B. animalis using different temperatures
Following Fritzen-Freire et al. (2013), the suspension of the plantain flour with the incorporated microorganism in buffered peptonated water was kept for 10 min under stirring at 50×g at different temperatures (22 °C, 50 °C, and 80 °C). Then, we performed the analysis to obtain the count of B. animalis. These analyses were performed in sextuplicate and a one-factor experimental design was used. In addition, a control analysis of the B. animalis count in the product at room temperature was performed, without the stirring applied in the aforementioned treatments.
Viability of B. animalis at different pH
Ten grams of the product with the incorporated microorganism were diluted in 90 mL of buffered peptonated water adjusted to three pH values (2.5, 3.5, and 6.2), and analyses were performed in sextuplicate for three contact times (60, 120, and 180 min). The pH was adjusted with 37% HCl and 0.1 N NaOH (Rather et al., 2017). A two-factor design, pH and contact time, was applied to establish differences in the count of B. animalis among the treatments. Furthermore, we performed a control count of B. animalis in the product at pH 6.2 with immediate analysis, that is, without contact time.
Viability of B. animalis in the presence of bile salts
According to Rather et al. (2017), the sample units of plantain flour with the incorporated probiotic were analyzed in sextuplicate after reconstitution in buffered peptonated water adjusted to three concentrations of bile salts (1, 3, and 5 g/L) and three contact times (60, 120, and 180 min). A two-factor design, bile salt concentration and contact time, was applied to establish their influence on the B. animalis count. Likewise, the percentage of decrease of B. animalis in the treatments was determined from a control count without contact with bile salts. Viability analyses at different temperatures, pH values, and bile salt concentrations were made after a product storage time of four to five weeks at 22 °C and 80% RH.
Microscopic characteristics of the maltodextrin capsules
The microscopic characteristics of both the maltodextrin capsules containing B. animalis and the plantain flour were observed by SEM. Figure 1 (left) shows a cluster of microcapsules with similar diameter and external structure. Furthermore, the plantain flour particles were larger than the microcapsules and had an irregular structure that nevertheless showed homogeneous general dimensions (Figure 1, right). Immediately after microencapsulation, areas between 61 and 336 μm² were obtained, while in the final measurement the areas were between 16 and 39 μm². Likewise, after microencapsulation the capsules had a semicircular morphology with slight concavities on the sides, while from the fourth week of storage the capsules had an irregular surface and exposed content. Previous studies show capsules with smaller mean diameters (Feng et al., 2018; Ramos et al., 2018), depending on the encapsulation method used.

Figure 2 shows the behavior of the B. animalis count, transformed to its natural logarithm for clearer visual appreciation. In general, there was a decrease in the count of this microorganism during storage. After microencapsulation, there was a mean concentration of 20.5E9 CFU/g, with a decrease of 1 logarithmic cycle per week, reaching a mean of 7.4E5 CFU/g on day 21, when the content of B. animalis per gram of product in 93% of the sample units analyzed was higher than that required by the respective legislation in Colombia. Contrastingly, on day 28 only 27% of the samples analyzed complied with the aforementioned criterion, with a mean concentration of 8.5E4 CFU/g. From day 35, all the samples had counts lower than 1.0E6 CFU/g. Among the main factors that could have affected the viability of B. animalis are the encapsulation method and the type of encapsulant material used. In the case of L. salivarius microencapsulated by freeze drying with alginate microgels and alginate-gelatin, a decrease in cell viability of 2.4 log and 1.7 log, respectively, was observed after 5 weeks under wet conditions (Yao et al., 2017). Likewise, at 25 °C/60% RH during a storage period of 42 days, the reduction of L. rhamnosus microencapsulated with a material based on alginate, lecithin and starch was between 1.23 and 2.66 log in the different formulations tested (Huq et al., 2017). The microencapsulation by spray drying of L. plantarum and L. casei based on soybeans at 25 °C/60% RH also showed a significant loss of viability after 2 months, which was improved using inulin or oligosaccharides as a wall material (González-Ferrero et al., 2018). These results also show the influence of storage temperature on the reduction of probiotics. Consistent with the above, it has been established that refrigeration can improve the survival of probiotics during storage. The study of the effect of storage temperature on Bifidobacterium Bb12 inside microcapsules based on casein showed a higher rate of inactivation at 25 °C, compared to that obtained at 4 °C, where a decrease of about 1 logarithmic cycle was obtained (Heidebach et al., 2010). Furthermore, the storage of L. plantarum in refrigeration for 21 days, microencapsulated into a sodium alginate matrix, showed survivals close to 9 log (Coghetto et al., 2016). On the other hand, the decrease denoted in this study could be related to cellular detriment by metabolic reactions such as the oxidation of fatty acids, which produce protein denaturation and phospholipid degradation (Huq et al., 2017).
In particular, bacterial storage is related to an increase in the ratio between saturated and unsaturated fatty acids, due to the lipid oxidation, producing free radicals that damage the DNA and cell membranes (Albadran et al., 2015).
Effect of temperature on the count of B. animalis microencapsulated and incorporated in plantain flour
Regarding the mean control count, at 22 °C the mean represented a recovery of 100%. At 50 °C, the mean count represented a reduction of 60.1%. In addition, the mean count of B. animalis at 80 °C indicated a reduction of 90.2% (Figure 3). The analysis of variance performed later indicated statistical differences between the evaluated treatments (p = 0.00). Furthermore, the multiple comparison Tukey's test showed that the mean counts of B. animalis were statistically lower with increasing temperature. Based on the results obtained, the plantain flour with B. animalis incorporated inside microcapsules must be prepared without applying temperatures equal to or higher than 50 °C. Our results corroborate those of another study (Arslan-Tontul & Erbas, 2017). The protection provided by encapsulation against thermal exposure depends on the temperature and time supplied; however, the composition of the capsular structure also constitutes a differential factor for the protection of the probiotic. As a precedent of this, a thermal supply at 63 °C for 15 and 30 min showed significantly lower reductions of Lactobacillus salivarius microencapsulated with alginate-gelatin microgels, compared to this microorganism without capsular protection (Yao et al., 2017). In addition, a study performed with L. acidophilus showed that cells microencapsulated by extrusion inside alginate spheres with a double chitosan coat and subjected to 70 °C for 60 min had a higher survival rate than cells with a simple coat (Jantarathin et al., 2017). Furthermore, L. plantarum and L. casei microencapsulated by freeze drying with an alginate double-layer coat showed decreases close to 1.0 and 3.0 log after exposure to 75 °C for 1 and 10 min, respectively (Rather et al., 2017). These precedents demonstrate the influence of the composition of the capsular structure on the protection of probiotic bacteria.
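For readers who want to reproduce this kind of analysis, a small sketch of a one-factor ANOVA on log-transformed counts followed by Tukey's test is given below. The counts are simulated, and the use of scipy/statsmodels is an assumption; the original analysis may have been run in other software.

```python
# Sketch: one-way ANOVA across the three temperatures on log10 counts,
# followed by Tukey's multiple-comparison test. Data are invented.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
# Six replicates per temperature (log10 CFU/g), hypothetical values only.
log_counts = {
    "22C": rng.normal(7.0, 0.15, 6),
    "50C": rng.normal(6.6, 0.15, 6),   # roughly a 60% reduction on the linear scale
    "80C": rng.normal(6.0, 0.15, 6),   # roughly a 90% reduction on the linear scale
}

f_stat, p_value = stats.f_oneway(*log_counts.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate(list(log_counts.values()))
groups = np.repeat(list(log_counts.keys()), 6)
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```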
Effect of pH on the count of B. animalis microencapsulated and incorporated in plantain flour
Independent of the pH used, the counts invariably decreased with increasing contact time. Considering the mean control count, the B. animalis count was lower at pH 2.5 than at the other pH values used. Furthermore, we highlight a decrease of 82.8% after 60 min of exposure to pH 2.5, compared to a decrease of 21.4% at pH 3.5 for the same contact time. Another noteworthy aspect is that at pH 6.2 there was a slight decrease in the B. animalis count over longer times, reaching 21.9% after 180 min (Figure 4). The analysis of variance determined the existence of an interaction between the evaluated factors pH and contact time (p = 0.00). Finally, the multiple comparison Tukey's test showed that the mean values were statistically different when changing pH (p = 0.00 among all treatments) and contact time (p = 0.00 for 60 min vs 120 min; p = 0.004 for 120 min vs 180 min). These results indicate the limited efficiency of the capsular structure in protecting this microorganism from the diffusion of acidic molecules, which cause physical deterioration and disruption of biological processes in the bacterial cell. Another aspect open to biological interpretation is that at pH 6.2 the count of B. animalis showed a reduction directly proportional to the contact time. This suggests the concomitant existence of a degree of deterioration of B. animalis at the time of the experiment, as well as a loss of integrity of the maltodextrin capsule, factors that facilitated the subsequent damage to the microorganism by osmotic stress over the course of the experiment. The argument of a progressive loss in the integrity of B. animalis is supported by the drastic decline observed in the analysis of viability during the storage of this microorganism. Compared to the results obtained here, other studies show reductions of between 1 and 3 logarithms when microencapsulated strains were exposed to simulated gastric fluids for different exposure times (Coghetto et al., 2016; Yao et al., 2017; Huq et al., 2017). The destruction of probiotic microorganisms in the stomach has been related not only to the level of acidity but also to the destruction of membrane proteins by enzymatic action (Huq et al., 2017), in specific cases by pepsin and pancreatin, which cause lysis of the bacterial wall (Coghetto et al., 2016). Consequently, a further study could include this factor to establish its effect, concomitant with acidic pH, on the viability of B. animalis.
Effect of bile salts on the count of B. animalis microencapsulated and incorporated in plantain flour
Based on the mean control count, a significantly higher count of B. animalis was obtained when using 1 g/L of bile salts, with a maximum decrease of 70.7% at a contact time of 180 min, compared to the mean counts obtained with 3 and 5 g/L, concentrations that resulted in reductions of 90.1% and 94.3%, respectively. Furthermore, in the presence of 1 g/L of bile salts, the average reductions were 27.8% and 35.2% at contact times of 60 and 120 min, respectively. Likewise, with all concentrations of bile salts tested against B. animalis, the reductions were appreciably higher with a contact time of 180 min, compared to those obtained at shorter times (Figure 5). The analysis of variance indicated an association of the B. animalis count with bile salt concentration and contact time (p = 0.000), without evidence of interaction between these factors (p = 0.969). In addition, the Tukey's test showed that the counts in the presence of 1 g/L of bile salts were statistically higher than those obtained with higher concentrations (p = 0.000) and that the counts obtained with 3 and 5 g/L were statistically similar (p = 0.710). On the other hand, the counts were significantly lower with a contact time of 180 min, compared to the other times used, whereas there were no statistical differences when comparing the counts at 60 and 120 min (p = 0.309). The transit of the probiotic through the intestinal tract implies, among other aspects, the need to survive in bile salt concentrations close to 1% for 60 to 180 min, conditions under which B. animalis was able to maintain counts similar to those obtained without exposure to bile salts. However, concentrations or times greater than these would probably affect the probiotic capacity of this microorganism. It has been indicated that the survival of probiotics under gastrointestinal conditions is related to the impermeability and stability of microcapsules (Coghetto et al., 2016). Furthermore, hydrophilic material is more easily degraded in the presence of bile salts (Arslan-Tontul & Erbas, 2017). In the case of maltodextrin, being an amphipathic polymeric component, the prolonged contact time implies a greater possibility of loss of capsular integrity. This explains why the exposure to bile salts, independent of the concentration supplied, promoted a gradual detrimental effect on the capsule, causing drastic decreases of B. animalis when a time of 180 min was used.
Considering that plantain flour is a food matrix with low water activity (aw), the use of another type of encapsulant could be explored: for example, encapsulants based on microgels and microbeads or, alternatively, a combination of encapsulant materials of an appropriate nature to prevent deterioration of the capsular integrity and allow protection of the probiotic during storage. In addition, considering that the temperature conditions during microencapsulation can affect the survival capacity of the microencapsulated microorganism during storage, it would be appropriate to improve the viability of B. animalis by applying controlled stress conditions of an oxidative, thermal, and metabolic nature prior to the microencapsulation process.
Our findings support some of these considerations. The first relates to the intrinsic capacity of the ATCC 2557 strain of B. animalis to survive the factors evaluated; it would therefore be pertinent to compare its survival capacity with that of B. animalis strains isolated from the gastrointestinal tract, as well as to establish the corresponding metagenomic understanding. Another aspect is the importance of exploring the integration of prebiotic formulations that include oligosaccharides, to improve the viability of the probiotic in the product (Langa et al., 2019). Likewise, although the in vitro evaluation of survival against the variables used allows inferences about the probiotic potential of the microorganism, it is necessary to explore simulation models that better approximate the conditions of the gastrointestinal tract (Silva et al., 2015) and to contemplate the influence of other variables such as the content and type of nutrients, the osmotic pressure, and the presence and activity of other microorganisms.
|
2020-04-23T09:07:46.216Z
|
2020-01-01T00:00:00.000
|
{
"year": 2020,
"sha1": "00f78d4f9f099f28a74ffb7b4667f8482365cfa1",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/j/bjft/a/HXRJGPZx4sgwYwgbCBHzvWw/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4c430bd1687edce69a512e93badc2ecfbaebe5bb",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
}
|
221035550
|
pes2o/s2orc
|
v3-fos-license
|
Progressive brain atrophy and clinical evolution in Parkinson’s disease
Highlights
• Cortical and subcortical atrophy is accelerated early after the onset of PD.
• Brain atrophy in PD progressed with cognitive, non-motor and mood deficits.
• Structural MRI may be useful for predicting disease progression in PD.
Introduction
Clinical manifestations are very heterogeneous among individuals with Parkinson's disease (PD) (Greenland et al., 2019). Clinical subtypes have been identified according to the presence of different motor signs and symptoms, cognitive decline, non-motor symptoms and behavioural disturbances (Greenland et al., 2019). Rate of disease progression is also variable: although 50% have reached key milestones of either postural instability or dementia within 4 years from diagnosis, almost a quarter have a good prognosis at 10 years (Greenland et al., 2019). The link between protein accumulation in the brain, the consequent brain damage and the variable clinical disease progression is currently under investigation in order to identify biomarkers .
MRI can accurately measure changes in cerebral structures providing biomarkers for monitoring PD progression. Over the years, numerous cross-sectional MRI studies demonstrated more profound grey matter (GM) damage in fronto-temporal, parietal, occipital and limbic areas and in basal ganglia in patients with moderate to severe PD (Agosta et al., 2013a;Lewis et al., 2016;Melzer et al., 2012;Sterling et al., 2016), although some degree of structural GM alterations has been observed also in the early phase of the disease (Agosta et al., 2013b;Fereshtehnejad et al., 2017;Lewis et al., 2016;Pereira et al., 2014). A recent review analyzed and resumed longitudinal structural https://doi.org/10.1016/j.nicl.2020.102374 Received 8 June 2020; Received in revised form 8 July 2020; Accepted 4 August 2020 M. Filippi, et al. NeuroImage: Clinical 28 (2020) 102374 MRI findings in PD patients . Most consistent findings showed progressive cortical atrophy accumulation in basal ganglia, temporal/hippocampal, frontal and parietal areas in de novo PD cases and patients in the early/middle phase of the disease, with the achievement of a plateau in the later stage of the disease Melzer et al., 2015;Mollenhauer et al., 2016;Sampedro et al., 2019;Sarasso et al., 2020;Sterling et al., 2016;Tessa et al., 2014). Stratifying patients according to disease severity, findings are more controversial, although showing a progressive atrophy of basal ganglia over 1 year of follow-up and a widespread cortical thinning over 3-6 years in mild to moderate PD patients (Campabadal et al., 2017;Ibarretxe-Bilbao et al., 2012;Nürnberger et al., 2017;Sarasso et al., 2020). Different studies stratified patients according to cognitive impairment (Camicioli et al., 2011;Caspell-Garcia et al., 2017;Compta et al., 2013;Foo et al., 2017;Garcia-Diaz et al., 2018;Gee et al., 2017;Hanganu et al., 2014;Mak et al., 2017Mak et al., , 2015Ramírez-Ruiz et al., 2005) but only few studies used prediction models showing that atrophy of the hippocampus, fronto-temporal areas, caudate, thalamus and accumbens might foresee mild cognitive impairment or dementia conversion in PD patients (Foo et al., 2017;Sarasso et al., 2020;Zhou et al., 2020). The association between brain structural changes and the evolution of other non-motor manifestations has been less explored (Baba et al., 2012;Hanganu et al., 2014;Ibarretxe-Bilbao et al., 2010;Wee et al., 2016). The main weaknesses of the majority of previous studies are the small samples, the short observation periods, the inclusion of only two timepoints, and the classification of patients that was performed according to a single variable. Such stratifying methods might not be appropriate considering the complexity of the disease. For instance, patients with more aggressive symptoms can have short disease duration and patients with long disease duration can present milder form of PD; patients with heterogeneous motor, non-motor and cognitive characteristics can present the same Hoehn and Yahr (HY) scoring; and the label "cognitive impairment" often covers a wide spectrum of cognitive features. An updated view upon the current knowledge suggests the necessity to stratify PD patients according to a combination of variables meaningful of patients' characteristics and not according to one single feature or gross evaluation scale . 
To the best of our knowledge, only one longitudinal clinical study by Fereshtehnejad and colleagues used a composite model that merged numerous clinical variables including motor and non-motor domains (Fereshtehnejad et al., 2017). However, despite many attempts, there is no consensus about the best clustering model for PD patients (Mestre et al., 2018;Qian and Huang, 2019). The present study reports a longitudinal observation of patients with PD at different disease stages assessed by comprehensive motor and non-motor serial evaluations and annual structural MRI scans over 4 years. Contrary to the majority of previous studies, our large PD cohort was followed-up by serial visits for a relatively long time period. A new cluster analysis was employed to define disease subgroups at the study entry based on demographic characteristics, disease duration, motor severity, pharmacological treatment, cognitive and non-motor features, in order to group patients with the most similar characteristics. This method was already adopted in our previous functional MRI study on the same sample . Moreover, composite outcomes were used for each specific domain (motor, non-motor and cognitive). The aims of our study were to investigate the pattern of progressive brain atrophy in PD according to disease stage and subtype and to elucidate to what extent cortical thinning and subcortical atrophy are related to and can predict clinical motor and non-motor evolution.
Participants
Approval was received from the local ethical standards committees on human experimentation and written informed consent was obtained from all subjects prior to study participation.
154 PD patients were prospectively recruited at the Clinic of Neurology, School of Medicine, University of Belgrade, Belgrade, Serbia, within the framework of an ongoing longitudinal project. Patients received a comprehensive evaluation in ON medication state, including clinical, cognitive and MRI assessments, at study entry and every year for a maximum of 4 years (Fig. 1). All PD patients were assessed at study entry and at 1-year and 2-year follow-ups. Patients with Hoehn and Yahr (HY) stage < 2 were also evaluated after 3 years. After the first two follow-up visits, patients with HY ≥ 2 performed a 4-year visit, as they were not able to be regularly (each year) scanned with MRI due to the severity of their symptoms or to logistic difficulties in travelling to the clinic. Patients were excluded if they had: HY > 4 and dementia (Emre et al., 2007), because such patients are usually less cooperative and may have difficulties staying still in the MRI scanner and participating in all the study visits; moderate/severe head tremor at rest; cerebrovascular disorders (including vascular parkinsonism) or intracranial masses on routine MRI; history of traumatic brain injury; any other major neurological and medical condition; and incomplete MRI or images with artefacts. Our sample included ten patients with Glucocerebrosidase (GBA) mutation, who were equally distributed among PD subgroups. Sixty age- and sex-matched healthy controls, without any neurological, psychiatric, or other disorders, were also recruited among friends and relatives of patients and by word of mouth for baseline comparison with PD patients. Healthy controls performed clinical, cognitive and MRI assessments only at baseline.
Clinical evaluation
At study entry and each follow-up visit, an experienced neurologist blinded to MRI results performed clinical assessments. Patients were examined in ON state (i.e., period when the dopaminergic medication is working and symptoms are well controlled). Demographic, general clinical and family data (sex, education, age, handedness, age at onset, side of onset, PD duration, and family history) were obtained using a semi-structured interview. Levodopa equivalent daily dose (LEDD) (Tomlinson et al., 2010) was calculated. Disease severity was defined using the HY stage score (Hoehn and Yahr, 1967) and the Unified Parkinson's Disease Rating Scale (UPDRS) (Movement Disorder Society Task Force on Rating Scales for Parkinson's, 2003). UPDRS was used to evaluate non-motor symptoms (UPDRS I), motor symptoms (UPDRS II), motor signs (UPDRS III) and motor complications (UPDRS IV). UPDRS III rigidity, axial and bradykinesia subscores were also calculated. The severity of Freezing of Gait (FoG) was assessed using the FoG questionnaire (FoG-Q) (Giladi et al., 2000). The presence of hallucinations was reported according to the UPDRS I subscore, and of dyskinesia and fluctuations according to the UPDRS IV subscores. The presence of other non-motor symptoms (i.e., gastrointestinal, urinary, olfactory, orthostatic and sexual dysfunctions) was assessed according to the Non-Motor Symptoms questionnaire (NMS-Q) (Chaudhuri et al., 2006). Sleep disorders were investigated using the REM Sleep Behaviour Disorder Screening Questionnaire (RBDSQ) (Stiasny-Kolster et al., 2007). All these variables were obtained at each time point except for NMS-Q and RBDSQ scores, which were acquired at study entry and the last visit.
Neuropsychological and behavioural evaluations
At study entry and each follow-up visit, patients performed neuropsychological and behavioural evaluations within 48 h from MRI. The same test battery was applied in healthy controls at study entry. Evaluations were performed by expert neuropsychologists, blinded to the clinical data and MRI results as previously described (Stojkovic et al., 2018). The assessment evaluated global cognition with the Addenbrooke's Cognitive Examination-revised (ACE-R); memory with the Rey Auditory Verbal Learning Test (RAVLT), and the pattern (PRM) and spatial (SRM) recognition memory tests from the Cambridge Neuropsychological Test Automated Battery (CANTAB); executive functions with the digit span backward, Intra/Extra Dimensional Set Shift test (IED) from the CANTAB, and the Stroop color-word test; attention and working memory with the digit ordering test and the letter cancellation test; language with the Boston Naming Test (BNT) and the language subtest of ACE-R; fluency with semantic and phonemic fluencies; visuospatial abilities with the Hooper Visual Organization test and the visuospatial subtest of ACE-R. Mood was evaluated with the Hamilton Depression Rating Scale score (HDRS), Hamilton Anxiety Rating scale score (HAMA) and Apathy Evaluation Scale. The presence of impulsivecompulsive behaviour (ICB) was reported according to Questionnaire for Impulsive-Compulsive Disorders in Parkinson's Disease (QUIP) (Weintraub et al., 2009).
All the neuropsychological and behavioural variables were acquired at each time point except for the QUIP score, which was obtained at study entry and the last visit.
Cluster/subtype definition
Cluster analysis based on k-medoids method for data partitioning was applied on patients using the Gower distance calculated for baseline data on demographic/general clinical information (age, sex, education, age at onset, disease duration, family history), motor symptoms and signs (HY, UPDRS II-III total, UPDRS III axial and bradykinesia, presence of dyskinesia and fluctuations, FoG-Questionnaire), LEDD (Tomlinson et al., 2010), cognitive and mood data (ACE-revised, HDRS, HAMA, Apathy Evaluation Scale), and the presence of other non-motor manifestations (hallucinations, RBD, orthostatic hypotension, olfactory, gastrointestinal, urinary, sexual dysfunctions). Missing data were imputed using the Random Forest algorithm. They were very few, ranging from 0 to 1.3% for all the 29 variables.
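A rough sketch of this clustering idea is given below: a hand-rolled Gower distance over a tiny, hypothetical mixed-type baseline table, followed by a simple PAM-style k-medoids partition. It is not the authors' implementation (which used 29 variables and Random Forest imputation), and all variable names and values are invented.

```python
# Sketch: Gower distance for mixed numeric/categorical data + naive k-medoids.
import numpy as np
import pandas as pd

def gower_distance(df, categorical):
    """Range-normalised absolute differences for numeric columns,
    simple 0/1 mismatch for categorical columns, averaged over variables."""
    n = len(df)
    dist = np.zeros((n, n))
    for col in df.columns:
        x = df[col].to_numpy()
        if col in categorical:
            d = (x[:, None] != x[None, :]).astype(float)
        else:
            x = x.astype(float)
            value_range = np.ptp(x)
            d = np.abs(x[:, None] - x[None, :])
            d = d / value_range if value_range > 0 else np.zeros((n, n))
        dist += d
    return dist / df.shape[1]

def k_medoids(dist, k, n_iter=100, seed=0):
    """Very small PAM-style partitioning around medoids."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(dist), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)
        new_medoids = []
        for c in range(k):
            members = np.flatnonzero(labels == c)
            within = dist[np.ix_(members, members)].sum(axis=1)
            new_medoids.append(members[np.argmin(within)])
        new_medoids = np.array(new_medoids)
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return labels, medoids

# Tiny hypothetical baseline table (age, disease duration, UPDRS-III, sex).
df = pd.DataFrame({
    "age":      [55, 60, 72, 68, 49, 75],
    "duration": [2, 3, 9, 8, 1, 11],
    "updrs3":   [18, 22, 45, 40, 15, 50],
    "sex":      ["M", "F", "M", "M", "F", "F"],
})
labels, _ = k_medoids(gower_distance(df, categorical={"sex"}), k=2)
print(labels)   # two clusters, roughly a "milder" and a "more severe" group
```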
Global composite outcomes
For analysis of clinical progression, we created four global composite outcomes (GCOs) as numeric indicators of prognosis (Fereshtehnejad et al., 2017), accounting for the most important clinical domains: motor signs/symptoms, non-motor symptoms, cognitive deficits, and mood. The four clinical domains included the following scores: 1) UPDRS-II and UPDRS-III (motor domain); 2) UPDRS I and the presence of non-motor symptoms based on the NMS-Q (non-motor domain); 3) a single test for each cognitive function according to the greatest mean rate of change for cognition: the selected cognitive tests were ACE-total (global cognition), semantic fluency (language), Intra/Extra Dimensional Set Shift (executive functions), letter cancellation test (attention) and Hooper (visuospatial function); and 4) HAMA, HDRS and Apathy Evaluation Scale (mood). For each GCO, the z-scores of each component were averaged. For calculating change (follow-up minus baseline), we used the mean/standard deviation from baseline as reference (Fereshtehnejad et al., 2017). Assuming that k components (i.e., z1, z2, z3, …, zk) are needed to calculate any of the GCOs, the following formula was used: GCO = (z1 + z2 + z3 + … + zk)/k. Higher GCO scores indicate worse function for motor, non-motor and mood domains, while lower GCO indicates worse cognitive performance.
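A small sketch of the GCO construction is shown below, assuming a pandas data frame per time point with hypothetical column names; it simply z-scores each component against the baseline mean/SD and averages, as described above.

```python
# Sketch of the GCO = average z-score construction, with baseline mean/SD
# used as the reference for both time points. Data and columns are invented.
import pandas as pd

def gco(scores, baseline, components):
    """Average z-score of the listed components, referenced to baseline."""
    z = (scores[components] - baseline[components].mean()) / baseline[components].std()
    return z.mean(axis=1)

baseline = pd.DataFrame({"updrs2": [8, 10, 6, 12], "updrs3": [20, 30, 15, 35]})
followup = pd.DataFrame({"updrs2": [12, 14, 7, 20], "updrs3": [28, 38, 18, 50]})

motor_components = ["updrs2", "updrs3"]
motor_gco_change = gco(followup, baseline, motor_components) - \
                   gco(baseline, baseline, motor_components)
print(motor_gco_change)   # positive values = motor worsening relative to baseline
```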
MRI analysis
MRI analysis was performed at the Neuroimaging Research Unit, IRCCS San Raffaele Scientific Institute, Milan, Italy, by experienced observers, blinded to subjects' identity. The presence of vascular abnormalities, including WM hyperintensities and lacunes, was checked on DE images.
Cortical thickness measurement
Cortical reconstruction and estimation of cortical thickness were performed on the 3D T1-weighted TFE images using the FreeSurfer image analysis suite, version 5.3 (http://surfer.nmr.mgh.harvard.edu/) (Fischl and Dale, 2000). On all 3D TFE images, the contrast between GM and white matter (WM) was enhanced by nulling out all image values below the mean intensity of the cerebrospinal fluid (CSF), and by performing a rescaling of all image intensities above threshold to the new null value. After registration to Talairach space and intensity normalization, the process involved an automatic skull stripping, which removes extra-cerebral structures, cerebellum and brainstem, by using a hybrid method combining watershed algorithms and deformable surface models. Images were carefully checked for skull stripping errors. Then, images were segmented into GM, WM, and CSF, cerebral hemispheres were separated, and subcortical structures divided from cortical components. The WM/GM boundary was tessellated and the surface was deformed following intensity gradients to optimally place WM/GM and GM/CSF borders, thus obtaining the WM and pial surfaces (Dale et al., 1999). Afterwards, surface inflation and registration to a spherical atlas were performed (Dale et al., 1999) and the cerebral cortex parcellated into 34 regions of interest (ROIs) per hemisphere, based on gyral and sulcal structures (Desikan et al., 2006). Finally, cortical thickness was estimated as the average shortest distance between the WM boundary and the pial surface. Surface maps were generated following registration of all subjects' cortical reconstructions to a common average surface and then smoothed using a surface-based Gaussian kernel of 10 mm full width half-maximum. To evaluate longitudinal cortical changes in PD patients, the four serial 3D TFE images of each subject were processed with the Freesurfer longitudinal stream (Reuter et al., 2010). Specifically, an unbiased within-subject template space and image was created from the four scans using a robust, inverse consistent registration. Several processing steps (including skull stripping, Talairach transforms, atlas registration, as well as spherical surface maps and parcellations) were then initialized on the four scans, with common information from the within-subject template. This allowed to create surface maps of the four timepoints with a significantly increased reliability and statistical power compared to those produced by the cross-sectional Freesurfer pipeline (Reuter et al., 2010). Individual surface maps were registered to a common average surface and then smoothed using a Gaussian kernel of 10 mm full width half-maximum.
Deep GM volumes
FMRIB's Integrated Registration and Segmentation Tool (FIRST) in FSL (http://www.fmrib.ox.ac.uk/fsl/first/index.html) was applied to the TFE images of each subject at each visit and used to automatically segment GM regions, i.e., caudate, pallidum, putamen, thalamus and nucleus accumbens, amygdala and hippocampus, bilaterally. Mean GM volumes were calculated and multiplied by the normalization factor derived from SIENAx to correct for subject head size (http://www.fmrib.ox.ac.uk/fsl/sienax/index.html).

Table 1: Demographic characteristics at study entry in healthy controls (HC), mild PD, mild motor-predominant PD, mild-diffuse PD and moderate-to-severe PD patients. Values are reported as mean ± standard deviation (range) or absolute and percentage frequency (%) for continuous and categorical variables, respectively. Differences between groups at baseline were assessed using one-way ANOVA (for continuous demographic and general clinical variables). P-values were adjusted for multiple comparisons controlling the False Discovery Rate (FDR) at level 0.05 using the Benjamini-Hochberg step-up procedure. Values in bold indicate statistically significant results. Abbreviations: HC = healthy controls; PD = Parkinson's disease.
2.8. Statistical analysis

2.8.1. Demographic, clinical and cognitive data

Demographic and general clinical data were compared between groups using ANOVA models or Fisher exact test. For clinical motor, non-motor and cognitive variables, Poisson regressions, which accounted for overdispersion, were performed. Changes in continuous variables over time were assessed by the annualized mean rate of change (%), calculated from the regression slope of a generalized linear model for longitudinal data (using Poisson as link function) for clinical continuous variables, using time as a continuous variable. A test for linear trend (associated with the annualized mean rate) was estimated in PD groups, and the group-by-time interaction was assessed to evaluate longitudinal between-group differences. P values were adjusted for multiple comparisons controlling the False Discovery Rate (FDR) at level 0.05 using the Benjamini-Hochberg step-up procedure. A two-sided p value < 0.05 was considered for statistical significance. Analyses were performed using SAS (Release 9.4, SAS Institute, Cary, NC, USA).
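Since the Benjamini-Hochberg step-up procedure recurs throughout the analyses, a minimal sketch of it is shown below with invented p-values; statsmodels' multipletests offers an equivalent routine, but the hand-rolled version makes the step-up rule explicit.

```python
# Sketch of the Benjamini-Hochberg step-up procedure (FDR at alpha = 0.05).
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean array marking which hypotheses are rejected."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m     # i * alpha / m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.flatnonzero(below).max()              # largest rank passing the test
        rejected[order[:k + 1]] = True               # reject all hypotheses up to rank k
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.368]
print(benjamini_hochberg(pvals))
```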
Baseline MRI findings
A cross-sectional vertex-by-vertex analysis was performed to assess differences of cortical thickness between groups at baseline, using a general linear model in FreeSurfer adjusting for age. Maps showing baseline comparisons were obtained by thresholding the t-statistic at p < 0.05, FDR-corrected for multiple comparisons. The mean cortical thickness of 34 ROIs per hemisphere (Desikan et al., 2006) and the mean GM volumes were compared between groups using ANOVA models, FDR-corrected for multiple comparisons at level of 0.05 adjusting for age (SAS).
Longitudinal MRI findings
Longitudinal changes of cortical thickness occurring within PD patient groups (mild PD; moderate-to-severe PD; mild-diffuse PD; mild motor-predominant PD) and between groups over the four timepoints (group × time interaction: mild PD vs moderate-to-severe PD; milddiffuse PD vs mild motor-predominant PD) were assessed using Linear Mixed Effects Models in Freesurfer (Bernal-Rusiel et al., 2013) adjusting for age, LEDD change over time, and time interval between baseline and follow-up scans. Maps showing the rate of cortical thinning over time were obtained by thresholding the t-statistic at p < 0.05, FDR-corrected for multiple comparisons.
Changes over time in the mean cortical thickness of the 34 ROIs and the mean GM volumes were assessed by the annualized mean rate of change (%), calculated from the regression slope of ANOVA model for longitudinal data, using time as continuous variable. Test for linear trend (associated with the annualized mean rate) was estimated in PD groups and group-by-time interaction was assessed to evaluate longitudinal between-group differences (mild PD vs moderate-to-severe PD; mild-diffuse PD vs mild motor-predominant PD). Such models were adjusted for age and LEDD (treated as time-varying covariate). P values were adjusted for multiple comparisons controlling the FDR at level 0.05 using Benjamini-Hochberg step-up procedure. Two-sided p value < 0.05 was considered for statistical significance (SAS).
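As a simplified analogue of the longitudinal models described above (not FreeSurfer's spatiotemporal LME of Bernal-Rusiel et al.), the sketch below fits a random-intercept mixed model to simulated thickness data with a group-by-time interaction; group sizes, slopes and noise levels are invented.

```python
# Sketch: random-intercept linear mixed model for longitudinal cortical
# thickness with a group-by-time interaction term. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
subjects, visits = 40, 4
rows = []
for s in range(subjects):
    group = "moderate" if s < 20 else "mild"
    base = rng.normal(2.5, 0.1)                      # baseline thickness, mm
    slope = -0.03 if group == "moderate" else -0.01  # simulated annual thinning
    age = rng.normal(62, 8)
    for t in range(visits):
        rows.append({"subject": s, "group": group, "time": t, "age": age,
                     "thickness": base + slope * t + rng.normal(0, 0.02)})
df = pd.DataFrame(rows)

model = smf.mixedlm("thickness ~ time * group + age", df, groups=df["subject"])
result = model.fit()
print(result.summary())   # the time:group term tests the group-by-time interaction
```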
MRI prediction models of clinical evolution
In each PD group, linear regression models assessed the associations of baseline MRI metrics (which were found to be significantly different between groups) and baseline MRI metrics + 1-year change with the four GCOs. Stepwise model selection procedure was applied to candidate baseline MRI metrics and 1-year changes chosen a priori on the basis of MRI variables that were significantly different between patients and controls at baseline and that showed significant annualized mean rate of change in each group (significance level for entry and staying into the model: p = 0.10). Each GCO was considered as the dependent variable into each model, which also included age, baseline LEDD, and individual follow-up duration (independent variables). R 2 goodness of fit statistic was estimated for each model at issue, for each PD subtype separately. Two-sided p value < 0.05 was considered for statistical significance. Analyses were performed using SAS.
Baseline clinical findings
Two PD clusters were identified at baseline: 87 patients were classified as mild PD and 67 as moderate-to-severe PD, with the latter group having lower education, earlier PD onset, longer PD duration, more severe motor signs and symptoms, more severe and frequent non-motor manifestations, more severe cognitive dysfunctions and higher LEDD (demographic variables are presented in Table 1; general clinical variables in Table 2; and cognitive variables in Supplemental table 1). Within the mild PD cluster, two clinical subtypes were further identified: mild motor-predominant (N = 43) and mild-diffuse (N = 44), with the latter group being slightly older, more frequently male, and having later PD onset, shorter PD duration, more frequent non-motor manifestations (i.e., REM sleep behaviour disorders and urinary dysfunction) and greater global cognitive dysfunction and memory deficits (demographic variables are presented in Table 1; general clinical variables in Table 3; and cognitive variables in Supplemental table 2).
Longitudinal clinical findings
Clinical and cognitive changes in PD subtypes are reported in Tables 2 and 3, Supplemental Tables 1 and 2. Over time, mild PD compared to moderate-to-severe PD patients showed greater worsening of motor variables, while moderate-to-severe patients showed greater worsening of cognitive abilities. Both mild and moderate-to-severe PD groups showed a significant worsening of UPDRS I, depression and apathy scores, without differences between groups over time. Only the moderate-to-severe group showed anxiety worsening over the follow-up. Both mild and moderate-to-severe PD groups showed an increased frequency of fluctuations (mild FDR-corrected p < 0.001; moderate-to-severe FDR-corrected p = 0.01) and dyskinesia (mild FDR-corrected p = 0.02; moderate-to-severe FDR-corrected p = 0.01) over time, with mild PD showing also an increased frequency of REM sleep behavior disorders (FDR-corrected p = 0.001), orthostatic symptoms (FDR-corrected p = 0.03) and hallucinations (FDR-corrected p = 0.04).
Within the mild PD group, both mild-diffuse and motor-predominant PD clusters worsened in all motor variables and UPDRS I, without any significant difference between groups in time (Table 3). Mild motor-predominant cases had an increased frequency of orthostatic symptoms (FDR-corrected p = 0.04), dyskinesia (FDR-corrected p = 0.03), fluctuations (FDR-corrected p = 0.002) and REM sleep behavior disorders (FDR-corrected p = 0.02), while mild-diffuse PD cases showed increased frequency of fluctuations (FDR-corrected p = 0.005). Mild-diffuse PD patients showed significant worsening of executive functions, attention and visuospatial abilities, with no difference between groups over time. Both mild-diffuse and mild motor-predominant PD groups showed worsening of depression and apathy, with no difference between groups over time.
Cortical thickness
A widespread pattern of bilateral cortical thinning involving frontal, parietal, temporal and occipital lobes was found in moderate-to-severe patients relative to healthy controls and mild PD patients (Fig. 2A). No significant cortical thickness differences were found in mild PD relative to healthy controls (Fig. 2B). A diffuse pattern of cortical thinning was observed in moderate-to-severe relative to mild motor-predominant PD patients (Fig. 3A). When compared to mild-diffuse PD, moderate-to-severe patients showed few spots of cortical thinning in parietal and occipital lobes bilaterally, right temporo-parietal junction and rostral middle frontal gyrus (Fig. 3B). No significant cortical thickness differences were found when mild motor-predominant and mild-diffuse PD patients were compared with healthy controls and each other (Fig. 4A and B). Results were significant at FDR-corrected p < 0.05.
GM volumes
Results are shown in Table 4. At baseline, moderate-to-severe PD patients showed a reduced volume of bilateral caudate nuclei and right hippocampus relative to healthy controls and mild PD patients. Mild-diffuse PD patients showed a reduced volume of the right hippocampus relative to mild motor-predominant PD subjects. Results were significant at FDR-corrected p < 0.05.
Cortical thickness over one and two years of follow-up
Over one year of follow-up, only mild motor-predominant PD patients showed cortical atrophy (Fig. 4A); no longitudinal atrophy changes were observed in the other PD groups ( Fig. 2A, 2B and 4B). In the comparison between groups over one year of follow-up, only mild motor-predominant relative to mild-diffuse PD patients showed greater atrophy accumulation in the left temporal lobe (group × time interaction; Fig. 4A). Results were significant at FDR-corrected p < 0.05.
Over two years of follow-up, mild PD patients showed widespread progressive cortical thinning in the left hemisphere (mainly in the temporal and parietal lobes) (Fig. 2B), while moderate-to-severe PD did not show cortical atrophy ( Fig. 2A). Cortical thickness changes over two years of follow-up were not significantly different between mild and moderate-to-severe PD patients (group × time interaction).
When mild PD groups were analyzed separately, mild motor-predominant PD cases showed cortical thinning in few regions of the left hemisphere (Fig. 4A), while mild-diffuse patients did not show any cortical atrophy over two years of follow-up (Fig. 4B). However, the direct comparison between mild motor-predominant and mild-diffuse PD groups did not show any significant difference in cortical thickness changes over two years (group × time interaction). Results were significant at FDR-corrected p < 0.05.
Cortical thickness over the whole follow-up
Over the whole follow-up, both mild and moderate-to-severe PD clusters showed progressive cortical thinning. Widespread whole-brain cortical thinning was evident in mild PD patients (Fig. 2B), while less distributed atrophy progression was observed in moderate-to-severe PD patients (mainly in the left temporal lobe, orbitofrontal regions and occipital lobe; Fig. 2A). The direct comparison between mild and moderate-to-severe PD patients did not show any significant difference in cortical thickness changes over the whole follow-up (group × time interaction). Within the mild PD group, mild motor-predominant PD patients showed cortical thinning accumulation in a few regions of the temporal, occipital and medial frontal lobes (Fig. 4A), while mild-diffuse PD cases showed the most widespread cortical thinning (Fig. 4B). Cortical thickness changes were not significantly different between mild motor-predominant and mild-diffuse PD patients over the whole follow-up (group × time interaction). Results were significant at FDR-corrected p < 0.05.
GM volumes over one and two years of follow-up
Mild and moderate-to-severe PD patients did not show any significant change in GM volumes over one and two years. When mild cases were clustered into two groups, mild motor-predominant PD patients showed a significant atrophy of the left pallidum over one year (FDR-corrected p = 0.04, annualized mean rate of change = -2.2%). GM volume changes did not differ significantly between any of the PD groups over one and two years of follow-up (group × time interaction).
GM volumes over the whole follow-up
Mild and moderate-to-severe PD patients did not show any significant change in GM volumes over time. When mild cases were clustered into two groups, mild-diffuse PD patients showed a significant atrophy of the left hippocampus (FDR-corrected p = 0.01). GM volume changes did not differ significantly between any of the PD groups over the whole follow-up (group × time interaction). Results are shown in Table 4.
The effect of structural brain changes on clinical PD progression
Table 5 summarizes the ability of cortical thickness alterations at baseline and at baseline + 1-year change to predict clinical evolution over time in PD groups. The longitudinal changes of the 34 ROI mean cortical thickness values over the whole follow-up in PD groups were used to select variables for the prediction models and are shown in Supplemental Table 3.
Discussion
Investigating biomarkers to predict progression of PD is of high priority. Longitudinal MRI studies have the potential to provide a characterization of disease progression related to clinical manifestations and might guide our understanding of the underlying neurodegenerative processes. Using serial structural MRI data from a large sample of PD patients, we showed that cortical and subcortical GM have different patterns and rates of atrophy according to disease stage and clinical subtype. Specifically, this study showed that cortical thickness analysis has a high sensitivity to progressive brain structural damage accumulation in PD, especially in the mild phase of the disease. A key finding was that baseline and 1-year cortical thinning was associated with long-term progression of motor, cognitive, non-motor and mood symptoms.
Severe brain atrophy was demonstrated in the basal ganglia, sensorimotor areas, and frontal and posterior brain regions, particularly in the bilateral occipital, temporal and parietal lobes, in moderate-to-severe PD patients at baseline. This pattern suggests widespread GM modification in PD patients with high severity of motor and non-motor symptoms, in agreement with previous MRI results (Agosta et al., 2013b; Lewis et al., 2016; Melzer et al., 2012; Sterling et al., 2016) and pathological studies (Braak et al., 2006). Importantly, the longitudinal analysis showed that no cortical thinning was present over one and two years of follow-up in moderate-to-severe patients, but that brain atrophy increased over longer periods (three to four years) in these subjects, mainly involving temporo-occipital and ventral frontal regions. Even though we did not have longitudinal MRI data from healthy controls, our findings are in line with previous studies comparing PD subjects with healthy controls over time (Mak et al., 2015). The specific damage of associative frontal, parietal, temporal and occipital regions in moderate-to-severe PD patients has been related to a spectrum of cognitive deficits (Camicioli et al., 2011; Garcia-Diaz et al., 2018; Gorges et al., 2020; Hanganu et al., 2013; Lewis et al., 2016; Mak et al., 2015). A recent longitudinal study that compared PD patients and controls over five years of follow-up showed widespread cortical (fronto-temporal and parieto-occipital) thinning in PD patients with normal cognition, more pronounced damage in patients with cognitive impairment, and a correlation between cortical thinning in the caudal anterior cingulate and lower cognitive performance in patients who converted to cognitive impairment or dementia (Gorges et al., 2020). Non-motor PD manifestations such as depression (Hanganu et al., 2017), hyposmia (Baba et al., 2012), and hallucinations (Ibarretxe-Bilbao et al., 2010) have previously been associated with a progressive loss of brain volume.
The most interesting findings were observed in the mild PD clusters. Screening patients for motor, cognitive and other non-motor features at baseline, our study identified a mild-diffuse PD cluster characterized by an initially aggressive disease and greater brain atrophy and cognitive dysfunction accumulation over time. These patients were initially older, more frequently male and had later PD onset, shorter PD duration, more frequent non-motor manifestations and more severe global cognitive dysfunction and memory deficits relative to mild motor-predominant cases. Over time, both mild subtypes showed motor and non-motor clinical evolution. However, mild-diffuse patients were more likely to have significant worsening of executive functions, attention and visuospatial abilities. Old age at onset and non-motor status are well-known critical determinants of PD prognosis (Fereshtehnejad et al., 2017; van Rooden et al., 2010). Accordingly, MRI data pointed toward a relatively early diffuse neurodegenerative process in this group. Indeed, although no atrophy was observed at baseline in either mild cluster compared with controls, mild-diffuse PD patients were only slightly different from moderate-to-severe PD cases, suggesting that the same amount of brain atrophy occurred in a shorter time span. In addition, the longitudinal analysis showed that mild-diffuse patients, like moderate-to-severe PD cases, did not accumulate atrophy over one and two years, but had the most progressive GM atrophy pattern, involving the cortex and the left hippocampus, over a longer observation period. The most probable explanation is that the moderate-to-severe and mild-diffuse groups of patients had already accumulated atrophy at an earlier phase of the disease, thus reaching a sort of plateau. This hypothesis is supported by previous studies comparing de novo PD subjects with healthy controls over time, showing that PD subjects accumulated widespread GM atrophy of the cortex and hippocampus relative to healthy subjects in the early phase of the disease (Sampedro et al., 2019; Mollenhauer et al., 2016; Tessa et al., 2014). By contrast, mild motor-predominant patients seem to show a more constant but less widespread accumulation of atrophy each year. Our results on mild-diffuse patients are in line with a recent study subtyping de novo PD patients based on a comprehensive list of clinical manifestations and biomarkers, which showed more brain atrophy in patients classified as the "diffuse malignant" subtype (Fereshtehnejad et al., 2017). Of note, previous MRI studies in mild PD cases have reported various brain structural results, ranging from the absence of cortical and subcortical alterations (Ibarretxe-Bilbao et al., 2012; Mak et al., 2015; Melzer et al., 2012; Weintraub et al., 2011) to widespread GM abnormalities in basal ganglia and fronto-temporal, parietal, and occipital areas relative to healthy controls (Pereira et al., 2014). This heterogeneity is likely due to the variability of both the clinical samples (especially the presence/absence of cognitive impairment and other non-motor symptoms) and the MRI techniques.
Together with previous evidence (Fereshtehnejad et al., 2017), our findings show that defining different subtypes of PD -early in the disease course -provides a unique opportunity to better understand the patterns of brain structural damage and to allow longitudinal assessment of disease progression.
Predicting patient trajectories is a challenge for clinicians. In this study, we demonstrated that cortical thinning at study entry and 1-year progression of atrophy predict subsequent long-term evolution of motor, cognitive, non-motor and mood dysfunctions in PD. Importantly, a greater involvement of the left inferior parietal lobe was a predictor of both motor and non-motor clinical evolution in all PD patients. The inferior parietal region has been involved in early mechanisms of compensation in PD (Tahmasian et al., 2017). Its early and severe damage may presage a more rapid clinical progression in PD. Greater inferior parietal, orbitofrontal, precentral and anterior cingulate damage predicted the development of non-motor manifestations such as gastrointestinal, urinary, olfactory, orthostatic and sexual dysfunctions, indicating that the early presence of a diffuse neurodegenerative pattern identifies PD cases at high risk for non-motor symptomatology. Severe olfactory dysfunction was previously associated with atrophy of focal brain structures, including frontal and cingulate cortices (Baba et al., 2012). Our findings highlight the well-known central role of frontal and medial occipito-temporal structural damage in determining cognitive impairment in PD patients (Compta et al., 2013; Garcia-Diaz et al., 2018; Gee et al., 2017; Gorges et al., 2020; Hanganu et al., 2014; Melzer et al., 2015; Ramírez-Ruiz et al., 2005; Sampedro et al., 2019). These results are in line with theories regarding the etiology of cognitive impairment in PD, which include striatal dysfunction that results in secondary effects on the frontal lobe, primary frontal lobe dysfunction, and more widespread cortical dysfunction secondary to global neurotransmitter system deficits. Finally, mood alterations in the early phase of the disease, including anxiety, depression and apathy, were predicted by cortical thinning of the middle frontal and postcentral regions, confirming the role of frontoparietal regions in mood control suggested by previous cross-sectional (Wee et al., 2016) and longitudinal (Hanganu et al., 2017) studies.
A further consideration concerns the left hemispheric lateralization seen in our results. It is already well known in the literature that the left hemisphere is more susceptible to damage in PD (Claassen et al., 2016) and, in general, in neurodegenerative diseases such as Alzheimer's disease, behavioral fronto-temporal dementia and non-fluent or semantic primary progressive aphasia (Janke et al., 2001; Rohrer et al., 2012; Tomlinson et al., 2010; Whitwell et al., 2013). To date, the etiology of this phenomenon is still unclear, but several reasons have been discussed, including a greater vulnerability of the dominant hemisphere and disease-specific issues. Indeed, the left hemisphere is recognized to be highly specialized for motor planning and the organization of complex movements and experienced actions, motor learning and language processing (Serrien et al., 2006; Serrien and Sovijärvi-Spapé, 2015), besides being more frequently the dominant hemisphere. Finally, left cortical atrophy could reflect the underlying nigrostriatal failure, considering that the nigrostriatal system in PD seems to be involved first on its left side (Postuma and Dagher, 2006; van der Hoorn et al., 2012).
This study is not without limitations. First, we do not have longitudinal MRI data in healthy controls, and this is a serious concern. Thus, even though our analyses are corrected for age, we cannot exclude that part of the structural changes we observed in patients was related to aging effects. However, our findings demonstrated different patterns of atrophy accumulation in PD subtypes of similar age, which are in line with previous studies including longitudinal comparisons of PD patients with healthy controls. Second, the interpretation of differences over time among groups should be considered carefully because it is focused mainly on within-group changes, while most group × time interactions were not significant. Third, we used a 1.5 T MRI scanner, which is characterized by a lower spatial resolution compared with higher field strength scanners. Fourth, several studies have attempted to divide PD patients into clusters (Greenland et al., 2019). Our data-driven subtyping needs to be validated in independent cohorts before it can be translated into real-life practical application. Integration of neuroimaging data in the cluster solution may also improve prognostic stratification of patients (Uribe et al., 2016). Fifth, the attrition rate was relatively high in moderate-to-severe PD patients, which is however in line with previous studies (Uribe et al., 2019; Sarasso et al., 2020).
In conclusion, our data suggest trajectories of brain structural changes according to PD subtypes and prognosis. Cortical and subcortical atrophy is accelerated early after disease onset and becomes prominent in later stages of the disease, suggesting that structural MRI may be useful for monitoring and predicting disease progression.
Effect of Scan Strategy on Mechanical Properties of AlSi12 Lattice Fabricated by Selective Laser Melting
In this study, the influence of scan strategies, such as scan order and scan patterns, on compressive load capacity was investigated for lattice structures made of the aluminum alloy AlSi12 and fabricated by selective laser melting. The scan order of concentric scan patterns affected compressive load capacity. Better mechanical properties were obtained when the scan order was set from the outside. Setting the scan order from the inside caused coarsening of the grains at the center of the strut, thus worsening the mechanical properties due to the reduced area fraction of the fine-grained regions. The mechanism of such grain coarsening was explained based on the heat transfer direction. Furthermore, the scan pattern also affected the size and orientation of the grains in the lower zone of the strut as well as its geometrical accuracy. A stripe pattern with a rotation of 67° from layer to layer decreased the geometrical accuracy but increased the hardness of the strut owing to the smaller size and random orientation of the grains in the lower strut zone.
Introduction
Metallic lattice structures have been actively studied owing to their light weight and industrial applications (e.g., in the automotive and aerospace industries). Lattice components are commonly fabricated by additive manufacturing systems because additive manufacturing is suitable for producing complex three-dimensional components [1]. The properties of the components fabricated by additive manufacturing vary with process parameters, namely, laser power, scanning speed, and scan strategy [2,3]. A scan strategy is a combination of the scan pattern and scan order of each layer, and there are numerous scan strategy options such as raster, concentric, rotation, mesh, fractal, etc. [2,3,4,5]. Scan strategies have mainly been evaluated at a larger scale, for tensile testing of bulk materials; finer-scale evaluation, especially for lattice structures, has not been conducted. In this study, we evaluated the influence of scan strategy on the mechanical properties of lattice structures. We used aluminum alloy powder to fabricate the lattice structure, as aluminum alloys are widely used in the automotive and aerospace industries owing to their unique features (i.e., low density, good mechanical properties, and high wear resistance) [6].
Materials and methods
Materials
We used an AlSi12 alloy, which is one of the most popular aluminum-silicon alloys for casting. Recently, the AlSi12 alloy has been widely used as a powder material for selective laser melting (SLM). The powder size distribution was D10 = 7.6 μm, D50 = 18.3 μm, and D90 = 34.2 μm.
Methods
Three scan strategies were used for fabrication, as illustrated in Fig. 1. The first strategy was a concentric scan pattern with the scan order set from the outside (C-OUT). The second employed strategy was a concentric scan pattern with the scan order set from the inside (C-IN). The third scan strategy included a stripe scan pattern with 67° rotation from layer to layer (STRIPE). The SLM process usually uses two types of scan patterns to fabricate an object. One is a fill scan for the inner zone of the object and the other is a contour scan for the outer skin zone of the object. In this study, only a fill scan was used to clearly distinguish between the effects of scan strategies on a thin strut.
Computer-aided design (CAD) data of the fabricated lattice structure described a body-centered cubic (BCC) structure with a strut diameter of 800 μm, oriented 35.3° from the horizontal plane (Fig. 2).
A 3D Systems ProX200 was used as an SLM system for fabrication of the lattice structure. The processing parameters were set at a laser power of 240 W, scanning speed of 1200 mm/s, hatch spacing between each path of 0.120 mm, and layer thickness of 0.03 mm.
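For orientation, these settings correspond to a volumetric energy density E = P/(v·h·t) of roughly 56 J/mm³. This derived quantity is not reported in the paper; the short calculation below is only an illustrative check of the arithmetic under the stated parameters.

# Quick arithmetic check (not from the paper): volumetric energy density
# E = P / (v * h * t), a commonly used summary of SLM process parameters.
laser_power_w = 240.0          # W
scan_speed_mm_s = 1200.0       # mm/s
hatch_spacing_mm = 0.120       # mm
layer_thickness_mm = 0.03      # mm

energy_density = laser_power_w / (scan_speed_mm_s * hatch_spacing_mm * layer_thickness_mm)
print(f"{energy_density:.1f} J/mm^3")   # ~ 55.6 J/mm^3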
Measurements of the compressive load capacity were conducted both parallel and perpendicular to the build direction using a SHIMADZU universal testing machine AG-100kNIS. The Vickers hardness of the node and strut was measured using a Matsuzawa Vickers hardness tester MMT-X1A with a 0.5 kgf load. A KEYENCE optical microscope VHX S550 was used to measure the geometrical dimensions of the strut cross-section. A JEOL scanning electron microscope (SEM) JSM-7001F was used to investigate the microstructure and crystal orientation in electron backscatter diffraction (EBSD) mode. Sample preparation consisted of mechanical polishing with an oxide polishing suspension (OP-S) containing colloidal silica as the abrasive.
Results and discussion
Compressive load capacity measurements
Fig. 3 shows the lattice structure before and after the load testing. Fracture was confirmed to occur at the strut. Fig. 4 shows the compressive load-displacement curves and the evaluated load capacity in the vertical and horizontal directions. It is clear from Fig. 4 that C-OUT has a higher load capacity than C-IN, and that both concentric strategies have a higher load capacity than STRIPE in both directions of compression. Therefore, we first clarified the influence of the scan order on the concentric scan pattern, and then the influence of the scan pattern. The EBSD observations reveal that the grain size in the upper zone of both struts was finer than that in the lower zone. By comparing the EBSD grain boundary maps, it was found that the grain size at the center of the strut of C-OUT was finer than that at the center of C-IN.
To examine the difference in the microstructure between the fine- and coarse-grained regions, we observed typical areas of the fine- and coarse-grained regions using SEM at higher magnification. Fig. 7 shows SEM images of the microstructures of C-OUT observed at the fine-grained region (point A) and the coarse-grained region (point B) of Fig. 6. As shown in Fig. 7(a), finer Si particles are dispersed between the fine columnar α-Al grains. On the other hand, in Fig. 7(b) coarser Si particles are formed in the dendrite arm spacing of the coarse α-Al grains. Table 1 shows the Vickers hardness test results from the upper and lower zones of the strut. The difference in the hardness of C-OUT and C-IN originates from the grain size, with the hardness of the fine-grained region being higher than that of the coarse-grained region. This result is consistent with a previous study on other aluminum alloys [7]. Furthermore, the results shown in Fig. 6 indicate that the area fraction of the fine-grained region of C-OUT is higher than that of C-IN. This suggests that the strength of the strut of C-OUT is higher than that of C-IN, which explains the difference between the load capacities of the two lattice structures.
The area fraction of the fine-grained region was affected by the scan order of the concentric scan strategy. Fig. 8 illustrates the direction of heat transfer of the concentric scan strategies with different scan orders. At each layer, the scan pattern was represented by three scan paths. In the first scan, the heat transfer direction was solely downwards. In the second and third scans, the direction of heat transfer was oriented downwards and sideways. Therefore, different scan orders resulted in a varying volume fraction and position of the metallic object acting as a heat sink. In the case of C-OUT, for instance, the temperature at the center of the strut was raised by the third scan. Then, it quickly decreased due to heat transfer in both downward and lateral directions, resulting in a finer grain size. Heat transfer in the lateral direction occurred via the structure fabricated by the first and second scans, which had a much larger volume fraction than that of the third scan. In contrast, in the case of C-IN, the temperature at the center of the strut increased from the start of the first scan and then increased further with the second and third scans, resulting in coarsening of the grain size. This suggests that the strength of the strut of C-OUT was higher than that of C-IN, which resulted in the difference between the load capacities of the two lattice structures. Fig. 9 shows the cross-sectional shapes of the STRIPE strut. It is visible that STRIPE possesses an irregular geometry, different from an ellipsoidal cross-section. Due to the nonuniform cross-section with large surface roughness, the load-bearing area of the strut was reduced.
Evaluation of the influence of scan pattern
More detailed investigation of the microstructures and hardness of the struts was conducted for C-OUT and STRIPE. Fig. 10 shows an EBSD grain boundary map of a vertical cross-section of the STRIPE strut. Most grains were classified as fine-grained in this cross-section. Therefore, the coarse-grained region was not sharply defined in the STRIPE sample, unlike in the C-OUT sample, as shown in Fig. 6(a). Grain size in the lower zone of the strut was found to be smaller than that of C-OUT in Fig. 6(a).
From Table 1, Vickers hardness in the lower zone of the strut of STRIPE was found to be higher than that of C-OUT. The increased hardness of STRIPE suggests that the strength of the strut of STRIPE was higher than that of C-OUT.
To investigate the microstructure differences between C-OUT and STRIPE, the grain orientations were evaluated. Fig. 11 shows an EBSD inverse pole figure (IPF) map and a pole figure (PF) of a vertical cross-section of C-OUT. In Fig. 11(a), the colors represent grain orientation. The red color indicates that the orientation of the grain was <001>, i.e., parallel to the Z-direction. In Fig. 11(a), in the upper zone of the strut, the red grains were mainly distributed in three zones indicated by black dotted arrows. The preferential orientation of the red regions was parallel to the Z-direction. However, at the boundaries of these red zones, no preferential orientation parallel to the Z-direction was found. The locations of the red zones spatially overlapped with the locations of the laser scan paths. The difference in the observed grain orientations was explained by the curved boundaries of the melt pool, as mentioned in a previous study [8]. Fig. 11(b) shows the distribution of grain orientations. In the lower zone of the strut in Fig. 11(b), the preferential orientations were found to be around 35° to the horizontal plane, consistent with the strut angle of 35.3°. This indicates that the heat transfer direction was along the strut in its lower zone. It also implies that the grains were columnar and anisotropic, giving direction-dependent mechanical properties. Fig. 12 shows an EBSD IPF map and a PF of a vertical cross-section of STRIPE. In the upper zone, the location of the grains oriented parallel to the Z-direction varies because the scan path was rotated by 67° after each layer. In the lower zone, on the other hand, no grains possess any preferential orientation, suggesting that the grains were equiaxed. Therefore, STRIPE is preferable from a microstructural point of view, as equiaxed grains are suitable for isotropic mechanical properties.
From the above-mentioned results, C-OUT provides a favorable heat transfer direction from the inside to the outside. It prevents grain coarsening at the center of the strut and leads to higher geometrical accuracy. In addition, STRIPE provides smaller and equiaxed grains. Therefore, it is recommended to use a combination of C-OUT for the outside and STRIPE for the inside to improve the load capacity.
Conclusions
The investigation of the mechanical properties of metallic lattices revealed that C-OUT had a higher load capacity than C-IN due to the higher area fraction of the fine-grained region. Coarsening of grains at the center of the strut occurred in C-IN. The mechanism was explained by the direction of heat transfer during fabrication. The scan pattern also affected the hardness of the lower zone of the strut and its geometrical accuracy. Although the hardness of STRIPE was higher than that of C-OUT due to the finer grain size, the load capacity of STRIPE was lower than that of C-OUT, which was caused by the low geometrical accuracy of STRIPE. Therefore, the presented results suggest that load capacity can be improved by using a combined scan strategy of C-OUT and STRIPE.
The Biophysical Effects of Neolithic Island Colonization: General Dynamics and Sociocultural Implications
Does anthropogenic environmental change constrain long-term sociopolitical outcomes? It is clear that human colonization of islands radically alters their biological and physical systems. Despite considerable contextual variability in local specificities of this alteration, I argue that these processes are to some extent regular, predictable, and have socio-political implications. Reviewing the data for post-colonization ecodynamics, I show that Neolithic colonization of previously insulated habitats drives biotic homogenization. I argue that we should expect such homogenization to promote regular types of change in biophysical systems, types of change that can be described in sum as environmentally convergent. Such convergence should have significant implications for human social organization over the long term, and general dynamics of this sort are relevant in the context of understanding remarkably similar social evolutionary trajectories towards wealth inequality not only on islands, but also more generally.
Introduction: Long-Term Social Consequences of Environmental Change?
Are there ineluctable, long-term sociopolitical consequences of anthropogenic environmental change? To what extent are these consequences constrained by, and emergent from, initial socioecological dynamics? In this paper I address the relationship between radical environmental reorganization and its sociopolitical implications through the lens of human-island ecodynamics. I argue that the anthropogenic introduction of invasive species and the eradication of endemics on islands during the Holocene reduced sum biodiversity in predictable ways that, in theory, should have exercised pronounced homogenizing effects on the function of biophysical systems. Such homogenization should, in turn, have promoted parallel adaptive strategies in discrete human populations.
The arrival of humans on islands throughout the later Quaternary exacerbated extirpation and extinction in endemic biotas already predisposed towards fragility, differentially reducing local and sum global taxonomic diversity. Repeated extinction events wrecked endemic ecologies while intact anthropic ecosystems of domesticates, commensals, and parasites were simultaneously introduced wholesale. In the first section of this paper, I review the evidence for these processes at a global scale, showing that food-producing human populations in particular radically reorganize insular biotas. Moving beyond area-specific studies of insular ecology, I argue that these twin processes of extinction and introduction should have driven ecosystems separated in time and space towards parallel types of organization, a process of biotic homogenization (McKinney and Lockwood 1999). Because abiotic systems (i.e., soil composition; hydrological dynamics) interface with biota to form biophysical systems, emerging anthropogenic homogeneity in biotic organization also drove homogeneity in insular biophysical organization. I utilize modern and experimentally derived data to explore in detail the likely aspects of this biophysical homogeneity, including the emergence of parallel pedological and hydrological dynamics with ramifications for ecosystem organization and function. I build on the general dynamic theory outlined by Whittaker and colleagues (Whittaker et al. 2007, 2008; Borregaard et al. 2016) and suggest that these processes in sum are best described as environmental convergence, notwithstanding the recognition that islands with varied geological histories may have moved towards convergence along discrete pathways.
My main objective, in linking these well-understood but infrequently synthesized types of process, is to emphasize that discrete instances of prehistoric human colonization of islands should nonetheless in theory drive structurally similar environmental dynamics. Biophysical processes of this sort have sociocultural implications, however. In conclusion, I suggest that these general dynamics may have imposed parallel types of constraint on the development of otherwise unrelated island societies, forcing mitigation strategies centered around capital investment that, in turn, promoted exaggerated wealth inequality. The relationship between environmental constraint, returns on capital investment, and the emergence of sociocultural complexity on islands (as exemplary, on reduced spatial scales, of likely larger and longer-term general dynamics) demands substantive future attention.
This argument is not data-driven, nor does it lean heavily on exemplification via detailed case studies (indeed, in many cases, it is not clear that the data required to support an argument of this sort exist). Rather, it is explicitly deductive. My interest here is in suggesting: (a) that a particular recurring confluence of ontogenetic conditions-the collision of certain types of biophysical organization (insularity) with human food-production subsistence and its associated ecodynamics (introduction/extinction)-necessarily and radically constrains resulting socioecological process; and (b) that the existence of this constraint is simultaneously overlooked but yet extremely significant for social scientists seeking to account for apparent parallelism or convergence in long-term social and political trends, especially so in the current context of large-scale environmental change. The aim is not to propose a series of deterministic causations, but to emphasize that, just as human subsistence and behavioral choices narrow the window of possible environmental trajectories, so this narrowing in turn re-imposes restrictions on the panorama of human behavior.
Human Colonization and its Biological Effects
Extinction and Extirpation
The expansion of our genus around the planet has been conditioned by environmental organization, but it has also recursively altered this organization both directly and indirectly. Direct predation, translocation and introduction, and domestication have all driven biotic change, but human modification of the planet (from forest clearance to changing the chemical composition of the atmosphere and ocean) has also had impacts on other taxa. This range of processes has promoted both extinction and the establishment of invasive taxa beyond their native ranges, thereby radically altering ecological structure but also, in the longer term, affecting genetic variability and remolding evolutionary landscapes.
The most obvious examples of anthropogenic extinctions come from the recent past and the present. A preponderance of evidence suggests that the planet is experiencing a mass extinction event (i.e., loss of >75% of extant species within a restricted timeframe) with the likelihood that this process is primarily anthropogenic (Barnosky et al. 2011;Ceballos et al. 2015). Despite the majority of these extinctions clustering towards the more recent end of the Holocene, the 'long tail' of this process extends back into the deeper Quaternary. I ignore here the controversy surrounding whether the extinction of the Pleistocene continental megafauna was driven by human agency, environmental change, or more complex feedback dynamics (Burney and Flannery 2005;Braje and Erlandson 2013;Cooper et al. 2015;Stuart 2015;Villavicencio et al. 2015;Bartlett et al. 2015). What is less open to debate is the profound effect colonizing humans have had on smaller, previously insulated landmasses.
The physiographic organization of islands-as relatively small, discrete types of habitat surrounded by spatially extensive qualitatively divergent habitat-profoundly influences their biotas. 1 Island biotas (not just faunas, although floras seem in general to respond differently to invasion (Gilbert and Levine 2013; Downey and Richardson 2016)) are peculiarly exposed to extirpation and extinction risks deriving from biological invasions; far more so than continental equivalents (Loehle and Eschenbach 2012; Szabo et al. 2012; Weigelt et al. 2013). This relates to a number of factors that combine to render insular populations fragile. Most obviously, open ocean impinges on dispersal via filtration effects, reducing gene flow between colonists and source populations and correspondingly driving allopatric speciation. Consequently tending to be rich in endemics, island biotas in total thereby contribute disproportionately to sum global biodiversity, with the result that islands offer more candidate taxa for extinction per unit area than non-insular environments. This threat is exacerbated by the restrictions that insular size imposes on population size (Brose et al. 2004) of a given taxon (endemic or not)-keeping populations low and accordingly more greatly exposed to demographic stochastic perturbations (Lande 1993)-and the effective fragmentation ocean promotes in metapopulations (see Rybicki and Hanski 2013). This, alongside the tendency for insular taxa to be very specialized and, in the absence of complex trophic structures, predator-naïve, renders island biotas highly responsive (Fordham and Brook 2010), although clearly and accordingly the degree of insular ecological sensitivity varies along a number of dimensions (including size and remoteness but also geologic antiquity and composition and 'type'-oceanic (high/atoll), continental) that causes island ecodynamics to vary broadly.
1 Clearly, in this sense, Afro-Eurasia is just as insular as Tristan da Cunha; interest lies in scalar difference covering-in this instance-almost five orders of magnitude, permitting trophic systems whose difference in complexity (measured, for example, either in species number or total population size) probably exceeds five orders of magnitude. See Brose et al. 2004.
It should be emphasized that invading species-whether humans or their co-traveler taxa, such as commensals or domesticates (Chapuis et al. 1994;Nogales et al. 2004;Wanless et al. 2007)-do not simply exert pressure via direct predation, although clearly this is significant. In affecting the demographic robustness of a species by preying on it (deliberate forest clearance can be understood as a process comparable to predation (McWethy et al. 2010)), an invader promotes dynamism in the trophic neighbors of that species, potentially driving ecological release in those it consumes and demographic crashes in those that consume it (O'Dowd et al. 2003). The potential for such dynamics to affect the wider system are clear. Importantly, extinction need not be necessary to cause such ecological cascades so much as substantial population reduction, although the permanent removal of a species from an ecosystem (i.e., taxonomic diversity loss) should have observable consequences (i.e., functional diversity loss) (cf. Baiser and Lockwood 2011). Pressure is also exerted by invasive species on ecological relationships beyond the purely oppositional (i.e., predation and competition), with mutualisms, commensalisms, and parasitisms likely to be subject to disruption following extirpation or extinction events (Traveset and Richardson 2006;Sekercioglu 2011;Boyer and Jetz 2014). In general, the processes leading to extinction and extirpation following invasion are complex (e.g., Hanna and Cardillo 2014), but the result is a uniform, gross trend away from elevated biodiversity.
We should accordingly expect the arrival of humans on islands to be transformative. This is borne out in the data. The effect of human colonization (along with co-traveler taxa) on islands is evident from the Upper Palaeolithic, although there exists a possibility that we glimpse ephemeral traces of comparable processes in deep time, with the extinction of the proboscid Stegodon sondaari on Flores at 0.9 mya (van den ) intriguingly close to the earliest dates for hominin incursion into the Lesser Sundas at 1.02 mya (Brumm et al. 2010). The intentional colonization of island groups by hunter-gatherers belonging to our own species, certainly in the Mediterranean, but also in the Caribbean (Steadman et al. 2005), tracks closely with the disappearance of suites of endemic species, with 88.9% of mammalian endemics lost in the insular Mediterranean at the Pleistocene-Holocene boundary (Alcover et al. 1998). The impacts of hunter-gatherers in Near Oceania are harder to gauge. Steadman et al. (1999) suggest anthropogenic avifaunal loss in the Solomons and extinction and range-contraction of the varanids (giant monitor lizards) might be associated with human activity (Hocknull et al. 2009). In general, the patchy data hint at a role for humans in the eradication of several taxa including Stegodon and the elephantid Palaeloxodon, a role which-not least because of sea level rise driving processes of range fragmentation in Sunda and Sahul-is hard to disentangle from broader environmental processes (Louys et al. 2007). The same is true of the Californian Channel Islands; extinctions at colonization seem to have been limited to the duck Chendytes lawi, but there is evidence for extensive resource depletion and concomitant ecosystem effects (Rick et al. 2012;Braje et al. 2017b).
A more reliable signature corresponds with the spread of agricultural, agropastoral, or horticultural (i.e., food-producing; 'Neolithic' hereafter) 2 lifeways (see Braje et al. 2017a). In the Caribbean and Mediterranean, with their previous exposure to hunter-gatherers, extinctions continued to occur during the establishment and expansion of agropastoral lifeways (e.g., Steadman and Franklin 2015; Bover et al. 2016). Remote Oceania, which in contrast had not experienced human presence until the arrival of food-producing communities and was by virtue of its geographic organization rich in endemic taxa, underwent radical change at and beyond human colonization horizons, with endemic avifauna and flora in particular witnessing catastrophic losses (Steadman and Martin 2003; Boyer 2010). These losses are almost certainly in part attributable to large-scale environmental change induced, not only by human behaviors, but also by ecological release in commensal species, in this case the Polynesian Rat Rattus exulans (Hunt 2007; Athens 2009). The Pacific example is striking, not least in the extent to which exaggerated environmental disruption witnessed between 3000 and 500 BP in Remote Oceania prefigures the final cataclysmic arrival of European colonists, their livestock, and their diseases after 500 BP (a topic which I do not consider in detail here). This should not obscure, however, that the rapid traumatization of the endemic Pacific echoed more drawn-out but equally dramatic Holocene eradications in the Mediterranean and the Caribbean.
Translocation and Invasion
The arrival of humans and their co-traveler species in habitats previously insulated from them drives radical and highly variable ecological change, but the outcome is almost always the same: pronounced biodiversity loss. Such loss is not only an outcome of processes of attrition, however, but also of processes of translocation and invasion. It should be stressed that ecologists distinguish between newly-arrived species according to their long-term reproductive success and degree of associated ecological impact, in terms of translocated (taxa that reach a new habitat), introduced (taxa that reach a new habitat yet have minimal impact), and invasive (taxa that reach a new habitat and have large-scale impact). Many of the taxa discussed subsequently are commonly viewed as invasive (e.g., Rattus exulans), yet it is worth emphasizing that it may be the case that many introduced taxa-whose impact is not readily observable at the scales or timeframes at which ecological analysis of invasion tends to be undertaken (e.g., Thiara spp.)-may contribute to biodiversity reduction and homogenizing processes. During incipient domestication events around the planet, such species have been bundled together in recurring packages which, in effect, represent artificial ecosystems comprising finite sets of repeated ecosystemic interactions (Çilingiroĝlu 2005). During island colonization, these bundles have often been transplanted wholesale (Kirch 1984), preserving in part extant ecosystemic links between groups of species that co-evolved under the peculiar but aggressively selective conditions of domestication (Boivin et al. 2016).
In the Mediterranean, and subsequently much of the planet after 500 BP, translocations included the classic Southwest Asian Neolithic package of cereals and pulses including barley (Hordeum spp.) and wheat (Triticum spp.) (Willcox 2013; Arranz-Otaeguia et al. 2016) alongside domesticated ungulates (most notably cow, Bos taurus; pig, Sus scrofa domesticus; sheep, Ovis aries; and goat, Capra hircus) (Zeder 2008). In the Pacific (and across the Indian Ocean, to Madagascar and the Comoros), major translocations include coconut (Cocos nucifera), taro (Colocasia esculenta), yam (Dioscorea spp.), banana (Musa spp.), and breadfruit (Artocarpus altilis), alongside pig, dog, and the domesticated chicken, Gallus gallus domesticus. It should be noted that in the Pacific there exists considerable variability among islands regarding the precise composition of domesticated, introduced biotas (for example, all three domesticated fauna rarely co-occur). Moreover, the precise sources of these species and the specific dynamics of their domestication are considerably more contentious than in the Mediterranean, but in general a Southeast Asian and Island Southeast Asian origin is a common theme (e.g., Denham 2011; Gunn et al. 2011; Barker and Richards 2013; Pitt et al. 2016), with pigs perhaps deriving from domesticated stock in mainland China, but with potential parallel domestication events elsewhere in Southeast Asia (Larson et al. 2010; Bellwood 2011). The Caribbean example diverges slightly because of the absence of domesticated ungulates or cereals (excepting of course maize, Zea mays) from the Americas. Continental faunas were translocated, however, possibly to provide naturally corralled stocks of non-domesticated protein; instances include armadillo (Dasypus sp.), agouti (Dasyprocta sp.), guinea pig (Cavia sp.), peccary (Tayassu/Pecari sp.), and opossum (Didelphis sp.) (Giovas et al. 2011, 2016; also Stahl 2009). The extent of deliberate translocations of plant foods is not immediately clear. Sporadic evidence for other early Antillean translocations includes Manilkara spp., as Fitzpatrick (2015) notes in providing a comprehensive overview of flora exploited for nutritional purposes during the Ceramic Age (traditionally understood to have involved much more intensive and deliberate management of domesticates and quasi-domesticates than the preceding Archaic), including common bean (Phaseolus vulgaris), sweet potato (Ipomoea batatas), and marunguey (Zamia spp.), although it is not clear whether the Antilles were included in the native ranges of these taxa. Maize itself-certainly a translocation-is evidenced in human dental calculus from the island Caribbean from around 2000 BP (Mickleburgh and Pagan-Jimenez 2012). Data from Trinidad may push this introduction back to the Mid Holocene (Pagán-Jiménez et al. 2015), although the insular status of the island during Mid Holocene seastands is unclear and, even if Trinidad was insular during the Mid Holocene, the over-water distance to the South American mainland would have been negligible.
Various other taxa have been translocated accidentally, such as the house mouse Mus musculus domesticus, as well as various species of shrew, during Mediterranean Neolithicization (Vigne 1988;Cucchi et al. 2005), and the aforementioned Polynesian Rat, Rattus exulans, during the colonization of Polynesia (although there is debate regarding whether this was indeed an accidental or deliberate translocation (Matisoo-Smith et al. 1998;Allen 2015)). Similarly, murids of several types were unintentionally translocated in the Indian Ocean (Fuller et al. 2011). The process was not limited to vertebrates; other instances include the translocation of the aquatic snail Thiara spp. and other molluscs to Remote Oceania, potentially in the context of wetland taro cultivation (Kirch 1996;Kirch et al. 2009). There are also other, less archaeologically conspicuous examples. Movement of exotic pathogens has attracted most attention within the context of European expansion and colonialism after 500 BP (e.g., Nunn and Qian 2010), but colonizing Neolithic humans inevitably translocated novel pathogenic bacteria and viruses (as well as relatively benign unicellular organisms; e.g., gastrointestinal microbiota) to previously insulated environments, provoking new, if now unrecoverable, ecological relationships.
In the Mediterranean, the Caribbean, and the Pacific, Neolithic colonization-essentially comprising a series of repeated biological invasions-drove substantial change in biotic organization. These processes were to an extent regionally specific (as regards the ecological coherence of discrete types of Neolithic package) but, in terms of general process, are comparable; I return to this distinction shortly. These initial Neolithic island colonizations were followed by both intentional and accidental translocations, as well as extinctions (Ratcliffe and Calaby 1958;Helmus et al. 2014), on a prodigious scale, associated with the European expansion over the last half millennium. This second phase of introductions has clearly exacerbated and more comprehensively generalized these biotic changes. The time depth of these processes beyond the Late Holocene (and concomitant implications for human social development) is, however, less frequently emphasized, and accordingly I focus on initial Neolithic colonization. Specifically, I suggest that drastic, short-term reduction in sum biodiversity on islands is essentially a process of homogenization. This homogenization is likely to have had effects and these effects in turn should have severely constrained the landscape of human subsistence choices in the medium to long-term.
Neolithic Colonization as a Driver of Biotic Homogenization
The effect of island colonization by humans over the recent Quaternary has not simply been loss of biodiversity and destruction of endemic ecosystems, but replacement of these ecosystems with highly anthropogenic portmanteau ecologies. Evidently, at local scales each ecological situation is contextually unique, and we should expect different types of insular geologic and biogeographic environments to exhibit varied responsiveness, but the repeated introduction of species with finite behavioral repertoires encouraged similar types of ecosystemic relationships; for example, impressively varied late Pleistocene ungulate herbivory on insular Mediterranean maquis flora, involving a range of higher taxa, was replaced during the Holocene largely by two species of Caprini. This is a process of biotic homogenization.
Biotic homogenization occurs when human activity affects the organization of biota in a non-random fashion, negatively affecting a larger number of taxa and positively affecting a smaller number. The outcome, as large numbers of taxa undergo extinction and smaller numbers of taxa experience range expansion, is reduced spatial biodiversity (McKinney and Lockwood 1999;Olden 2006). Taxonomic variety should not be simply equated with behavioral diversity; ecological roles performed by a given endemic species may, following its eradication, have been closely approximated by an invasive species, and we cannot assume that loss of ecosystem function necessarily accompanies species loss (i.e., we must differentiate taxonomic and functional homogenization (Olden et al. 2004;Baiser and Lockwood 2011)). In general, however, we can expect repeated eradication of endemics and introductions of very limited biotic suites to suppress behavioral diversity in a manner comparable to, if not matching in extent, loss of genetic diversity, driving the emergence of dominant, repeated sets of ecological interactions.
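One way this process is commonly quantified in the ecological literature is as an increase in mean pairwise compositional similarity among assemblages. The toy sketch below is an illustration only; the species sets are invented for the example (loosely echoing the Mediterranean package discussed above) and are not drawn from any dataset analyzed here. It shows how losing endemics while adding the same introduced suite everywhere raises mean pairwise Jaccard similarity.

# Toy illustration (not from the paper): homogenization as an increase in mean
# pairwise Jaccard similarity among island species assemblages.
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

def mean_pairwise_similarity(assemblages: list[set]) -> float:
    pairs = list(combinations(assemblages, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# hypothetical pre-colonization assemblages, each dominated by endemics
before = [{"endemic_a1", "endemic_a2", "shared_seabird"},
          {"endemic_b1", "endemic_b2", "shared_seabird"},
          {"endemic_c1", "endemic_c2", "shared_seabird"}]

# after colonization: some endemics lost, the same introduced package added everywhere
package = {"sheep", "goat", "barley", "house_mouse"}
after = [{"endemic_a1", "shared_seabird"} | package,
         {"endemic_b1", "shared_seabird"} | package,
         {"endemic_c1", "shared_seabird"} | package]

print(mean_pairwise_similarity(before), mean_pairwise_similarity(after))
# mean similarity rises (0.20 -> ~0.71): the assemblages have become more homogeneous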
Different types of environment are likely to witness different rates and types of homogenization process. There is reason to suppose that the same factors that make island biota relatively responsive may also combine to make them more susceptible to homogenization processes (Cassey et al. 2007). This perhaps in part relates to the possibility that homogenization is greater at low species richness (Olden and Poff 2003), a condition characteristic of insular environments, and that it depresses those factors that encourage allopatric speciation (Olden et al. 2004). While the general trend is towards insular biotic homogenization (Rosenblad and Sax 2016), it does appear that fauna and flora gravitate towards homogeneity at differing rates (Shaw et al. 2010;but Kueffer et al. 2010), although scales of temporal analysis adopted affect assessment of overall process (Rosenblad and Sax 2016). The key recognition, based on the foregoing review of extinctions and translocations, is that we can establish homogenization as an active ecological process with a substantial time depth, potentially up to and beyond the Pleistocene-Holocene boundary (although more recent in, e.g., Remote Oceania).
I suggested above that these processes of homogenization were regionally specific regarding the ecological coherence of Neolithic packages but, in terms of general process, were comparable. Unlike current homogenizing processes, individual island theaters associated with discrete Neolithic packages experienced greater intra-regional than inter-regional convergence prior to 500 BP; related invasive dynamics on, for example, Mid Holocene Cyprus and Crete drove greater homogeneity between them than between Cyprus and Late Holocene Puerto Rico. This is not substantially problematic, however, as-from the perspective of the wider implications of processes, rather than within a context of conservation biology-it is the outcomes of such processes, rather than the degree of sum global homogeneity, that is most relevant.
This review of data for extinction and translocation indicates that biotic homogenization has been an active ecological process on islands over the later Quaternary. I now explore the possibility that biotic homogenization promoted similar processes beyond the biosphere: in particular, that homogenizing biotas, interfacing with abiotic systems, should have driven regular and repeated biophysical changes that resulted in processes of insular environmental convergence.
Biophysical Outcomes of Insular Biotic Homogenization
Biota interface with abiotic physical systems. Various organic processes, including growth, metabolism, mobility, and decay, involve the structural reorganization of the abiotic environment around the organism. Accordingly, it is appropriate to describe those aspects of environmental systems in which biotic and abiotic processes are especially intertwined (most notably pedology and hydrology) as biophysical systems. Biotic homogenization has implications for insular biophysical systems.
Soil Biogeochemistry and Integrity
Soils interface with organisms in a variety of qualitatively discrete yet related ways, and are central to ecosystem organization and function (Vereecken et al. 2016). Most obviously, microbiota in soils form parts of larger ecosystems, but more strictly biophysical relationships include the manner in which biota affect the biogeochemical composition of soils, as well as their mechanical and structural properties. An important resulting recognition is that structural cohesion of soils is not independent of biogeochemical and taxonomic diversity in soil and associated plant communities, and that the types of biotic homogenizing processes evident on islands during Holocene colonization have, in other contexts, driven predictable processes as regards soil content and integrity.
The composition of plant communities is a primary determinant of soil biogeochemistry, as flora and soils exist in dynamic biochemical feedback relationships (plant-soil feedbacks (PSFs), Ehrenfeld et al. 2005). Accordingly, changes in the composition of plant communities during the transition from more heterogeneous to more homogeneous biota have implications for pedological dynamics. Community composition, as noted, can be affected along a variety of dimensions; invasive plants can outcompete and replace endemics, and behavior in faunas-most obviously herbivory, but also including mobility and associated disturbance, excretion, composition of gastrointestinal microbiotas, etc.-homogenizing along similar gradients can drive parallel types of change in soil nutrient organization (e.g., Sánchez-Piñero and Polis 2000). We can deal first with plant-plant interactions. Plant-soil feedbacks are interrupted or modified during biological invasions (Wolfe and Klironomos 2005;Stinson et al. 2006), although the manner in which these disruptions occur is not fully understood (Suding et al. 2013;Schittko et al. 2016). What is clear is the capacity of invasive species to interrupt plant-soil feedbacks to their own comparative advantage and to the disadvantage of native species (Callaway et al. 2004;Perkins et al. 2016), constructing preferentially less biogeochemically diverse soils. In addition to community turnover in strictly floral terms, herbivory on the part of invasive and comparatively homogeneous faunas (especially but not only domesticated ungulates) may adversely affect native plants more severely than invasive species in a number of ways, either biased against them because of their relative ecological naïveté or, where there is no clear herbivorous preference for endemics versus exotics, nonetheless having greater effect on endemic species via smaller population sizes and resulting greater exposure to demographic stochasticity. The sum effect of biotic homogenization, via these varied impact pathways, is radical reorganization away from diversity in soil biogeochemistry.
The effects of homogenization in soil biogeochemistry are substantial; not only in terms of the constraints that homogeneous soils subsequently impose on ecosystem development as regards biomass potential and nutrient availability (in the context of different plant taxa possessing different requirements (Marschner and Marschner 2012)), but also in terms of pedological structural properties. Soil chemistry has implications for the robustness of pedological units insofar as it determines the spatial distribution of community members and thereby root structure, which directly promotes soil integrity (Bergmann et al. 2016). In the context of PSF dynamics in which increasing homogeneity in soil biogeochemistry promotes biodiversity loss in communities (Perkins et al. 2016), experimental work suggesting that decreased floral biodiversity is associated with a reduced ability of soils to resist erosion is consequently significant (Berendse et al. 2015); more so if exacerbated soil loss and decreased biodiversity exist in a feedback relationship (cf. Garcia-Fayos and Bochet 2009;Bergmann et al. 2016). Bautista et al. (2007) find that higher functional diversity corresponds with lower runoff because of the relationship between higher diversity and patch density. These studies indicate that biogeochemical homogenization of soils drives community dynamics that in combination negatively affect soil stability and retention.
It is not only shifts in biogeochemical organization associated with root depletion that promote macro-scale change in soils, and here we can briefly consider ungulate herbivory and its pedological effects in terms of direct mechanical impacts. Comparative data from the relatively recent introduction of the domestic goat Capra hircus to the Pacific and South Atlantic suggest that substantial biophysical change might be anticipated during comparable introductions in the Early-Mid Holocene insular Mediterranean. Goat herbivory is implicated in erosion of topsoils (Mwendera et al. 1997;Yong-Zhong et al. 2005;Yocom (1967) reports 1.9 m of soil lost from the Haleakalā Crater on Maui since introduction). This derives from their eradication of native (as well as invasive) flora and consequent sub-surface biomass loss (Cronk 1989;Chynoweth et al. 2013), capacity to create ecological conditions that favor invading taxa (Wilcove et al. 1998), and the associated breakdown of nutrient-cycling (Hata et al. 2014), processes that are intrinsically interlinked. While the capacity of Capra to drive parallel types of biophysical reorganization in invasions is substantial, the domestic pig Sus scrofa domesticus was introduced prehistorically not only to the insular Mediterranean but also the Pacific, driving similarly large-scale and parallel biophysical change. The omnivory of pigs can promote predictable ecological dynamics at broad spatial scales. Hawai'ian studies indicate that the most conspicuous effects of pig behavior are reduced growth and survival in plant prey-species (Cole et al. 2012;Murphy et al. 2014; see also Campbell and Rudge 1984), which has profound implications for soil structure. On O'ahu, areas with pigs present experienced much more substantial runoff than areas from which pigs were excluded, associated with the effects of pig-rooting (Dunkell et al. 2011). Nogueira-Filho et al. (2009) note that effects associated with the presence of pigs also include facilitation of dispersal of exotic flora and soil degradation via trampling and mobility. Overall, in contexts where biotic homogenization includes as a main component the introduction of stocks of domesticated mammals, there is an evident cross-cultural regularity in resulting pedological processes. Other taxa beyond introduced ungulates drive homogenizing processes in the biogeochemistry and structural dynamics of island soils. Studies of the emergent environmental properties of commensal invasions focus to some extent on human-mediated introductions to very biogeographically isolated islands after 500 BP, but we can in part retroject these ecodynamic trajectories onto Early-Mid Holocene colonization events. Impacts of commensals, especially murids, have attracted most attention (Angel et al. 2009;Bolton et al. 2014;Simberloff 2009), not least in their capacity to adversely affect nutrient input (especially of phosphorus and potassium) via predation on seabirds, which otherwise act as biogeochemical vectors between marine and terrestrial ecosystems (Mulder et al. 2011), although other modes of predation also drive down taxonomic biodiversity. The general trend is again one of impoverishment, with human commensal species contributing to processes of biogeochemical soil depletion-and consequently sum homogenization-alongside deliberately introduced fauna.
What are the main implications of insular biotic homogenization for pedological structure? Biotic homogenization is equivalent to loss of variability, and this is mirrored in biophysical systems. A direct consequence of the agropastoral colonization of islands should be increasingly parallel dynamic soil processes: loss of heterogeneity in the type of PSFs encouraging expansion of exotics at the expense of native taxa, with resulting impoverished and uniform patterns of nutrient distribution reducing below-surface biomass and degrading pedological integrity. This in turn promotes predictable changes in soil dynamics at large spatial scales, as the hydrological system interacts with soils. Root biomass is a primary inhibitor of rill erosion (Gyssels et al. 2005); decreasing root biomass diminishes the efficacy of vegetation as a brake on the evolutionary transition from splash/sheet erosion to rill/gully erosion (Woodward 1999;Di Stefano et al. 2013) (a function also of slope gradient and rainfall dynamics), with the latter two types accounting for >90% of actual sediment transport (Fang et al. 2014).
Erosion, Hydrology, Sedimentation
The movement of large amounts of sediment around the landscape by water has ramifications along a number of axes. It substantially reconfigures the distribution of soil biogeochemical components (whether derived from biotic processes or from weathering of geological substrate), in gross terms moving them downslope and towards, and sometimes beyond, the coastline (Pimentel and Kounang 1998;Pimentel 2006). This redistribution involves both the deposition of the material elsewhere, and sum material loss (Ritchie et al. 2007). For example, studies of available soil organic carbon (SOC) have found that SOC eroding from upland soils is differentially retained in downslope soil formations (Nadeu et al. 2014;cf. Kirkels et al. 2014), while some carbon is lost through fluvial transport to, ultimately, the sea. Similar dynamics apply to nutrients, in particular nitrogen and phosphorus; complex interactions between these various components notwithstanding, disruption of nutrient-cycling and general degradation of soil quality is evident generally, especially at parent soils (Quinton et al. 2010). We are essentially dealing with two types of degradation: absolute nutrient loss from the environment, and spatially specific reorganization of the nutrient landscape. The major outcome of this is overall impoverishment of the pedological environment, but a secondary outcome is preferential distribution to areas of deposition and deleterious effects in eroding areas. As nutrient distribution is an important determinant of autotroph biomass (Marschner and Marschner 2012), sediment transport forecloses on the sustainability of thresholds of growth in some areas and provides a new basis for growth in others. It should be emphasized that there is a temporal lag in processes of redistribution that is important at timescales relevant for biota and ecosystem function; soil destruction and movement (i.e., transition to sediment) can be rapid, but pedogenic processes after sediment deposition are much less rapid (Vereecken et al. 2016). The recursive consequences of soil dynamism for ecosystem organization are substantial.
Sediment movement via water transport changes landscape morphology (notably slope), which recursively alters hydrological dynamics, driving up flow-rates in areas of higher relief and retarding waterflow across plains as sediment is deposited. Hydrological organization is a dynamic system, with this dynamism constrained non-randomly by various factors including slope and sediment input and transport (Perron et al. 2012;Willett et al. 2014). Accordingly, increased sediment input into river systems via increased erosion alters the extent to which this input exercises constraint on hydrological dynamics. This has biological implications, as well as those for the physiographic organization of landscapes; increasing volumes of water-borne sediment have impacts on riverine and lacustrine ecosystems, affecting trophic webs and reproduction (Wood and Armitage 1997). Beyond terrestrial hydrological systems, studies suggest that terrigenous sediments can radically disrupt near-shore (and especially lagoonal) environments (Fabricius 2005). We cannot directly correlate modern with prehistoric sedimentation as the chemical burden of modern suspended sediments is likely to be more diverse and more deleterious than in Early-Mid Holocene near-shore deposition. Nonetheless, evidence that modern sedimentation can reconfigure coral reef ecosystems via a series of impact pathways beyond those related to toxin accumulation is significant; suspended particles drive changes as diverse as inhibiting photosynthesis by obstructing photic penetration of the water-column (Storlazzi et al. 2015) to altering feeding behavior in fish, as well as entering the trophic system via ingestion (Tebbett et al. 2017). Sedimentation associated with interior island erosion following episodes of prehistoric colonization should be expected to have influenced marine ecosystems along similar pathways, with this influence not only limited to tropical latitudes (e.g., Airoldi and Cinelli 1997).
Downslope sediment transport denudes uplands, exposing geology that had been previously covered by regolith. Weathering of such geology is considered to be central to several biophysical processes, not least the carbon cycle (e.g., Maher and Chamberlain 2014), leading to the introduction of new chemicals to the zone in which the lithosphere and atmosphere interact (Anderson 2012). The potential clearly exists for nutrient losses associated with erosion to be replenished by fresh weathering, although (a) with altered structural conditions in the regolith the release and distribution of such nutrients is likely to be very variable, and (b) heterogeneity in the underlying geology may mean that types and proportions of nutrients released do not conform to previous nutrient ratios. Plant-soil feedbacks suggest that any establishing plant community will, other factors being equal, accordingly diverge from the structural composition of the preceding community.
Clearly, having stressed the degree to which biotic and physical systems intermesh in a series of feedback relationships, it would be possible to continue to track the subsequent implications of these (and other) biophysical outcomes of biotic homogenization, but to do so would be to labor the point. It is evident that various types of ecological processes can cause cascade effects that move through biophysical systems. These effects, in reorganizing physiographic structure, then re-frame the physical conditions which constrain biota, and in this sense this relationship is recursive, and likely to obtain in different instances of island colonization.
The General Dynamics of Neolithic Island Colonization
Environmental Convergence
Data clearly suggest that agropastoral colonization of previously insulated habitats during the later Quaternary initiated (via a dual mechanism of extinction and translocation) a series of homogenizing biotic processes. Accordingly, these anthropogenic insular biotas came to more closely resemble one another in terms of relative lack of taxonomic and functional diversity. As biota interface with abiotic systems, so increasingly homogeneous biotas in increasingly homogeneous ecosystems interfaced with biophysical systems. Modern and experimentally derived data on biophysical dynamics suggest that this process would have driven repeated types of dynamics across a range of scales, from the very small (soil biogeochemistry) to the very large (alluvial formation), admitting that the rate and severity of these dynamics should vary significantly depending on local ontogenetic conditions. This suggests that the central dynamics of postcolonization insular environmental change should be predictable (Whittaker et al. 2007, 2008;Borregaard et al. 2016). Whittaker et al.'s 'general dynamic theory of oceanic island biogeography' (Whittaker et al. 2007;Whittaker et al. 2008) adds a further dimension to the original assumptions of MacArthur and Wilson (1967), emphasizing that the physiographic organization of small volcanic oceanic islands changes over evolutionary time in ways that are generally knowable (Borregaard et al. 2016). In the case of volcanic islands, the taxonomic diversity of each can be contextualized within its position on the temporal spectrum from initial formation through the cessation of volcanic mass-building to erosion and ultimately atoll formation. The evolutionary specifics of the model are not here of immediate interest; more relevant is the notion that initial ontogenetic conditions, interacting with biota, constrain resulting biophysical processes within certain parameters. This concept can be utilized to consider interactions between human colonists and insulated environments in terms of a general dynamic theory. Despite evident diversity in the cultural component of such colonizations, the ecodynamic outcomes are largely similar: Neolithic colonization and its predictable biotic effects impose constraints on biophysical systems such that their organization on islands comes to more closely approximate other such systems. The general outline-parallel types of contextual constraint driving unrelated morphologies towards a common form-suggests that we may best consider this a process of environmental convergence.
I use convergence here in a loose sense to describe a process in which two entities come to resemble one another in their present morphologies yet whose resemblance masks deep initial morphological variation. In evolutionary biology, convergence is increasingly seen as having a genetic basis; even at macro scales, it is understood to be an outcome of explicitly Darwinian processes (Losos 2011;McGhee 2011;Stern 2013). Demonstrably, inorganic systems are not under Darwinian selective pressure. Natural selection itself, however, is one of many structured sorting processes, and this provides a useful insight into considering the nature of insular biophysical convergence. With very different pre-colonization biophysical conditions, the suite of effects associated with colonization by agropastoral humans exercised comparable types of ecodynamic pressure on islands. Because of the structural parallels in how island biophysical systems are organized, these comparable types of pressure drove comparable emergent outcomes: specifically, biotic homogenization followed by abiotic dynamics associated with homogenization and biodiversity loss. These cascades were recursive, with the biophysical outcomes of biodiversity loss driving greater biotic homogenization. Accordingly, the structural organization of island environments separated in time and space converged to more closely approximate one another.
The goal of this paper has been to emphasize that prehistoric human colonization of islands ineluctably forces biophysical changes along limited and consequently quasi-predictable axes. Recognizing the human capacity to alter biophysical systems such that environmental convergence is the outcome has broader implications, not least for the scholarship regarding the nature and time-depth of the Anthropocene (e.g., Braje and Erlandson 2013). However, I now briefly consider how anthropogenic environmental convergence may have iteratively affected the social and political organization of island communities in the aftermath of colonization.
Sociopolitical Implications: Mitigation, Capital Investment, and Wealth Inequality
From the perspective of human landscape management, the general dynamic landscape processes sketched above are mostly deleterious, in particular soil and nutrient redistribution, but also degradation of marine environments and unpredictability in hydrological organization. Clearly, deleterious processes of this sort (whether anthropogenic or not) have occurred across the planet, but the foregoing discussion suggests we might expect them to be more rapidly or acutely experienced on islands. It is a reasonable assumption that Neolithic societies cross-culturally and characteristically act to mitigate processes viewed as deleterious, in general acting to boost resilience and minimize risk in the short-medium term (e.g., Quintus et al. 2016). I now address how these general environmental dynamics may have prompted strategic responses from Neolithic communities.
Strategies to mitigate such processes would cluster around attempts to limit pedological loss (and boost productivity) via capital investment; programs might take the form of terracing, hydrological management, and nutrient replacement. Diversification within horticultural/agropastoral regimes should also serve to offset calorific losses associated with upslope soil depletion. Recognizing the relatively high productivity of coastal environments in mid- and low-latitude contexts (especially within the latitudinally-determined coral belt), diversification into and increasingly intensive exploitation of offshore resources might also be expected. There is some supporting evidence for this, notably for terracing (e.g., Bevan and Conolly 2011;Quintus et al. 2016) and soil management (e.g., Ladefoged et al. 2005) in the Mediterranean and Pacific, as well as fishpond aquaculture in Hawai'i.
Clearly, investment in landscape capital is not an island-specific phenomenon. Nonetheless, in the face of substantial change in biophysical systems deriving from colonization impacts, we might expect such investment in island contexts to be: (a) comparatively more spatially expansive than in non-island contexts, because of biophysical disruption affecting more of the total productive landscape; (b) more costly per capita, because of a relatively low population/high investment cost ratio; and (c) more central to maintaining surplus flows, because capital-intensive systems such as terracing or aquaculture should constitute a greater proportion of the total subsistence system.
Essentially, to mitigate risk, non-hierarchical societies with limited inter-generational transmission of wealth would have initially invested heavily in spatially heterogeneous programs of landscape modification. This would have significantly transformed how quickly and in which parts of the community surplus accumulated, with spatially variable landscape investment driving spatially (and thereby socially) variable patterns of productivity and thereby allowing capital to aggregate rapidly and unevenly. Differential investment of labor and resources in mitigatory infrastructure may be especially significant given that insular social organization often, counterintuitively, exhibits evidence for hierarchical and complex social forms. This runs against the grain of prevailing social evolutionary theory and its emphasis on demographic blocs and large surpluses.
Clearly, linking landscape capital and emergent systems of inequitable wealth distribution is not an original argument; however, this should be contextualized within our understanding of insular carrying capacity and recent discussions of the precise socioeconomic mechanisms that can drive exaggeratedly skewed wealth-distributions. Piketty (2014), exploring conditions that may promote or depress wealth inequality, demonstrates that very skewed wealth-distributions may emerge in contexts of comparatively high returns on capital and comparatively depressed overall growth. This has a central, hitherto under-appreciated relevance as a mechanism for explaining emergent inequality in low productivity environments. Island contexts-relatively marginal compared to, for example, extensive continental patches of fluvial or loessial Quaternary sediments-should permit relatively low overall socioeconomic growth under conditions of Neolithic subsistence strategies. Investment of landscape capital by those segments of society with the capacity to do so should drive exaggeratedly high returns relative to comparable non-insular (i.e., a more productive and resilient) contexts. This approximates a low growth-high capital returns scenario, driving greater disparities in wealth than in more productive or less biophysically responsive environments. This is beyond the scope of this paper, but addressing the extent to which finite and predictable responses to general ecodynamic processes drive similarly comparable emergent effects in terms of social organization and wealth disparity is an important focus for future research.
Conclusions
Islands, despite their evident diversity in terms of physiography and biology, share certain ecological tendencies in terms of responsiveness to invasion. Neolithic packages, also evidently enormously diverse, nonetheless are organized such that their introduction tends to radically destabilize island biota. A range of data proxies indicate that this destabilization manifested along several predictable pathways during the Neolithic colonization of Mediterranean, Pacific, and Caribbean islands. The predictable and recognizable effects of biological invasions should, I suggest, be evident also in those physical systems that interface with biota, which experience predictable types of process as they converge via a general ecodynamic trajectory. This process of environmental convergence has significant implications for insular human ecology, not least in terms of providing routes into testing spatio-temporal models of maritime dispersal and highlighting the deep history of human impacts on the geosphere, rather than simply on the biosphere.
Beyond the implications more usually considered within the framework of human or historical ecology, this model has ramifications for social organization itself that I have addressed here only briefly. Environmental convergence would have demanded mitigation strategies on the part of island colonists, in particular comparatively expansive (for small-scale Neolithic societies) programs of capital investment. Island environments nonetheless remain comparatively marginal, and it is in the dynamic tension between capital investment, growth, and returns (different ratios of which can drive or suppress emergent inequalities), Neolithic egalitarian ideologies, and marginal and less resilient environments (islands are exemplary, but perhaps not unique in their biophysical responsiveness) that future work may locate the kernel of emergent insular social complexity.
Finally, I stress that islands are not essentially representative of qualitatively distinct or otherwise unique types of environmental organization. Instead, they are illustrative of broader processes by virtue of the extent to which they are exemplary of the tendency of the surface of the geosphere to be both heterogeneous and scalar in this heterogeneity. In this sense, then, the issue becomes one of relative scale (cf. Brose et al. 2004), and we might suppose that the types of socioecological constraint sketched above are noteworthy in insular perspective only to the degree that they are felt more acutely and rapidly in more restricted 'island' contexts than others. Recognizing that these dynamics anticipate larger-scale and longer-term continental processes would contribute substantially to bridging the divide between palaeoenvironmental and ecodynamic research on the one hand, and models of social evolutionary development on the other.
Modeling the relationship between vertical temperature profiles and acute surface-level ozone events in the US Southwest by spatially smoothing a functional quantile regression estimator
Abstract With the proliferation of gridded data products, modeling the relationship between a scalar response and a functional covariate over the points on a regular lattice is becoming increasingly important. In this work, our overall aim is to better understand the relationship between high quantiles of surface-level ozone and the vertical temperature profile (VTP), a functional covariate, over the US Southwest in the summer. We develop our penalized functional quantile regression based approach within the framework provided by functional data analysis. As we assume that coefficient functions at points on the lattice exhibit spatial similarity, we obtain improved estimates by penalizing dissimilarity between nearby coefficient function estimates. In order to better account for the high degree of diversity of this region, we more strongly weight differences between coefficient function estimates at cells that exhibit a higher degree of geographical and climatological similarity. Our analysis suggests that the VTP is associated with acute surface-level ozone events during the summertime over this region, and the nature of this relationship differs spatially.
Introduction
Surface-level ozone (O3) has negative health consequences that may be further magnified when it reaches its highest levels. The elderly and those with complicating health conditions, such as asthma, may be even more vulnerable to the impacts of surface-level O3. O3 is formed via a chemical reaction between NOx and volatile organic compounds (VOCs) in the presence of sunlight. Therefore, emissions play a role in characterizing O3 levels on the surface. However, it is not uncommon for two days with identical NOx and VOC emissions to have drastically different surface-level O3 amounts. It is well established that local and regional meteorological conditions account for a large portion of this discrepancy (Jacob and Winner, 2009;Tai et al., 2010;Liu and Cui, 2014;Porter et al., 2015). Broadly speaking, these works conclude that meteorological variables such as surface-level air temperature, circulation, and stagnation are related to surface-level air pollution; however, there are often several approaches for characterizing these meteorological conditions.
One way to capture these types of meteorological conditions is by considering atmospheric profile variables (APVs). An APV is considered to be a function mapping the altitude above a location on Earth's surface to a scalar measurement. The vertical temperature profile (VTP), which reports the temperature of the air at increasing altitude, is an important APV in the climate sciences. Modeling surface-level air pollution as a function of APVs has been considered previously (Du et al., 2013;Rendón et al., 2014;Wolf et al., 2014;Russell and Porter, 2021), but has received much less attention in the literature in comparison to approaches that consider surface-level (scalar) covariates exclusively. In this work, we investigate the association between the VTP and high levels of surface-level O3 through the use of methods developed within the framework of functional data analysis (FDA). In FDA, some or all observations are taken to be functions, as opposed to scalars and/or vectors. To this end, in this work, we consider the VTP to be a functional covariate.
Modeling a scalar response variable as a function of a functional covariate is often done via the use of the functional linear model (Horváth and Kokoszka, 2012). As in a standard linear regression model, a functional linear model (with a scalar response) estimates the conditional mean response given the functional covariate. If one is interested in higher quantiles of the response distribution, a functional quantile regression model may be of use. Along these lines, Russell and Dyer (2017) suggest a penalized functional quantile regression model to investigate the impact of APVs on surface-level air pollution at locations in South Carolina and Florida. These authors conclude that surface-level air pollution is related to APVs at different pressure levels, and that these associations differ by location and season. As acute levels of surface-level O3 may have a disproportionately higher impact on human health outcomes, we focus on modeling higher conditional quantiles of the O3 response in this work.
The last 10 or 15 years have seen the proliferation of gridded data products, which offer climate and air pollution data over large spatial domains with excellent temporal coverage and no missing values. By its nature, the grid utilized in a gridded data product could be thought of as a regular spatial lattice. Others have considered approaches for analysis of data in these contexts (Zhu et al., 2010;Zheng and Zhu, 2012;Reyes et al., 2015), although our research objectives and methods differ from these works. We emphasize that our primary goal is to better understand the relationship between the VTP and high surface-level O3 in the summer in the US Southwest based on output from a gridded data product. That is, our objective is to model the relationship between a functional covariate and conditional quantiles of a scalar response at points on a regular lattice.
A simple analysis approach would be to fit independent functional linear regression models at each point on the lattice. Unfortunately, such an approach would likely be difficult to interpret due to the presence of excess noise. For our data, we believe that it is reasonable to assume that the functional regression relationships at adjacent points on the lattice are similar. For this reason, we consider an approach that institutes a penalty for the dissimilarity between estimated coefficient functions at adjacent locations on the lattice. This approach helps us with our objective of better understanding the relationship between VTP and key quantiles of surface-level O3 at all points on the lattice.
Pointwise Functional Quantile Regression Modeling
For the random process $\{(Y, X)\}$ with iid copies $\{(Y_t, X_t)\}_{t \in \mathbb{N}}$, assume the response variable of interest is $Y_t \in \mathbb{R}$, and $X_t : [0, H] \to \mathbb{R}$ is a functional covariate. Further, assume that $\{X_t\}_{t \in \mathbb{N}}$ are in $L^2$, the separable Hilbert space determined by the set of measurable real-valued square-integrable functions on $[0, H]$. Given the value of the functional covariate, we directly model $Q_\tau(Y_t \mid X_t)$, the conditional $\tau$th quantile of the scalar response variable, for some $\tau \in (0, 1)$, via

$$Q_\tau(Y_t \mid X_t) = \alpha_\tau + \int_0^H X_t^c(s)\, \beta_\tau(s)\, ds. \qquad (1)$$

We note that the intercept $\alpha_\tau \in \mathbb{R}$, and we call $X_t^c(s) = X_t(s) - E[X_t(s)]$ the centered covariate function. Additionally, the twice-differentiable function $\beta_\tau$ is called the coefficient function for the $\tau$th quantile. In a typical analysis, estimation and inference regarding $\beta_\tau$ in equation (1) is a primary aim, as portions of $[0, H]$ over which the coefficient function is positive (negative) imply a positive (negative) relationship with that conditional quantile of the response. Additionally, we assume that $\beta_\tau'' \in L^2$, and that $\beta_\tau$ and $\beta_\tau'$ are both absolutely continuous.
As is commonly done in FDA, we assume that $\beta_\tau$ can be approximated by a linear combination of a finite set of known basis functions, $\{\phi_k\}_{k=1,\ldots,K}$. This implies $\beta_\tau(s) \approx \sum_{k=1}^{K} b_{k,\tau} \phi_k(s) = \boldsymbol{\phi}^T(s) \mathbf{b}_\tau$. In reality, functional covariates $\{x_i\}_{i=1,\ldots,n}$ are infinite dimensional and therefore cannot be fully observed. Instead, the analyst observes each covariate at a finite set of points and represents it as $x_i(s) \approx \sum_{m=1}^{M} a_{i,m} \psi_m(s) = \boldsymbol{\psi}^T(s) \mathbf{a}_i$, where $\{\psi_m\}_{m=1,\ldots,M}$ is another set of known basis functions. Therefore, the conditional quantile in equation (1) can be expressed via

$$Q_\tau(Y_i \mid x_i) \approx \alpha_\tau + \mathbf{a}_i^T \mathbf{J} \mathbf{b}_\tau. \qquad (2)$$

Here, the $M \times K$ dimensional matrix $\mathbf{J}$ is constructed such that the entry in the $m$th row and $k$th column is $\int_0^H \psi_m(s)\, \phi_k(s)\, ds$. Estimates of $\alpha_\tau$ and $\mathbf{b}_\tau$ are obtained by minimizing

$$\sum_{i=1}^{n} \rho_\tau\!\left(y_i - \alpha_\tau - \mathbf{a}_i^T \mathbf{J} \mathbf{b}_\tau\right), \qquad (3)$$

where $\rho_\tau$ is the check-loss function commonly used in quantile regression (Koenker and Bassett Jr., 1978). The resulting optimum from equation (3) leads to the coefficient function estimator via the relationship $\hat{\beta}_\tau(s) = \boldsymbol{\phi}^T(s) \hat{\mathbf{b}}_\tau$.
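For illustration, the following is a minimal R sketch of this recipe at a single grid cell (not the authors' code: the inputs are simulated placeholders, the variable names are hypothetical, and the integral defining J is approximated crudely). It expands both the coefficient function and the covariate curves in order-4 B-spline bases and minimizes the check loss with quantreg::rq().

```r
# Minimal sketch of pointwise functional quantile regression at one grid cell.
# All inputs below are simulated placeholders, not the paper's data.
library(splines)
library(quantreg)

set.seed(1)
tau <- 0.90
h   <- seq(0, 1, length.out = 20)                  # rescaled pressure levels h_1, ..., h_U
n   <- 200
x_obs <- matrix(rnorm(n * length(h)), nrow = n)    # n observed VTP curves (one per row)
y     <- rnorm(n)                                  # daily surface-level O3 response

Phi <- bs(h, df = 7, degree = 3, intercept = TRUE) # order-4 B-spline basis for beta_tau
Psi <- bs(h, df = 7, degree = 3, intercept = TRUE) # basis for the covariate curves

# a_i: least-squares coefficients of each observed curve in the Psi basis
A <- t(apply(x_obs, 1, function(xi) coef(lm(xi ~ Psi - 1))))
A <- scale(A, center = TRUE, scale = FALSE)        # center the covariate curves

# J[m, k] approximates the integral of Psi_m(s) * Phi_k(s) ds (rectangle rule)
J <- crossprod(Psi, Phi) * diff(h)[1]

Z   <- A %*% J                                     # row i is a_i' J
fit <- rq(y ~ Z, tau = tau)                        # check-loss minimization
beta_hat <- drop(Phi %*% coef(fit)[-1])            # estimated coefficient function on h
```

In practice the inner products in J could be computed exactly for B-splines (for example with the fda package's inprod()), and the scores a_i would come from the gridded VTP output rather than simulation.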
Functional Quantile Regression Modeling on a Grid
Climate scientists often make use of gridded data products, which can be thought of as producing output over the points on a regular spatial lattice. Assume that $D$ is a spatial domain of interest, and that we observe data over the set of points on a regular lattice $D_0 \subset \mathbb{Z}^2$, such that $D_0 \subset D$. Despite the fact that the primary objective is to describe the association between the functional covariate and a key quantile of the scalar response everywhere in $D$, we assume that modeling this relationship $\forall\, s \in D_0$ will be useful. Additionally, we assume that the true coefficient functions at nearby locations are similar, which makes the approach of borrowing information from nearby cells a potentially promising way to improve coefficient function estimates. Assume that at time $t \in \{1, \ldots, n\}$, we observe realizations of a functional covariate and scalar response at locations $\{s_l\}_{l=1,\ldots,L}$ such that $s_l \in D_0\ \forall\, l \in \{1, \ldots, L\}$, at a sequence of pressure levels $0 < h_1 < \cdots < h_U \leq H$. We propose a procedure to simultaneously estimate all $L$ coefficient functions $\{\beta_{\tau,l}\}_{l=1,\ldots,L}$, under the assumptions described above. We again assume that the true coefficient function at $s_l$ is well approximated by a finite linear combination of known basis functions, giving $\beta_{\tau,l}(h) \approx \sum_{k=1}^{K} b_{l,k,\tau} \phi_k(h) = \boldsymbol{\phi}^T(h) \mathbf{b}_{l,\tau}$. Observations from the functional covariate are denoted by $\{x_{l,t}(h)\}_{l \in \{1,\ldots,L\},\, t \in \{1,\ldots,n\}}$, and assume further that there exists $\mathbf{a}_{l,t} \in \mathbb{R}^M$ such that $x_{l,t}(h) \approx \sum_{m=1}^{M} a_{l,t,m} \psi_m(h) = \boldsymbol{\psi}^T(h) \mathbf{a}_{l,t}$. We then approximate the conditional $\tau$th quantile of the response at time $t$ and location $s_l$ by

$$Q_\tau(Y_{l,t} \mid x_{l,t}) \approx \alpha_{\tau,l} + \mathbf{a}_{l,t}^T \mathbf{J} \mathbf{b}_{l,\tau}. \qquad (4)$$

One could consider the estimator

$$\left(\tilde{\boldsymbol{\alpha}}_\tau, \tilde{\mathbf{b}}_\tau\right) = \underset{\boldsymbol{\alpha}_\tau,\, \mathbf{b}_\tau}{\arg\min}\ \Lambda_\tau, \qquad (5)$$

where $\boldsymbol{\alpha}_\tau$ and $\mathbf{b}_\tau$ collect the intercepts and basis coefficients over all $L$ locations, and

$$\Lambda_\tau = \sum_{l=1}^{L} \sum_{t=1}^{n} \rho_\tau\!\left(y_{l,t} - \alpha_{\tau,l} - \mathbf{a}_{l,t}^T \mathbf{J} \mathbf{b}_{l,\tau}\right). \qquad (6)$$

The estimator in equation (5) does not incorporate any sort of penalty for dissimilarity between neighboring estimated coefficient functions. As we believe that nearby coefficient functions exhibit similarity in our data application, we supplement the loss function in equation (5) by an additional penalty term.
Penalizing spatial dissimilarity
Our spatial region of interest is composed of lush forests, arid deserts, coastal regions, and high mountain terrain. Because of this geographical heterogeneity, some pairs of neighboring cells on the lattice may exhibit a higher (or lower) degree of spatial similarity in terms of their true underlying coefficient functions. For this reason, for neighboring locations $s_l$ and $s_{l'}$ ($l \neq l'$), we propose the weighting function $w_{\theta,\gamma}(s_l, s_{l'}) = \theta \exp\{-\gamma\, \delta(s_l, s_{l'})\}$, where $\theta > 0$ and $\gamma \geq 0$ are unknown parameters and the function $\delta$ models geographic and/or climatological dissimilarity between two locations. We propose the penalized estimator (equation (8)) that minimizes the sum of $\Lambda_\tau$, defined in equation (6), and a spatial dissimilarity penalty $\Lambda_{\mathrm{spat}}$ defined in equation (9). We note that equation (9) gives the sum of the spatial dissimilarity penalties over all pairs of adjacent cells in $D_0$, with each pair's contribution weighted by $w_{\theta,\gamma}(s_l, s_{l'})$ and expressed through absolute differences between the corresponding coefficient functions. Here, $I\{\cdot\}$ denotes the indicator function, and $s_l \sim s_{l'}$ implies that $s_l$ and $s_{l'}$ are adjacent. The definition of adjacency is flexible and can be selected in a way that makes sense for a specific data application. The absolute loss penalty in equation (9) is used because the absolute value function can be expressed in terms of the check-loss function, as seen in equation (10). This makes it straightforward to implement the penalty by augmenting the quantile regression design matrix.
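To fix ideas, below is a small R sketch of the weighting function and of a rook-style (N/S/E/W) adjacency listing, the adjacency definition the paper later adopts; this is a hypothetical illustration, not the authors' implementation, and the dissimilarity value passed to the weight is left to the user, since the exact form of delta is built from the geographic/climatological information described in the next section.

```r
# Sketch: penalty weight for one adjacent pair, and rook (N/S/E/W) adjacency
# pairs on an nr x nc regular lattice with cells indexed l = (row - 1) * nc + col.
weight <- function(theta, gamma, delta_ll) theta * exp(-gamma * delta_ll)

rook_pairs <- function(nr, nc) {
  idx   <- function(r, c) (r - 1) * nc + c
  pairs <- list()
  for (r in seq_len(nr)) {
    for (c in seq_len(nc)) {
      if (c < nc) pairs[[length(pairs) + 1]] <- c(idx(r, c), idx(r, c + 1))  # east neighbor
      if (r < nr) pairs[[length(pairs) + 1]] <- c(idx(r, c), idx(r + 1, c))  # south neighbor
    }
  }
  do.call(rbind, pairs)   # one row per adjacent pair (l, l')
}
```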
Analysis of Surface-Level Ozone in the US Southwest
The primary research objective in this work is to gain a better level of understanding regarding the effects of the VTP on acute daily surface-level O3 events over the US Southwest during the summer. In order to more effectively penalize the dissimilarity between adjacent grid cells, we seek to identify a data product that is able to assess the degree to which two adjacent grid cells are similar from a geographical and climatological perspective. For this purpose, we use the PRISM data product (Daly et al., 1997) as it explicitly incorporates information about rainfall and implicitly incorporates information about geographical features such as elevation. Denote the 30-year average annual precipitation total for PRISM cell $i$ by $\mathrm{Pr}_{30}(i)$, and define

$$\eta(s_l) = \frac{\sum_{i=1}^{N_{\mathrm{Prism}}} \mathrm{Pr}_{30}(i)\, I\{i \in l\}}{\sum_{i=1}^{N_{\mathrm{Prism}}} I\{i \in l\}},$$

where $i \in l$ is the event that the midpoint of PRISM cell $i$ is contained in cell $l$. Essentially, $\eta(s_l)$ is the average of all PRISM 30-year annual precipitation totals for all PRISM locations in the grid cell $l$. The PRISM data product takes both geographical and climatological information into account, and therefore we find it useful to leverage it for this purpose. We implement our analysis procedure using B-spline basis functions to represent both the coefficient functions and the functional covariates. We utilize 7 B-spline basis functions of order 4 with equally spaced interior knots to represent the functional covariates and estimated coefficient functions. The parameters $\theta$ and $\gamma$ are estimated via a two-dimensional grid search using the Bayesian information criterion (BIC) as the model comparison criterion. In keeping with our objective of modeling acute surface-level O3, we take $\tau = 0.90$. Figure 2 plots the resulting coefficient function estimates at all locations in the spatial domain for a sequence of eight equally spaced pressure levels: 825, 850, …, 975, 1,000. Recall that a pressure level of approximately 1,000 corresponds with the Earth's surface. At higher pressure levels (above 925 or 950), warmer temperatures are associated with larger high O3 events. Around the pressure level 950, we begin to see this relationship disappear over large portions of this region. Instead, as we go lower in the atmosphere, larger high O3 events are associated with cooler air over Nevada, Southern Utah, and Arizona, as well as coastal California. Interestingly, we see the opposite over the high mountains of California and in the far northern part of the state. This implies that air temperature inversions are associated with acute surface-level O3 events over the majority of the US Southwest; in contrast, warmer air at the surface is linked to these types of air pollution events over the Sierra Nevada Mountains and in Northern California.
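As a concrete but hypothetical example of this climatological summary, the cell-level average of PRISM 30-year precipitation normals could be computed as below; the column names are placeholders, and the exact dissimilarity delta built from eta is not reproduced here.

```r
# Sketch (placeholder column names): eta(s_l) as the mean of the PRISM 30-year
# precipitation normals whose cell midpoints fall inside model grid cell l.
# prism: data.frame with columns pr30 (precip normal) and cell (the enclosing grid
# cell, determined beforehand by a point-in-cell lookup).
eta <- function(prism) tapply(prism$pr30, prism$cell, mean)
```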
Discussion
In this work, at each location on a spatial lattice, we estimate the relationship between the VTP and high daily surface-level O3 events during the summer in the US Southwest states of California, Nevada, Arizona, and Utah. We take a functional quantile regression approach, and as we believe that nearby coefficient functions are similar, we estimate coefficient functions at all points on the lattice simultaneously and penalize the spatial dissimilarity. In this work, we model acute surface-level O3 events by estimating the conditional 0.90 quantile of the response distribution. Because of the geographic heterogeneity of this diverse region, we weight differences between coefficient function estimates at neighboring cells more if they are similar geographically and/or climatologically. Our analysis concludes that temperature inversions are a primary driver of acute O3 events over Arizona, Utah, Nevada, and coastal California. In contrast, higher temperatures are associated with acute O3 events over far Northern California and the Sierra Nevada Mountains. We determine two cells on the grid to be adjacent if they are connected to the N, S, E, or W. In the future, we hope to expand this definition to further outlying cells. Doing so would complicate estimation, but may improve estimates. Also, the model developed in this work does not incorporate temporal dependence. Although we believe that the results presented here would not change greatly, we hope to refine the modeling approach to account for temporal dependence in the future. We would also like to further refine our methodology to simultaneously consider other quantiles of interest, and to investigate the impact of alternative weighting functions. In the future, we believe that it would be interesting to consider combinations of other regions and seasons, to compare and contrast the results presented here. Similarly, it would be interesting to consider models other than GEOS-Chem model version 12.2.1. Other extensions include developing procedures that are able to incorporate both gridded model output and in situ observations (O3 and VTP). We hope to consider these ideas in future work.
We note that equation (9), with its absolute value penalty terms written in terms of finite differences, can also be expressed via the check-loss function, relying on the fact that $\forall\, \tau \in (0, 1)$ and $u \in \mathbb{R}$,

$$|u| = \rho_\tau(u) + \rho_\tau(-u). \qquad (10)$$

Therefore, in practice, the optimization in equation (8) can be performed in a computationally efficient manner by constructing the design matrix determined by the loss function in equation (6), and augmenting it with the design matrix determined by the penalty $\Lambda_{\mathrm{spat}}$ in equation (9), relying on the relationship in equation (10). For this reason, functions designed for quantile regression with scalar covariates, such as R's quantreg package, may be utilized for optimization. With a large number of adjacent cells, the resulting design matrix will be very large, but also very sparse. This sparsity can be leveraged to make estimation reasonable.
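The augmentation trick can be sketched generically as follows (a simplified illustration, not the authors' implementation): a term w|c'theta| is appended to a quantile regression objective by adding two pseudo-observations with response 0 and covariate rows +wc and -wc, which contribute rho_tau(-w c'theta) + rho_tau(w c'theta) = w|c'theta|.

```r
# Sketch: add absolute-value penalty terms w_j * |c_j' theta| to a quantile
# regression fit by augmenting the design matrix with pseudo-observations,
# using the identity |u| = rho_tau(u) + rho_tau(-u).
library(quantreg)

penalized_rq <- function(y, X, tau, C, w) {
  # X: n x p design matrix (include an explicit intercept column if desired)
  # C: q x p matrix; row j encodes one penalty term w[j] * |C[j, ] %*% theta|
  # w: length-q vector of non-negative penalty weights
  Cw    <- C * w                        # multiplies row j of C by w[j]
  X_aug <- rbind(X, Cw, -Cw)            # two pseudo-rows per penalty term
  y_aug <- c(y, rep(0, 2 * nrow(C)))    # pseudo-responses are zero
  rq.fit(X_aug, y_aug, tau = tau)       # any standard check-loss solver works
}
```

For the very large, sparse augmented designs that arise with many adjacent cells, a sparse solver such as quantreg's rq.fit.sfn() with a SparseM matrix.csr design could be substituted.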
Associations of academic environment, lifestyle, sense of coherence and social support with self-reported mental health status among dental students at a university in Brazil: a cross-sectional study
Objectives The study evaluated the association of academic environment, lifestyle, sense of coherence (SOC) and social support with self-reported mental health status among dental students. Design Secondary analysis of data from a cross-sectional, questionnaire-based survey conducted from August to October 2018. Setting Dental school of a publicly funded university in the south-eastern region of Brazil. Participants 233 undergraduate dental students recruited across all years of the course. Outcome measures Socioeconomic and demographic characteristics, city of origin and student's academic semester were obtained through self-completed questionnaires. Perception of the academic environment (Dundee Ready Education Environment Measure (DREEM)), individual lifestyle (Individual Lifestyle Profile Questionnaire (ILPQ)), SOC (SOC Scale (SOC-13)), social support (Medical Outcomes Study Scale (MOS)), and depression, anxiety and stress (Depression, Anxiety and Stress Scale-21 (DASS-21)) were assessed using validated instruments. The relationships between variables were investigated through multivariable negative binomial regression to obtain the rate ratios (RRs) and 95% CIs. Results Female sex was associated with greater scores of anxiety (RR 1.74, 95% CI 1.10 to 1.97) and stress (RR 1.52, 95% CI 1.12 to 2.06). Students who perceived a better academic environment and those reporting a greater SOC had a lower probability of depression, anxiety and stress. Furthermore, a favourable lifestyle was associated with lower depression scores (RR 0.99, 95% CI 0.97 to 0.99). Social support did not remain associated with depression, anxiety and stress after adjustment. Conclusions The present findings suggest that self-reported mental health status is associated with students' sex, academic environment, SOC and lifestyle. Enhancing the educational environment and SOC, and promoting a healthy lifestyle may improve the psychological health of dental students.
Strengths and limitations of this study
⇒ The study comprehensively examined the predictors of mental health among undergraduate dental students, considering the role of demographics, perception of the academic environment, lifestyle and protective psychosocial factors using valid questionnaires.
⇒ This study focuses on the protective factors of dental students' mental health in a developing country setting.
⇒ The study used a representative sample of dental students with a high response rate (90.3%).
⇒ Our findings should not be generalised to dental students attending private dental schools who pay tuition fees or take student loans to cover such costs.
⇒ Other possible predictors of the mental health status among dental students, including bullying, pre-existing mental health problems, and family history of mental illnesses, were not assessed.

Introduction
During academic life, undergraduates may experience emotional, psychological, social and financial challenges that may affect their mental health. Of these, anxiety, stress and depression are common emotional disorders among university students in several countries. 1 2 Undergraduate students may experience higher levels of anxiety, depression and stress compared with the general population. 1 Psychological distress among students has been shown to influence their physical health and academic performance. 3 Alcohol misuse, substance abuse and social isolation have also been related to psychological distress among undergraduate students. 4 5
The impact of the challenges experienced during the course on students' mental health depends on different aspects related to the academic environment as well as individual characteristics. 6 7 Individual risk factors of poor mental health among university students may include female sex, 1 8 9 ethnicity, 1 low family income, 7 city of origin distinct from the city of the university, 10 low consumption of healthy foods 11 and low levels of physical activity. 12
Anxiety and stress symptoms in undergraduate students enrolled in health-related courses were higher than among those enrolled in other courses. 13 In addition, dental courses are considered more demanding and stressful than other health-related courses. 1 Dental students' primary sources of stress include the demanding nature of the course, the large amount of work to be learnt and examinations, the competitive environment, heavy laboratorial and clinical workload, and fear of failing. 14 Studies have also confirmed the importance of the student's perception of the academic environment on psychological distress. 1 14
The academic environment is the general atmosphere of the education process, encompassing the different intellectual, social, emotional and physical aspects that can aid the learning experience or distract from it. 15 The first years of the undergraduate dental curriculum demand a substantial amount of time involving theoretical subjects and preclinical (laboratory) activities. Over the years, the dental curriculum has also included clinical training that requires commitment and responsibility for patient dental care through carrying out complex dental procedures and the completion of clinical prerequisites and exams. Thus, the dental course demands students' intellectual, manual, psychosocial and interpersonal skills to succeed during the course and in their future careers. 14
Recently, the role played by protective psychosocial factors in undergraduate students' mental health has been investigated. Studies have highlighted the importance of a sense of coherence (SOC) and social support on students' psychological well-being. 8 14 The salutogenic theory's main construct is SOC (salute=health; genesis=origin), which represents a global orientation towards perceiving life as organised, manageable and emotionally meaningful. 16 People with a higher SOC are more likely to effectively deal with life's difficulties and therefore maintain mental health. 16 Social support is a reciprocal process of formal and informal relations among people by which they feel cared for, cherished and part of a network of mutual commitments. 17 These relationships are commonly developed between people with similar everyday routines who establish enduring patterns of social ties. 17 SOC and social support seem to protect the mental health of university students. 10 18
To date, few studies have assessed the complex associations of students' academic characteristics, psychosocial traits, and lifestyle with well-being among dental students, which prompted us to conduct a questionnaire-based survey to evaluate inter-relationships between the aforementioned predictors and quality of life and mental health. Our recent findings showed that better quality of life was associated with greater social support, higher SOC, lower anxiety and healthier lifestyle. 19
Since direct and indirect relationships between psychological suffering and students' quality of life were identified in our previous research, 19 it would be relevant to examine the determining factors of dental students' mental health. Moreover, there is a dearth of studies examining the influence of protective factors on dental students' mental health. 1 2 20 Considering the potential protective role played by psychosocial factors in, and the relevance of academic factors to, students' mental health, we undertook a secondary analysis of our previous survey 19 that aimed to evaluate the association of perception of the academic environment, lifestyle, SOC and social support with self-reported mental health status among dental students.
Methods
Study design and participants
As previously reported, 19 a cross-sectional study was carried out involving dental students enrolled in the second semester of 2018 at the Dental School of Fluminense Federal University, Niterói campus. Fluminense Federal University is a publicly funded university in the state of Rio de Janeiro, south-eastern Brazil, where a dental degree is offered over a nine-semester course. Dental students aged 18 years or older attending the 2018 academic year in all semesters were invited. The Consensus-Based Checklist for Reporting of Survey Studies (CROSS) was used to report the study.
Recruitment and data collection procedures
Initially, the course coordinator provided the complete list of dental students enrolled in the course. Data collection was scheduled in advance with teachers of all academic semesters, without disrupting their academic activities. Data were collected in the dental school's classrooms, laboratories and dental clinics from August to October 2018. On the day scheduled for data collection, all students in the classroom were informed of the study objectives and received detailed instructions from one researcher on how to complete the questionnaire. Any queries about the research were clarified at this stage. Participants who met the inclusion requirements were invited to complete a structured questionnaire after receiving the appropriate instructions. At least three additional attempts were made to reach students who were absent on the scheduled day of data collection.
A self-administered questionnaire was used to collect data on socioeconomic and demographic characteristics, city of origin, student's academic semester, perception of the academic environment, lifestyle, psychosocial factors and mental health (online supplemental file 1). The scales used to assess lifestyle, perception of the academic environment, psychosocial factors and mental health were previously cross-culturally adapted for the Brazilian population.
The questionnaire was pretested in advance of the main study with 21 undergraduate nutrition students from the same university campus to evaluate its clarity and to estimate the response time to the items. Minor changes to the wording of a few items were made to ensure that participants could understand the questionnaire, and the average time to fill out the questionnaire was 14 min.
Response rate and study power
In total, the dental school of the Fluminense Federal University had 258 undergraduate students enrolled in 2018. Of these, 246 students aged 18 years or older were identified during the recruitment period and data collection. One student declined to participate and 12 additional students were excluded from the analysis due to incomplete data. Therefore, the studied sample included 233 undergraduates, resulting in a response rate of 90.3%.
The final sample size of 233 participants would provide 96% power to detect a statistically significant effect size of 0.05 (small), considering a 5% type I error probability and 11 independent variables in a multiple regression model. 21
Theoretical model
The WHO Conceptual Framework for Action on Social Determinants of Health was adopted to investigate the determinants of self-reported mental health (figure 1). 22 According to this framework, the determinants of health are hierarchically organised into structural and intermediary factors. Structural determinants reflect the place within social hierarchies, which in turn affects intermediary determinants and health outcomes. The intermediary determinants refer to different direct exposures to physical and mental health problems. Demographic characteristics and socioeconomic factors were the structural determinants. Intermediary determinants included behaviours, academic characteristics and psychosocial factors, whereas depression, anxiety and stress were the mental health outcomes.
Mental health status
The mental health status of the participants was evaluated using the Depression, Anxiety and Stress Scale (DASS-21). 23 The DASS-21 is composed of 21 items that evaluate the self-reported negative emotional states of depression, anxiety and stress. Responses to the items of the depression, anxiety and stress subscales were recorded on a Likert scale ranging from 0 ('Strongly disagree') to 3 ('Totally agree'). The DASS-21 has seven items per subscale, related to symptoms experienced in the previous week. The scale used in the study was validated in Brazil by Vignola and Tucci. 24 The final scores of depression, anxiety and stress were obtained by adding up the scores of the items corresponding to each subscale and multiplying the result by two, and the severity of each mental health state was evaluated according to the cut-off points presented in table 1. 23 In this study, the categories 'Severe' and 'Extremely Severe' were merged for descriptive purposes.
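To make the scoring step concrete, the short sketch below sums each subscale's items, doubles the total and assigns a severity band. The item-to-subscale mapping and the cut-off values used here are the commonly published DASS-21 conventions and are included only as illustrative assumptions, not values copied from table 1 of this study; the other instruments described later in this section (DREEM, ILPQ, SOC-13 and MOS) are scored by analogous item summation.

```python
# A minimal sketch of DASS-21 subscale scoring: each subscale is the sum of its
# seven item ratings (0-3) multiplied by two. The item-to-subscale mapping and
# the severity cut-offs below follow commonly published DASS-21 conventions and
# are assumptions for illustration, not values taken from this study's table 1.

SUBSCALE_ITEMS = {
    "depression": [3, 5, 10, 13, 16, 17, 21],
    "anxiety":    [2, 4, 7, 9, 15, 19, 20],
    "stress":     [1, 6, 8, 11, 12, 14, 18],
}

# (lowest score in band, label); 'Severe' and 'Extremely severe' are merged,
# mirroring the descriptive grouping used in the study.
SEVERITY_BANDS = {
    "depression": [(21, "Severe/Extremely severe"), (14, "Moderate"), (10, "Mild"), (0, "Normal")],
    "anxiety":    [(15, "Severe/Extremely severe"), (10, "Moderate"), (8, "Mild"), (0, "Normal")],
    "stress":     [(26, "Severe/Extremely severe"), (19, "Moderate"), (15, "Mild"), (0, "Normal")],
}


def score_dass21(responses):
    """responses: dict mapping item number (1-21) to a 0-3 rating."""
    scores = {}
    for subscale, items in SUBSCALE_ITEMS.items():
        raw = sum(responses[i] for i in items)
        scores[subscale] = raw * 2  # doubled so DASS-21 scores match DASS-42 ranges
    return scores


def classify(subscale, score):
    for lower_bound, label in SEVERITY_BANDS[subscale]:
        if score >= lower_bound:
            return label
    return "Normal"


# Example: a respondent answering 1 on every item scores 14 on each subscale.
example = {item: 1 for item in range(1, 22)}
for name, value in score_dass21(example).items():
    print(name, value, classify(name, value))
```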
Structural determinants
The structural determinants were demographic characteristics and socioeconomic factors. The former included age, sex and ethnicity. Self-reported skin colour was used to assess ethnicity according to the following options: white, yellow, indigenous, brown and black. Monthly family income, social/racial inclusion quotas and the student's city of origin were the socioeconomic factors. Monthly income was recorded in Brazilian reals (R$) and classified according to the number of Brazilian minimum wages (BMW) per family into <3 BMW, 3-6 BMW, >6-10 BMW and >10 BMW. University admission through social or racial quotas (no/yes) and whether the student's city of origin differed from the city of the campus (no/yes) were also registered.
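As an illustration of the income categorisation just described, the sketch below converts a raw monthly income in reals into the four BMW bands; the 2018 minimum wage value used (R$954 per month) is an assumption for this example only, since the study recorded income directly as a number of minimum wages.

```python
# A small sketch of the income banding described above. BMW_2018 (R$954/month)
# is an assumed value for illustration; the study itself recorded income
# directly as a number of Brazilian minimum wages.
BMW_2018 = 954.0  # assumed monthly Brazilian minimum wage in 2018, in reals


def income_band(monthly_income_reais):
    n_bmw = monthly_income_reais / BMW_2018
    if n_bmw < 3:
        return "<3 BMW"
    if n_bmw <= 6:
        return "3-6 BMW"
    if n_bmw <= 10:
        return ">6-10 BMW"
    return ">10 BMW"


print(income_band(4500))  # about 4.7 minimum wages -> "3-6 BMW"
```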
Intermediary determinants
The intermediary determinants included academic characteristics, lifestyle and psychosocial factors. The academic characteristics were the current academic semester in which the student was enrolled and the perception of the academic environment. The current academic semester was originally registered as ranging from 1 to 9 and then grouped as 1-3, 4-6 and 7-9, representing the initial, intermediate and advanced study periods of the course. Students' perception of the academic environment was assessed using the Dundee Ready Education Environment Measure (DREEM), validated for the Brazilian population. 25 26 The questionnaire is composed of 50 items rated on a 5-point Likert scale and organised into five domains: 'learning' (12 items), 'teachers' (11 items), 'academic' (8 items), 'atmosphere' (12 items) and 'social' (7 items). DREEM scores may range from 0 to 200, and higher scores indicate a better perception of the teaching environment. The student's individual lifestyle was measured using the Individual Lifestyle Profile Questionnaire (ILPQ) developed in Brazil. 27 The ILPQ has 15 items that are registered using a 4-point Likert scale and comprise five components: 'nutrition', 'physical activity', 'preventive behaviour', 'social relationship' and 'stress control'. The higher the ILPQ score, the more favourable the student's lifestyle.
The investigated psychosocial factors were SOC and social support. The student's SOC was collected using the Brazilian version of the SOC Scale (SOC-13) proposed by Antonovsky. 16 28 The SOC-13 is a 5-point Likert scale consisting of 13 items. The responses to the SOC-13 items were summed to obtain the final score, which may vary from 13 to 65; the higher the SOC-13 score, the stronger the SOC. Social support was assessed using the Medical Outcomes Study (MOS) Scale, 29 adapted and validated for Brazilian adults. 30 The MOS scale has 19 items covering five dimensions of social support: 'material', 'affective', 'positive social interaction', 'emotional' and 'informational'. Participants indicate how frequently they experience each type of support using a Likert scale, and higher MOS scores indicate a greater perception of social support.
Data analysis
Demographic characteristics, socioeconomic factors, lifestyle, academic characteristics and psychosocial factors were described according to the severity of depression, anxiety and stress through means (SD) and proportions.
The relationship of structural and intermediary determinants with each domain of the DASS-21, namely the scores of depression, anxiety and stress, was evaluated using the rate ratio (RR), 95% CIs and p values. Initially, the association between each independent variable and the mental health outcomes was assessed through unadjusted negative binomial regression. This regression analysis was used to account for overdispersed outcome variables, since the variance of the scores of depression, anxiety and stress exceeded the respective means. Statistical modelling using multivariable negative binomial regression was carried out to obtain adjusted estimates. The variables that presented p<0.10 31 in the unadjusted analysis were considered in the multivariable statistical models following the theoretical model (figure 1). The significance level established for the adjusted negative binomial regression models was 5% (p≤0.05). All analyses were performed using the statistical software IBM SPSS Statistics V.29 (IBM, Armonk, New York, USA).
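To make the analytic workflow more tangible, the sketch below reproduces its main steps: an overdispersion check, unadjusted screening of single predictors, an adjusted model and exponentiation of the coefficients into rate ratios with 95% CIs. The data file and column names are hypothetical placeholders, and the study itself used IBM SPSS, so this statsmodels-based sketch only approximates the described procedure rather than reproducing the original analysis.

```python
# A rough sketch of the described modelling strategy using Python/statsmodels
# rather than the SPSS procedure actually used. The file 'students.csv' and all
# column names ('anxiety', 'female', 'dreem', 'ilpq', 'soc', 'mos') are
# hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("students.csv")  # one row per student (hypothetical file)

# Overdispersion check: the variance of each DASS-21 score exceeding its mean
# is the stated rationale for negative binomial rather than Poisson regression.
print(df["anxiety"].mean(), df["anxiety"].var())

# Unadjusted screening: one model per candidate predictor (here, sex only);
# predictors with p < 0.10 are carried forward to the adjusted model.
unadjusted = smf.glm("anxiety ~ female", data=df,
                     family=sm.families.NegativeBinomial()).fit()
print(unadjusted.summary())

# Adjusted model with the retained predictors (the dispersion parameter is left
# at the statsmodels default here, which the original analysis may not have done).
adjusted = smf.glm("anxiety ~ female + dreem + ilpq + soc + mos", data=df,
                   family=sm.families.NegativeBinomial()).fit()

# Rate ratios and 95% CIs are obtained by exponentiating the coefficients and
# their confidence limits.
rate_ratios = np.exp(adjusted.params).rename("RR")
conf_limits = np.exp(adjusted.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([rate_ratios, conf_limits], axis=1))
```

In practice, the unadjusted screening step would be repeated over every structural and intermediary variable, and the same pair of models would be fitted separately for the depression and stress scores.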
Patient and public involvement
None.
RESULTS
The mean age of the participants was 22.2 (SD=3.7) years, and most were female (82.8%) and of white skin colour (59.4%). More than 50% of the sample had a monthly family income of less than six minimum wages, with the 3-6 BMW band being the most frequent category (33.6%). Most students were not admitted through social quotas (57.9%) and had moved from their city of origin to attend the dental course (54.5%). The mean scores of the DASS-21 subscales of depression, anxiety and stress were 7.39 (SD=5.72), 6.96 (SD=5.49) and 11.43 (SD=5.60), respectively. The sociodemographic and academic characteristics, lifestyle and psychosocial factors of the participants are presented according to the severity categories of depression, anxiety and stress in online supplemental file 2.
The unadjusted analysis revealed that favourable student lifestyle, better perception of the academic environment, greater SOC and greater social support were statistically associated with lower levels of self-reported depression, anxiety and stress. In addition, female sex was associated with higher anxiety and stress levels (table 2).
Table 3 reports the multivariable negative binomial regression on the association of sex, academic environment, student lifestyle, SOC and social support with self-reported depression, anxiety and stress. Female students were expected to have mean anxiety and stress scores 47% (95% CI 1.10 to 1.97) and 52% (95% CI 1.12 to 2.06) higher, respectively, than male students. A better perception of the academic environment decreased the likelihood of depression (RR 0.99, 95% CI 0.98 to 0.99), anxiety (RR 0.99, 95% CI 0.98 to 0.99) and stress (RR 0.99, 95% CI 0.98 to 0.99). Greater SOC was associated with lower scores of depression (RR 0.94, 95% CI 0.92 to 0.95), anxiety (RR 0.96, 95% CI 0.95 to 0.98) and stress (RR 0.96, 95% CI 0.95 to 0.98). Students with a more favourable lifestyle were less likely to have high depression scores (RR 0.99, 95% CI 0.97 to 0.99).
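As a reading aid for the estimates above (not an additional result of the study), the percentage interpretations follow directly from the rate-ratio scale of the negative binomial model: for an estimated coefficient beta, the rate ratio is RR = exp(beta), and the percentage change in the expected score is (RR - 1) × 100. For example, the reported 47% higher expected anxiety score among female students corresponds to an RR of 1.47, while the RR of 0.96 per one-point increase in SOC corresponds to an approximately 4% lower expected anxiety score per point.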
DISCUSSION
Dental undergraduate students face several challenges during the course of study that can influence their mental health. Of these, academic work overload, laboratory and clinical training, peer competition, and responsibility for patients can possibly affect their mental well-being. In this study, dental students reported high levels of depression, anxiety and stress, indicating a poor mental health status. Also, a better perception of the academic environment and greater SOC were related to lower self-reported depression, anxiety and stress. In addition, a favourable student lifestyle was related to lower depression.
Previous research has shown that almost half of Brazilian dental students from another public university reported common mental disorders, including anxiety, depression and somatic symptoms. 32 High levels of stress, depression and anxiety among dental students have been found in many other countries. 1 9 14 33 Our findings are comparable to other studies that have used the DASS instrument to assess the mental health of dental students. The levels of anxiety symptoms in our study (43.0%) were similar to those found among dental students in Australia (50.2%), 34 while the frequency of stress symptoms (70.3%) was comparable to those reported by dental students in Saudi Arabia (70.8%) 7 and the USA (66.8%). 35 Saudi Arabian dental students exhibited higher levels of depression than those in the present study (56.1%). 13

In this study, the period of the course was not associated with self-reported depression, anxiety or stress. This result is in accordance with a recent systematic review that concluded that scores of self-reported depression did not differ between dental students in different years of study. 9

Similar to our findings, the role of a poor academic environment in dental students' mental health has already been reported. Students' negative perception of the academic environment has also been linked to greater levels of stress, depression and anxiety. 34 Several aspects related to the academic environment, such as academic work (exams, grades and workload), dental training, laboratory and clinical requirements, and low satisfaction with faculty and peer relationships, are considered the main sources of stress and anxiety among dental students. 13 14 Previous findings showed that an unfavourable academic environment may negatively influence students' mental health. 1 Academic stressors, including high workload and the development of technical skills, are relevant components of the dental learning environment that may have contributed to the poor mental health status of the participants in this study. 1 14

Depressive symptoms were associated with the lifestyle of dental students in the present study. The heavy workload during dental undergraduate training possibly limits the time available for physical activity, relaxation and leisure, and social relationships, contributing to the development of depressive symptoms among students. Furthermore, smoking and substance abuse are common coping strategies adopted by dental students to relieve the stress and tensions generated by academic overload. 4 5 Thus, the challenging academic routine seems to contribute directly (through stressors) and indirectly (via a poor lifestyle) to the development of depressive symptoms among dental students.
However, psychosocial factors such as SOC may mitigate the impact of challenging situations during dental training on psychological distress. In this study, higher SOC was associated with lower scores of anxiety, stress and depression. Previous studies have already demonstrated such a relationship among dental students. 18 36 SOC represents an individual psychosocial attribute that reflects the ability to deal with adversities by mobilising available material and symbolic resources to face challenging situations in a way that promotes health. Students with a higher SOC may develop the skills and abilities to cope with daily academic tensions using resources available from their social environment. SOC has been associated with positive reframing and active coping in dealing with stressful situations among dental students. 37 Thus, a greater SOC seems to protect students' mental health, decreasing symptoms of anxiety, stress and depression, as reported in this study.
In this study, social support was not associated with self-reported depression, anxiety and stress in dental students. This result differs from previous research reporting that university students with greater social support showed lower levels of mental and psychological distress. 8 10 Possible explanations for such discrepancies may include sample characteristics, such as students from courses other than dentistry and from other countries, as well as the instrument used to assess social support. Additionally, social support was associated with the three mental health outcomes only in the unadjusted analysis; these relationships did not remain significant in the adjusted multivariable models. Thus, the lack of control for confounders in previous studies seems to be a plausible explanation for the disparities between the present findings and previous studies.
According to our findings, female participants showed higher levels of stress and anxiety than male participants. Studies involving undergraduate students in different countries have reported similar results. 1 8 10 Women tend to be more expressive about their feelings and more emotionally vulnerable owing to cultural and biological factors. 38 In the present study, there was a greater proportion of female students than male students. Even though this was expected, as an unbalanced sex distribution is common in dental courses in Brazil, 32 the levels of poor mental health in the studied sample would possibly have been lower if more male students had been included. However, this aspect did not affect the main findings, since sex was included in all the adjusted regression models.

Most studies examining the factors associated with university students' mental health have been carried out in developed countries. Also, few studies on this topic have examined the role of protective psychosocial factors in students' mental health. 1 2 This study comprehensively assessed the predictors of dental students' mental health in a developing country, including demographic aspects, perception of the academic environment, lifestyle, social support and SOC. All instruments used in the present study were validated.
The following limitations of the present study should be acknowledged. The cross-sectional design prevents causal inference between the independent variables and mental health problems. Only dental students enrolled in a public (tuition-free) university in Brazil were recruited. In addition, the study was conducted in only one dental school in south-eastern Brazil, which is considered the wealthiest region of the country. Therefore, our findings should not be generalised to dental students attending private dental schools, who pay tuition fees or obtain student loans to cover such costs, or to those enrolled in public universities located in other regions of Brazil. A small number of participants were of yellow or indigenous skin colour, which affected the precision of the estimates obtained in the regression models. Furthermore, other possible predictors of mental health status among dental students, including bullying, pre-existing mental health problems and a family history of mental illness, were not evaluated. Future longitudinal studies should assess the aforementioned predictors of students' mental health throughout the course. In addition, intervention studies should be developed to investigate possible strategies to improve the academic environment and consequently promote students' mental health.
Improving the academic environment can possibly contribute to the reduction of depression, stress and anxiety among dental students. Many academic factors can be modified, including the adoption of educational strategies that reinforce the student's SOC, resilience, autonomy, self-esteem, sense of belonging and empowerment. 39 40 This is particularly relevant to reducing mental health inequalities in universities where students come from different socioeconomic backgrounds, such as the university where this research was conducted. Instead of being a risk factor for students' poor mental health, the university environment should support the academic community to thrive, flourish and achieve psychological well-being. 40 Other promising initiatives to improve student mental health may include the adoption of individual academic tutors to support students in developing strategies for stress management throughout the course, such as life coaching programmes, 41 mindfulness-based stress reduction and deep breathing exercises. 42 In addition, screening for psychological distress at the beginning of the dental course can be useful in identifying the most vulnerable students and in directing strategies to promote their mental health and ability to cope with adversity. Furthermore, early identification of dental students with depressive symptoms is especially relevant, as it can contribute to the development of healthier lifestyle habits. Therefore, psychological support services are needed for undergraduate dental students.
CONCLUSION
The present study showed that a poorer perception of the academic environment and a lower SOC were associated with greater self-reported depression, anxiety and stress among undergraduate dental students. Furthermore, female students were more likely to report greater anxiety and stress than male students. A more favourable student lifestyle was also associated with lower depression. The educational environment, SOC and lifestyle are therefore potential targets for interventions to improve the psychological health of dental students.
Contributors Conceptualisation, Silva AN da and MVV; methodology, Silva AN da and MVV; formal analysis, Silva AN da and MVV; investigation, Silva AN da; data curation, Silva AN da and MVV; writing-original draft preparation, Silva AN da and MVV; writing-review and editing, Silva AN da and MVV; project administration, Silva AN da. All authors have read and agreed to the published version of the manuscript. Silva AN da and MVV are responsible for the overall content as guarantors.
Figure 1 Theoretical model for the study of structural and intermediary determinants of mental health outcomes, adapted from the WHO Conceptual Framework on Social Determinants of Health. 22
Table 2 Crude negative binomial regression on the relationship of socioeconomic factors, student characteristics, lifestyle and psychosocial factors with depression, anxiety and stress. BMW, Brazilian minimum wages; RR, rate ratio.
Table 3 Adjusted negative binomial regression on the relationship of socioeconomic factors, student characteristics, lifestyle and psychosocial factors with depression, anxiety and stress.